Model Interpretability in MATLAB
From the series: Machine Learning in Finance
Interpretable machine learning (or, in deep learning, "explainable AI") provides techniques and algorithms that overcome the black-box nature of AI models. By revealing how various features contribute (or do not contribute) to predictions, you can validate that the model is using the right evidence for its predictions and reveal model biases that were not apparent during training.
Get an overview of model interpretability and the use cases it addresses. For engineers and scientists who are interested in adopting machine learning but wary of black-box models, we explain how interpretability can satisfy regulations, build trust in machine learning, and validate that models are working. That is particularly important in industries like finance and medical devices, where regulations set strict guidelines. We provide an overview of interpretability methods for machine learning and show how to apply them in MATLAB®. We demonstrate interpretability in the context of a medical application: classifying heart arrhythmia based on ECG signals.
In recent years, we have seen AI and machine learning algorithms match or surpass human performance in many intelligence tasks, such as medical imaging diagnosis and operating motor vehicles. However, what is missing at the heart of these achievements is an intuitive understanding of how these algorithms work.
This video explains why interpretability is important, describes what methods exist for interpretability, and demonstrates how to use these techniques in MATLAB. Specifically, we will look at LIME, partial dependence plots, and permuted predictor importance. We will examine interpretability in the context of classifying electrocardiograms (ECGs), but the techniques described can be applied to any model, and a medical background is not required to follow along with this video.
Why do we need interpretability? To start, machine learning models are not straightforward to understand, and more accurate models are usually less interpretable. Further, interpretability methods are needed to help navigate regulatory hurdles in the medical, finance, and security industries.
Interpretability is also needed to ensure that models are using the right evidence and to reveal biases in the training data. A recent high-profile failure of AI was in credit card scoring, where an algorithm reportedly gave higher credit limits to men than to women. This could be due to biases in the training data, biases in the real-time data, or something else. Interpretability methods help us prevent these issues.
For our example, you will apply interpretability to machine learning models trained to classify heartbeats as either abnormal or normal based on ECG data from two publicly available databases. The ECG represents the heart's response to electrical stimulation from the sinus node and is typically decomposed into QRS waves. We use MATLAB's Wavelet Toolbox to automatically extract the locations of the QRS waves from the raw signal data, and from there we extract eight features around the R peaks to be used for training.
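As a rough illustration of that step, here is a minimal sketch of wavelet-based R-peak detection using the MODWT; the signal name ecg, the sampling rate fs, the wavelet levels, and the peak-detection thresholds are illustrative assumptions rather than the exact settings used in the video.

fs = 360;                               % assumed sampling rate in Hz
wt = modwt(ecg, 'sym4', 5);             % maximal overlap discrete wavelet transform
wtrec = zeros(size(wt));
wtrec(4:5, :) = wt(4:5, :);             % keep the scales that capture most of the QRS energy
qrsSignal = abs(imodwt(wtrec, 'sym4')).^2;   % reconstruct a QRS-emphasized signal
[~, rLocs] = findpeaks(qrsSignal, ...
    'MinPeakHeight', 0.35*max(qrsSignal), ...
    'MinPeakDistance', round(0.3*fs));  % enforce a refractory period between beats
rrIntervals = diff(rLocs)/fs;           % example feature: RR intervals in seconds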
Once we have the features, we can train models quickly using the Classification Learner app. Here, we trained a decision tree as an example of an inherently interpretable model, alongside two more complex ones. If accuracy were all that mattered, we would simply pick the highest-performing model. However, in situations such as predicting end-of-life care, interpretability is of great importance, and we will want to make sure that the model is making predictions using the right evidence and also understand the situations in which the model may err.
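The same models can also be trained programmatically. The sketch below is one way to do it, assuming the extracted features are stored in a table named features with a response variable Class; the table and variable names, kernel choice, and cross-validation settings are illustrative assumptions, not the exact Classification Learner session shown in the video.

rng(0);                                          % for reproducibility
tree   = fitctree(features, 'Class');            % inherently interpretable model
forest = fitcensemble(features, 'Class', 'Method', 'Bag');   % random forest (bagged trees)
svmMdl = fitcsvm(features, 'Class', 'KernelFunction', 'gaussian', 'Standardize', true);

cvLoss = @(mdl) kfoldLoss(crossval(mdl, 'KFold', 5));        % 5-fold cross-validation loss
accuracy = 1 - [cvLoss(tree), cvLoss(forest), cvLoss(svmMdl)]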
Using MATLAB's permuted predictor importance function, we see that for our best-performing model, the random forest, the amplitudes of the R waves are included among the important predictors. We can then use MATLAB's partial dependence plots to quantify the effect of the R amplitude on the model output. We see that as the amplitude approaches 0, it contributes about a 5% change in the probability of outputting an abnormal heartbeat classification.
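In code, those two steps might look like the sketch below, which assumes the bagged ensemble forest from the earlier snippet; the predictor name 'Ramp0' and the class label 'Abnormal' are placeholders for whatever names your feature table actually uses.

imp = oobPermutedPredictorImportance(forest);    % permuted (out-of-bag) predictor importance
bar(imp)
xticklabels(forest.PredictorNames)
ylabel('Permuted predictor importance')

plotPartialDependence(forest, 'Ramp0', 'Abnormal')   % effect of R amplitude on the abnormal-class score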
However, this contradicts our domain knowledge. Experts say that R-amplitude levels should have little effect on the classification of a heartbeat, so we want to ensure that these biases in the data are not carried into our model. Next, we retrain our models without the amplitudes as predictors. Once we have removed the bias, we can see how our new decision tree works on a global level. Instead of paying attention to R amplitudes, the tree considers the RR0 and RR2 intervals to be the most important predictors.
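One way to drop the amplitude features and retrain is sketched below; the column names 'Ramp0' and 'Ramp1' are assumed placeholders for the amplitude predictors in the feature table.

featuresNoAmp = removevars(features, {'Ramp0', 'Ramp1'});    % drop the amplitude predictors
tree2   = fitctree(featuresNoAmp, 'Class');
forest2 = fitcensemble(featuresNoAmp, 'Class', 'Method', 'Bag');
svm2    = fitcsvm(featuresNoAmp, 'Class', 'KernelFunction', 'gaussian', 'Standardize', true);

view(tree2, 'Mode', 'graph')                                 % inspect the retrained tree globally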
For more complex models like our random forest, we again use partial dependence plots to see how our most important predictors affect the model. We see that extremely short RR1 intervals generally lead to a higher probability of an abnormal heartbeat classification. Intuitively, this makes sense.
We can also use partial dependence plots to compare different models. Looking at the same feature for the SVM shows a trend similar to our random forest. However, the plot is far smoother, suggesting that the SVM is less sensitive to variance in the input data, making it a more interpretable model.
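To overlay the two curves on the same axes, one option is the partialDependence function (available in R2020b and later), sketched below with the placeholder predictor name 'RR1' and class label 'Abnormal'.

[pdForest, xForest] = partialDependence(forest2, 'RR1', 'Abnormal');
[pdSvm, xSvm]       = partialDependence(svm2, 'RR1', 'Abnormal');

plot(xForest, pdForest, 'DisplayName', 'Random forest'); hold on
plot(xSvm, pdSvm, 'DisplayName', 'SVM'); hold off
xlabel('RR1 interval'); ylabel('Partial dependence of the abnormal-class score')
legend show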
Beyond understanding how these models work on a global scale, other situations may call for us to understand how they work for individual predictions. LIME is a technique that looks at the data points and model predictions around a point of interest. From there, it builds a simple linear model that acts as an approximation of our complex one. The coefficients of this approximate linear model are used as proxies for how much each feature contributes to predictions around our point of interest.
Let's look at an observation that our SVM misclassifies as normal. We see that the value of RR0 in this observation is 0.0528, and from our partial dependence plots earlier, we note that at values around 0.05 the probability of predicting an abnormal heartbeat goes down. We can also see that LIME places a high negative weight on RR0. The high value of RR0 and the negative weighting drive down the probability of predicting an abnormal heartbeat, explaining our misclassification.
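A minimal sketch of that workflow with the lime function (R2020b and later) is shown below, assuming svm2 from the earlier snippets and a queryPoint holding the misclassified observation (for example, one row of the feature table); the number of important predictors is an illustrative choice.

explainer = lime(svm2);                      % LIME explainer built from the trained SVM
explainer = fit(explainer, queryPoint, 4);   % fit a simple linear model around the query point
                                             % using 4 important predictors
plot(explainer)                              % bar chart of the local feature weights

[explainer.BlackboxFitted, explainer.SimpleModelFitted]   % check that the two predictions agree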
However, there are some limitations. LIME is an approximation of our model and is by no means an exact representation of how the model works. To illustrate this, we can see that there are situations where the prediction of our complex model does not match that of the approximation. When this happens, try running the LIME algorithm again with different parameters, such as increasing the number of important predictors, until the predictions agree.
We have demonstrated how to use interpretability techniques in MATLAB and can now use interpretability to compare different models, reveal data biases, and understand why predictions go wrong. Even without a data science background, we can all be a part of the movement to make machine learning explainable. See the links below for more information about any of the techniques introduced in the video. Similar interpretability techniques also exist for neural networks, so please be sure to check out those resources as well.