ROC Curves | Applied Machine Learning, Part 2 Video
From the series: Applied Machine Learning
Use ROC curves to assess classification models. ROC curves plot the true positive rate vs. the false positive rate for different values of a threshold.
This video walks through several examples that illustrate broadly what ROC curves are and why you'd use them. It also outlines interesting scenarios you may encounter when using ROC curves.
ROC curves are an important tool for assessing classification models. They're also a bit abstract, so let's start by reviewing some simpler ways to assess models.
Let's use an example that has to do with the sounds a heart makes. Given 71 different features from an audio recording of a heart, we try to classify whether the heart sounds normal or abnormal.
One of the easiest metrics to understand is the accuracy of a model, or, in other words, how often it is correct. The accuracy is useful because it's a single number, making comparisons easy. The classifier I'm looking at right now has an accuracy of 86.3%.
What the accuracy doesn't tell you is how the model was right or wrong. For that, there's the confusion matrix, which shows things such as the true positive rate. In this case, it is 74%, meaning the classifier correctly predicted abnormal heart sounds 74% of the time. We also have the false positive rate of 9%, which is the rate at which the classifier predicted abnormal when the heart sound was actually normal.
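To make the relationship between these numbers concrete, here is a minimal sketch with made-up confusion-matrix counts (these are hypothetical, not the counts behind the video's classifier), treating "abnormal" as the positive class:

TP = 80;   % abnormal sounds predicted abnormal
FN = 20;   % abnormal sounds predicted normal (missed)
FP = 10;   % normal sounds predicted abnormal (false alarms)
TN = 90;   % normal sounds predicted normal
accuracy = (TP + TN) / (TP + TN + FP + FN);   % how often the model is correct
tpr = TP / (TP + FN);                         % true positive rate
fpr = FP / (FP + TN);                         % false positive rate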
The confusion matrix gives results for a single model. But most machine learning models don't just classify things; they actually calculate probabilities. The confusion matrix for this model shows the result of classifying anything with a probability of 0.5 or higher as abnormal, and anything with a probability below 0.5 as normal. But that 0.5 doesn't have to be fixed; in fact, we could place the threshold anywhere in the range of probabilities between 0 and 1.
That's where ROC curves come in. The ROC curve plots the true positive rate vs. the false positive rate for different values of this threshold.
Let's look at this in more detail.
Here's my model, and I'll run it on my test data to get the probability of an abnormal heart sound. Now let's start by thresholding these probabilities at 0.5. If I do that, I get a true positive rate of 74% and a false positive rate of 9%.
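As a rough sketch of that thresholding step (the variable names and label values here are assumptions, not the video's code; probs holds the predicted probabilities of "abnormal" and ytest the true labels):

threshold = 0.5;
predAbnormal = probs >= threshold;                           % predicted positives
isAbnormal = strcmp(ytest, 'abnormal');                      % actual positives
tpr = sum(predAbnormal & isAbnormal) / sum(isAbnormal);      % true positive rate
fpr = sum(predAbnormal & ~isAbnormal) / sum(~isAbnormal);    % false positive rate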
But what if we wanted to be very conservative, so that even if the probability of a heart sound being abnormal was just 10%, we would still classify it as abnormal?
If we do that, we get this point.
What if we wanted to be really certain, and only classify sounds with a 90% probability as abnormal? Then we'd get this point, which has a much lower false positive rate, but also a lower true positive rate.
Now, if we were to sweep this threshold across a bunch of values between 0 and 1, say 1000 evenly spaced values, we would get lots of these ROC points, and that's where the ROC curve comes from. The ROC curve shows us the tradeoff between the true positive rate and the false positive rate for varying values of that threshold.
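A minimal sketch of that sweep, reusing the assumed probs and isAbnormal variables from the snippet above, might look like this:

thresholds = linspace(0, 1, 1000);    % 1000 evenly spaced threshold values
tpr = zeros(size(thresholds));
fpr = zeros(size(thresholds));
for k = 1:numel(thresholds)
    pred = probs >= thresholds(k);
    tpr(k) = sum(pred & isAbnormal) / sum(isAbnormal);
    fpr(k) = sum(pred & ~isAbnormal) / sum(~isAbnormal);
end
plot(fpr, tpr)                        % the ROC curve traced out by the sweep
xlabel('False positive rate')
ylabel('True positive rate')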
There will always be a point on the ROC curve at (0, 0), where everything is classified as "normal," and there will always be a point at (1, 1), where everything is classified as "abnormal."
The area under the curve (AUC) is a metric for how good our classifier is. A perfect classifier would have an AUC of 1. In this example, the AUC is 0.926.
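If you have the swept-out points from the sketch above, one common way to estimate the area under them is trapezoidal integration (again just a sketch, not the video's code):

[fprSorted, idx] = sort(fpr);          % order the points by false positive rate
auc = trapz(fprSorted, tpr(idx));      % area under the ROC curve, between 0 and 1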
In MATLAB, you don't need to do all of this by hand like I've done here. You can get the ROC curve and the AUC from the perfcurve function.
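A typical call looks roughly like this (perfcurve is in the Statistics and Machine Learning Toolbox; the ytest and probs variable names and the "abnormal" label are assumptions carried over from the sketches above):

[fpr, tpr, thresholds, auc] = perfcurve(ytest, probs, 'abnormal');
plot(fpr, tpr)
xlabel('False positive rate')
ylabel('True positive rate')
title(sprintf('ROC curve (AUC = %.3f)', auc))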
Now that we have that down, let's look at some interesting cases for an ROC curve:
· If a curve is all the way up and to the left, you have a classifier that, for some threshold, perfectly labeled every point in the test data, and your AUC is 1. You either have a really good classifier, or you may want to be concerned that you don't have enough data or that your classifier is overfit.
· If a curve is a straight line from the bottom left to the top right, you have a classifier that does no better than a random guess (its AUC is 0.5). You may want to try some other types of models or go back to your training data to see if you can engineer some better features.
· If a curve looks kind of jagged, that is sometimes due to the behavior of different types of classifiers. For example, a decision tree has only a finite number of decision nodes, and each of those nodes produces a specific probability. The jaggedness appears when the threshold value we talked about earlier crosses the probability at one of those nodes. Jaggedness also commonly comes from gaps in the test data.
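To see the decision-tree effect concretely, here is a hedged sketch assuming Xtrain, ytrain, Xtest, and ytest already exist; a tree produces only a handful of distinct scores (one per leaf), so the curve moves in steps:

mdl = fitctree(Xtrain, ytrain);               % fit a classification tree
[~, scores] = predict(mdl, Xtest);            % per-class scores; columns follow mdl.ClassNames
disp(unique(scores(:, 2)))                    % only a few distinct probabilities, one per leaf
[fpr, tpr, ~, auc] = perfcurve(ytest, scores(:, 2), mdl.ClassNames(2));
plot(fpr, tpr)                                % the curve steps at those few score values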
As you can see from these examples, ROC curves can be a simple yet nuanced tool for assessing classifier performance.
If you want to learn more about machine learning model assessment, check out the links in the description below.