Deep Learning for Computer Vision
Overview
While deep learning can achieve state-of-the-art accuracy for object recognition and object detection, it can be difficult to train, evaluate, and compare deep learning models. Deep learning also requires a significant amount of data and computational resources.
In this webinar, we will explore how MATLAB® addresses the most common deep learning challenges and gain insight into the procedure for training accurate deep learning models. We will cover new capabilities for deep learning and computer vision for object recognition and object detection.
Highlights
We will use real-world examples to demonstrate:
- Accessing and managing large sets of images
- Using visualization to gain insight into the training process
- Leveraging pretrained networks to perform new recognition tasks using transfer learning
- Speeding up the training process using GPUs and Parallel Computing Toolbox™
About the Presenter
Johanna Pingel joined the MathWorks team in 2013, specializing in image processing and computer vision applications with MATLAB. She has an M.S. degree from Rensselaer Polytechnic Institute and a B.A. degree from Carnegie Mellon University. She has been working in the computer vision application space for over 5 years, with a focus on object detection and tracking.
Recorded: 2 Aug 2017
Hello, my name is Johanna, here with Gabriel, and we're going to talk about deep learning for computer vision. We've got some great new demos and capabilities to show you. So let's get started.
Yeah, so we'll start off by setting some context. We've got other deep learning videos up on our website which are much shorter than this webinar, and you should definitely watch them as well. But the main thing is that we'll be going into much more depth in this webinar compared to those other videos. We're talking about deep learning for computer vision. What is deep learning? It's a type of machine learning that learns features and tasks directly from data, which could be images, text, or sounds.
Since we're discussing computer vision, we'll naturally be looking at image data. But just keep in mind that deep learning applies to many other tasks that don't deal with images.
Right. So let's look at a quick workflow of how deep learning works. Let's say we have a set of images where each image contains one of four different kinds of objects, and we want something that can automatically recognize which object is in each image. We start with labeled images, which just means that we tell the deep learning algorithm what each image contains. And with that information, it starts to understand each object's specific features and associate them with the corresponding category.
You'll note that the task is learned directly from the data, which also means that we don't have any influence over what features are being learned. You might hear this referred to as end-to-end learning, but in any case, just keep in mind that deep learning learns features directly from the data.
So that's the basic workflow of deep learning. While the concept of deep learning has been around for a while, it's become much more popular in recent times due to techniques that have massively improved the accuracy of these classifiers, to the point where they outperform people in classifying images. There are also several factors that enable deep learning, including large sets of labeled data, powerful GPUs to speed up training, and the ability to use other people's work as a starting point for training your own deep neural network, which we will talk about later.
Yes, we will. So right before we dive into things, we want to give you some background and framing for why we're doing this webinar. Deep learning is difficult. It's cutting-edge technology, and it can get complicated, whether you're dealing with network architectures, understanding how to train an accurate model, or incorporating thousands of training images.
Yeah, not to mention everyone's favorite task: trying to figure out why something isn't working.
We want MATLAB to make deep learning easy and accessible to everyone. In this webinar, along with other resources on our website, we'll explain how you can quickly get started with deep learning using MATLAB. The examples in our webinar will also demonstrate how to handle large sets of images, easily integrate GPUs to train deep learning models faster, understand what's happening inside a model as it's training, and build on models from experts in the field so you don't have to start from scratch. And with that, let's get into it.
Yeah. Let's do it. So we're going to cover three examples of deep learning: image classification using a pretrained network, transfer learning to classify new objects, and object detection in images and video. So first up is image classification using a pretrained network. I have an image here of peppers that I want to be able to classify. And believe it or not, I can do it with MATLAB in four basic lines of code.
One, import a pretrained model. Two, bring in the image. Three, resize the image. And four, classify the image.
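A minimal sketch of those four lines (assuming the AlexNet support package is installed; peppers.png is a sample image that ships with MATLAB):

    net = alexnet;                   % 1: import the pretrained model
    img = imread('peppers.png');     % 2: bring in the image
    img = imresize(img, [227 227]);  % 3: resize to the network's input size
    label = classify(net, img)       % 4: classify the image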
Nice.
So that's it.
Pretty cool.
All right, so moving on to the second demo—
He's kidding.
Yeah, I'm kidding. So we'll talk about what's going on here.
So what's this alexnet in the first line of code? Who is Alex, and why are we using his net?
So to directly answer your question, AlexNet is a convolutional neural network designed by various people, including one Alex Krizhevsky. But I should probably provide some context. There's this independent project, not related to MATLAB, that's been around for a while, called the ImageNet project. And its goal is to have a massive repository of visual content, like images, for people to use to do research and design in visual object recognition.
So, starting in 2010, they ran an annual competition called the ImageNet Large Scale Visual Recognition Challenge.
Oh, yeah. The old ILSVRC.
Yeah, that competition. So competitors submit software programs which compete to correctly classify and detect objects in the [inaudible]. Now, up until 2012, the standard way to implement computer vision was through a process called feature engineering, as opposed to AlexNet, which used and improved on methods based in deep learning. So as you can probably guess, AlexNet was submitted to the 2012 ILSVRC under the team name SuperVision, one word. And it blew the competition out of the water, which I guess could refer to both the competitors and the competition itself.
And there was a lot of hype around it, because people were realizing deep learning's not just theoretical. It's really practical, and it does things way better than what we've been doing before. So, history lesson aside, AlexNet is trained to recognize exactly 1,000 different objects, which I'm guessing had something to do with the victory conditions of the 2012 ILSVRC. It's one of several pretrained networks you can access from MATLAB, which also include VGG-16 and VGG-19.
Do we have a history lesson for those?
I will not go into a history lesson for those. So let's bring it back to our four lines of code. First, check out how MATLAB makes it dead easy to import a pretrained model. Like, it doesn't get easier than that. If you don't have AlexNet on your computer, you just need to download it once, whether it's through the Add-On manager or by using the link in the error message you get if you run the code without having downloaded it. And then you can use it for this demo and for anything else you want.
So in the second line, you're bringing in the image. That seems pretty straightforward. But why did you resize the image? The first time I did this, I tried to be all clever and do it in three lines of code.
Without the resizing?
Yeah. And I got this error, which mentioned something about size. Which means, yay, I get to figure out why it's not working.
Everyone's favorite thing to do.
So if I do net.Layers, it'll show me the architecture of the network. It looks intimidating at first, but the first layer, the input layer, has a size of 227 by 227 pixels. The 3 at the end is for the RGB channels, since this is a color photo. So seeing that, I'm like, oh, OK. Just use MATLAB to resize the image so it doesn't error out when it's passed to the network. And our final line of code can now classify the image.
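A sketch of that check, reading the required input size off the first layer instead of hard-coding it:

    net = alexnet;
    net.Layers(1)                         % ImageInputLayer with InputSize [227 227 3]
    inputSize = net.Layers(1).InputSize;  % query the size programmatically
    img = imresize(img, inputSize(1:2));  % resize height and width to match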
So you mentioned earlier that AlexNet is a convolutional neural network. What does that mean, and can I please call it CNN for short?
I mean, as long as viewers don't confuse this webinar with a certain cable news network—cable news—oh. That's what CNN stands for, doesn't it? Well, in addition to CNN being a self-referential cable news network, it's a popular architecture in deep learning for image and computer vision problems. And independent of AlexNet, the three main things to understand about CNNs are convolution, activation, and pooling.
Convolution is a mathematical operation which you might remember from whatever college course introduced you to Fourier and Laplace transforms, for better or for worse. The idea is that we put our input images through multiple transformations, and each of them extracts certain features from the image. Activation applies a transformation to the output of the convolution. One popular activation function is ReLU ("ray-loo" or "rel-you," tomato, tomahto), which simply maps negative values to zero and passes positive values through unchanged. And finally, pooling is a process where we simplify the output by carrying only one value from each region forward to the next layer, which helps reduce the number of parameters the model needs to learn.
So these three steps are repeated to form the entire CNN architecture, which can have tens or hundreds of layers, each of which learns to detect different features. One neat thing about MATLAB is that it lets you look at the feature maps. If you compare features closer to the initial layer versus features closer to the final layer, they get more and more complex, going from colors and edges to something that seems more detailed.
Let's take a look, again, at the layers of AlexNet. You can see the convolutions, activations, and pooling. Other networks will have a different configuration of these layers, but at the very end, they'll all have a final layer which performs the classification. With a few more lines of code, we can repeatedly display an image along with what AlexNet thinks it is. Sometimes it gets it right, sometimes it doesn't. But it's pretty good, as long as the object was in the original set of 1,000.
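Those few extra lines might look something like this sketch (the folder name is hypothetical):

    files = dir(fullfile('testImages', '*.jpg'));   % hypothetical folder of test images
    for k = 1:numel(files)
        img = imread(fullfile(files(k).folder, files(k).name));
        label = classify(net, imresize(img, [227 227]));
        imshow(img); title(char(label));            % show the image with AlexNet's prediction
        pause(1);                                   % brief pause before the next one
    end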
Which begs the question: what can you do if it's not?
Well, allow me to answer that by saying, that was image classification using a pretrained model. Let's move on to our second demo.
All right. In the next demo, we have video of cars driving down a highway, and we want to be able to classify them as cars, trucks, or SUVs. We're going to use AlexNet and fine-tune the network for just our categories of objects, a process called transfer learning, which can be used to classify objects not in the original network.
And there's our answer to the previous question. Quick follow-up for you: if you had a classification task where your objects happened to be among the original 1,000, is there any reason you wouldn't just use AlexNet?
Good question. The main benefit of transfer learning in that case is to have a classifier specific to your data. If you train on fewer categories, you can potentially improve the accuracy.
Makes sense.
So I took this video from my cell phone, and I was able to automatically bring it into MATLAB using IP Webcam. This let me record hours of video of cars traveling outside the office window. Now, using MATLAB and computer vision, I'm able to extract the cars from each frame of video based on their motion, using a relatively simple process called background subtraction.
And that's just a matter of looking at the pixel difference between two consecutive images and pulling out the stuff that's different enough.
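A minimal frame-differencing sketch of that idea (the file name and thresholds are made up for illustration):

    v = VideoReader('highway.mp4');          % hypothetical recording
    prev = rgb2gray(readFrame(v));
    while hasFrame(v)
        curr = rgb2gray(readFrame(v));
        moving = imabsdiff(curr, prev) > 25; % pixels that changed "enough"
        moving = bwareaopen(moving, 500);    % discard small, noisy blobs
        prev = curr;
    end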
Now, as vehicles are passing by, we want to classify them as either a car, truck, or SUV. And that's not what AlexNet thinks we're looking at. So if our current model doesn't work on our data, we need a new model. Let's say we want to classify five different kinds of vehicles: cars, trucks, large trucks, SUVs, and vans. Our plan is to use AlexNet as a starting point and use transfer learning to create a model specific to these five categories.
So for what reason would you use transfer learning as opposed to, say, training a network from scratch?
Training from scratch is definitely something you can try, and we give you all the tools in MATLAB to do it. But there are a couple of very practical reasons to do transfer learning instead. For example, you don't have to set up the network architecture by yourself, which requires a lot of trial and error to find a good combination of layers. Also, transfer learning doesn't require nearly as many images to build an accurate model compared to training from scratch. And finally, you can leverage knowledge and expertise from top researchers in the deep learning field who have spent much more time training models than we have.
Sounds good.
So here are five folders containing lots of images of our five categories. We want a simple way to bring this data in to pass to our deep learning algorithm. Earlier, Gabriel used imread to bring in the image of peppers. But we don't want to have to do that for every image. Instead, I'm going to use a function called imageDatastore, which is an efficient way of bringing in data.
And we should note that there are many different kinds of datastores within MATLAB for different big data and data analytics tasks. So it's not just for images. If you have lots of data, datastores are your friend.
So once I point imageDatastore at my folders, it's going to automatically label all my data based on the names of the folders containing the images. So there's no need to do it one by one. Once I've done that, I have access to useful functionality, like seeing how many images I have for each category, and being able to quickly split my images into a training set and a test set.
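A sketch of that setup (the folder name and split ratio are illustrative):

    imds = imageDatastore('vehicles', ...
        'IncludeSubfolders', true, ...
        'LabelSource', 'foldernames');    % labels come from the folder names
    countEachLabel(imds)                  % how many images per category
    [trainSet, testSet] = splitEachLabel(imds, 0.8, 'randomized');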
If you need to, you can also specify a custom read function. imageDatastore uses imread by default to read in all the images, which is great for standard image formats. But if you happen to have non-standard image formats that imread doesn't know how to handle, you just write your own function, pass it to imageDatastore, and then you're good to go.
And even if you do have standard image formats, you can make a custom read function that does image preprocessing, like resizing, sharpening, or denoising. In our case, using AlexNet, we need to resize the images to 227 by 227. So we use this custom read function here.
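For example, a resize-on-read function could be wired in like this sketch:

    imds.ReadFcn = @(file) imresize(imread(file), [227 227]);  % resize every image as it's read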
So I notice that you're not doing a straight-up resize. It looks like you're padding the image. What's the reason for that?
This was just from personal experience. I tried resizing the images, and the network wasn't doing very well. And when I looked at the images myself, I couldn't tell the difference between the cars and the SUVs. So I did something that has the same effect as cropping the image while maintaining the aspect ratio. And since that helps preserve the structural differences, I figured it might help the network. So, earlier you saw that AlexNet does a poor job of classifying our cars and trucks on its own. We need to fine-tune the network.
If we look at the layers, you can see the final fully connected layer representing the 1,000 categories that AlexNet was trained on. To perform transfer learning, we replace the 1,000 with five, for our five categories of objects. And then this line resets the classification, which means: forget the names of those 1,000 objects you learned; you only care about these five new ones.
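A sketch of those two changes. In MATLAB's AlexNet, the final fully connected layer and the classification layer are the 23rd and 25th of 25 layers, but check net.Layers yourself rather than trusting these indices:

    layers = net.Layers;                  % copy AlexNet's layer array
    layers(23) = fullyConnectedLayer(5);  % 5 outputs instead of 1,000
    layers(25) = classificationLayer;     % fresh output layer with no old class names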
And this is the only core change you need to make?
Yep. That's all the network manipulation you need to do. If you ran this, you would get a classifier which would output one of those five objects.
So I guess the question is, how well does it do?
We trained this network beforehand, and it actually got really good results, like 97% accuracy.
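For reference, the training and evaluation steps are each roughly one call. This sketch assumes the datastore and modified layers from above; the option values are placeholders:

    opts = trainingOptions('sgdm', ...
        'InitialLearnRate', 1e-4, ...    % small rate so the pretrained weights shift gently
        'MaxEpochs', 20);
    vehicleNet = trainNetwork(trainSet, layers, opts);
    predictions = classify(vehicleNet, testSet);
    accuracy = mean(predictions == testSet.Labels)  % fraction classified correctly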
That's pretty impressive for, like, two minor changes to the code.
But let's be honest, you might not get to that point right away. Remember that AlexNet was trained on millions of images, including some vehicles. So it's reasonable to assume that it happened to transfer over very smoothly to our data. But if you were to do transfer learning on other images that are very different from the original set, you might have to make some more changes.
Makes sense. So what are some things people can try if they find themselves with subpar accuracy?
There's a lot you can try, so we'll go into rapid-fire mode. You can follow along with this slide. First of all, there are some things you can do before you even start changing parameters. Check your data. I can't emphasize this enough. Initially, my trained model was misclassifying a lot of images, and I realized some of my data was in the wrong folders. Obviously, if your setup isn't accurate, whether it's wrong folders or bad training data, you're not going to get very far.
Next, try getting more data. Sometimes the classifier needs more images to understand the problem better. And finally, try a different network. We're working with AlexNet, but as we mentioned, there are other networks available to you, and it's possible that a different CNN may offer better results.
Sounds good. So let's say I'm pretty sure I have my setup correct. What can I do now?
Now it's a matter of altering the network and the training process. Let's start with the network. Changing the network means adding, removing, or modifying layers. You could add another fully connected layer to the network, which increases its non-linearity and could help improve the accuracy, depending on the data. You can also raise the learning rates of your new layers so that they learn faster than the earlier, original layers of the network. This is useful if you want to preserve the rich features the network previously learned from its original data.
As for changing the training process, it's a matter of changing training options. You can try more training epochs, fewer epochs, and other options as well, for which you can find documentation on our website.
So is it fair for me to say this: all of these options treat the network like a black box. If you train it and it's not very good, then you throw one of these modifications at it, tell it to start training, wait out the full training time, and then you find out whether it actually made things better or worse. So is there anything we can do, say, in the middle of the process?
Absolutely. We have a set of output functions that can show us what's happening in the network as it's training. The first one plots the accuracy of the network as it trains. Ideally, you want to see the accuracy trend upward over time. And if that's not what you see, you can stop the training and try to fix it before you potentially waste hours training something that isn't improving. You can also stop the training early based on certain conditions. Here, I'm telling the network to stop if I reach an accuracy of 99.5%.
And I'm guessing that's so you don't overtrain slash overfit the network.
Yep. We also have the concept of checkpoints. You can stop the network training at a specific point, see how well it does on a test set, and then, if you decide it needs more training, you don't have to start from the beginning. You can just pick up the training where you left off. And as you might expect, there is documentation on our website for our many different training options. If you take a look here, you can see the options I just outlined: plotting training accuracy, and here, stopping at a specified accuracy. So definitely try out these examples.
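As a sketch, the stop-at-accuracy condition can be expressed as an output function. The 99.5 threshold is from the demo; the checkpoint folder and everything else here is illustrative:

    stopFcn = @(info) strcmp(char(info.State), 'iteration') && ...
                      info.TrainingAccuracy >= 99.5;     % return true to stop training early
    opts = trainingOptions('sgdm', ...
        'CheckpointPath', 'checkpoints', ...             % save snapshots so training can resume
        'OutputFcn', stopFcn);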
Yes, please, copy-paste this code. There are people out there who say, never copy-paste code you find on the internet. And I get what they mean: don't blindly copy stuff and expect it to just work. But seriously, guys, let he who is without copy-pasted internet code cast the first error message.
You should definitely copy our code. It's nice not having to write all that code out yourself, and it gives you some great starting points for better control over the training process.
So let's say I'm really hardcore about getting my network fine-tuned, and I want to remove the black-box aspect of the network as much as possible. I imagine you probably can't directly see what the network sees. But how can we start getting a more intimate understanding of our network?
One thing you can do is visualize what the network is finding as features in our images. We can look at the filters, and we can look at the result of an image after those filters have been applied. In the first convolution, we see we're extracting edges and dark and light patterns. They might be very apparent, or not so much. It all depends on how strong those features are in the image.
So you can do this with any layer of your network?
Yep. Let's take a look at another one. The output of the fourth convolution for this image produces something more abstract, but with interesting features. You could make the assumption that this particular channel is finding the wheels and the bumper of the car as features. To test our theory, let's try another image, where the back wheel isn't visible on the left side of the image. If our assumption is correct, then the output of this channel shouldn't activate as much on the left side of the image. And that's what we're seeing.
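A sketch of that feature-map inspection, using the layer names inherited from AlexNet ('conv1' here; pick any layer listed in net.Layers):

    act = activations(vehicleNet, img, 'conv1');          % feature maps after the first convolution
    act = reshape(act, size(act,1), size(act,2), 1, []);  % arrange as one image per channel
    montage(mat2gray(act))                                % tile the channels for viewing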
Nice. So if any of you want to debug your network, this technique gives you a visual representation of what your network sees, and it might help you get a better understanding of what's going on.
Yes. And all the code is in the documentation. The example on the website goes through finding features in a face, but it's the same concept. We'll look at one more tool that you might find useful, called Deep Dream. Deep Dream can be used to make the very interesting, artsy images you might have seen online. But it's another tool we can use to understand the network. Deep Dream outputs an image representing the features the network has learned throughout the training process.
So one way of understanding this is to say: instead of giving the network an image and having it connect it to a class, let's go in reverse. We give the network a class, and we have it give us an image. So why is this helpful?
Let's look at the documentation. Neural Network Toolbox has a great page on deep learning. One of the concepts here is Deep Dream, with an example of using AlexNet with Deep Dream. You can see here I'm asking for a hen, one of the categories AlexNet was trained on. And Deep Dream gives me a somewhat abstract version of what a hen looks like to it. And we can create Deep Dream images for any of the categories in our network.
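A sketch of that call; 'fc8' is AlexNet's final fully connected layer, and the class lookup assumes AlexNet's 1,000 class names:

    net = alexnet;
    henIdx = find(strcmp(net.Layers(end).ClassNames, 'hen'));  % channel for the 'hen' class
    I = deepDreamImage(net, 'fc8', henIdx);   % synthesize what 'hen' looks like to the network
    imshow(I)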
So if we were to see something that doesn't look like the category, we can assume our network might not be learning our categories correctly.
Yes, it might be an issue with the training data. Let me give you an example. AlexNet's original 1,000 categories include a squirrel category, and I happen to have a bunch of pictures of squirrels, so we can try them out on our network. We see all the predictions are correct, except this one. If we look at Deep Dream for squirrel, what do we see? And how about for hare, which is what it was mistaken for? There are some vibrant colors that correspond well to the first few images we tried out. You can see features associated with the tail. And these are strong features that this one image doesn't have.
And from that, I guess we could add more test images that contain those types of features, or the lack thereof, to our network.
So now you have enough to get started with deep learning, and more specifically, transfer learning. But we're not completely done with our example. Remember that video we showed a while back of cars driving down the road? We tried to classify it with AlexNet, which is why we went through all the trouble of creating our own custom model. Using the same algorithm as before to detect the cars in the image, I can now classify them using our model. And we can see what our model thinks they are, and the confidence of that prediction.
Very nice.
So that was getting started with transfer learning, plus a lot of tips and tricks for understanding your network and making improvements. And we hope you've seen how MATLAB makes it easy to handle large sets of images, access models from experts in the field, visualize and debug the network, and accelerate deep learning with GPUs.
Wait, you totally didn't cover that last one.
Ah, so you were paying attention.
Yes, I was.
Yeah, we didn't explicitly cover it. But if you look carefully at the training clips, the output messages indicated that we were training on a single GPU, an NVIDIA® GPU with compute capability 3.0, which is the minimum requirement for using a GPU for deep learning. And the beauty of GPU computing with MATLAB is that it's all handled behind the scenes. You, as a user, don't have to worry about it. MATLAB uses a GPU by default if you have one, and none of the functions change whether you're using a GPU, a cluster of GPUs, GPUs in the cloud, or even a CPU.
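If you do want explicit control, there's a training option for it; a one-line sketch:

    opts = trainingOptions('sgdm', ...
        'ExecutionEnvironment', 'auto');  % or 'gpu', 'multi-gpu', 'parallel', 'cpu'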
Can you use, like, a CPU for training? I like how you went from big, bigger, biggest, and then shrank down to bare-bones computation.
Yes, technically you can use a CPU. But take a look at this time-lapse video of trying to train the same deep learning algorithm on a CPU versus a GPU.
Wow. That's very unimpressive.
Yeah. And this applies to any part of the process, whether you're training, testing, or visualizing a network. So if a CPU is your only option, then go for it. But we encourage you to use a GPU for training, or at least to make sure you go for long coffee breaks while training models.
All right. So for our final demo, we'll talk about a somewhat more challenging problem that's often been brought to our attention. Take a look at this image here. If we present it to our network, what will it think it is? In any case, up till now, we've only shown examples of classifying the entire image into one category. But in this image, there are clearly multiple kinds of vehicles in multiple locations, and the network we trained isn't able to tell us that.
This classic problem is called object detection: locating objects in a scene. In this example, we're looking at the backs of several vehicles, and our goal is to detect them. So we need to create an object detector that recognizes the objects we care about. Now, how should we go about doing that?
Well, the theme of this webinar has been deep learning, so how about deep learning?
Fantastic. So if we're going to train a vehicle detector to recognize cars from behind, it'll need lots of images for training. Now, the issue is that our image data hasn't been cropped down to the individual cars, which means at first glance we'll have to go through the tedious task of cropping and labeling all of our images from scratch. How long is this webinar supposed to be?
30 minutes or less.
I don't think we can do that. Unless we have MATLAB. Yay. I'm sorry. So, MATLAB has built-in apps to help you with this process. For one, you can quickly go through all your data and draw bounding boxes around the objects in the scene. Now, even though that's better than manual cropping, you don't want to have to do it 100 or 1,000 times. So if you have a video or an image sequence, MATLAB can automate the process of labeling objects in the scene.
In the first frame of the video, I specify where the object is, and MATLAB will track it throughout the entire video. And just like that, I have hundreds of newly labeled backs of cars, without having to do it 100 times. So now we have all of our images with bounding boxes around the objects we care about. And again, for real-world, robust solutions, you'll need thousands or millions of examples of the objects. So imagine trying to do that manually, without the app.
Back to deep learning. We're going to use a CNN to train the object detector. We could totally import a pretrained CNN like we did before, and that would totally work. But to show you something new, we're going to create a CNN architecture from scratch. We won't type out everything in real time, but creating a CNN from scratch in MATLAB is just a matter of convolution, activation, and pooling layers—the three things you talked about before.
And that's what we have right here in sequence, sketched below. You get to decide on the number of filters to use. And since we'll make all this code available to you, feel free to use it and get your feet wet with creating your own CNN from scratch. So now it's time to train our detector. With MATLAB's computer vision tools, we actually have a couple of object detectors you can choose from. And what's nice is that you can use the same training data for any one of them. So as you can see from this code, you can try out all of them very simply and see how they do.
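A sketch of such an architecture; the input size and filter counts are illustrative, not the demo's exact values:

    layers = [
        imageInputLayer([32 32 3])               % small cropped-vehicle patches
        convolution2dLayer(3, 16, 'Padding', 1)  % convolution
        reluLayer                                % activation
        maxPooling2dLayer(2, 'Stride', 2)        % pooling
        convolution2dLayer(3, 32, 'Padding', 1)
        reluLayer
        maxPooling2dLayer(2, 'Stride', 2)
        fullyConnectedLayer(2)                   % vehicle vs. background
        softmaxLayer
        classificationLayer];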
And we have documentation for these detectors, which provides recommendations for which one to use in certain scenarios. So be sure to look at that if you plan to use object detection.
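Training and running one of those detectors (an R-CNN here) might look like this sketch, where vehicleData stands in for the labeled table exported from the app and opts for your training options:

    detector = trainRCNNObjectDetector(vehicleData, layers, opts);  % same data works for the other detectors
    [bboxes, scores] = detect(detector, testImage);                 % find vehicles in a new image
    annotated = insertObjectAnnotation(testImage, 'rectangle', bboxes, scores);
    imshow(annotated)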
Yeah. So we've trained our detector, and we'll try it out on a sample image. You can see the results right here. Looks pretty good. But for a more impressive demo, let's try it out on a video. There it goes, as you can see, driving down the highway, and it's classifying all the cars. It's pretty nifty. And for the advanced user, you have access to helper functions to get a better understanding of its performance.
That's how MATLAB makes it easy to do object detection, from quickly labeling your data with built-in apps to training your algorithms with deep learning and other computer vision tools. To wrap things up, keep in mind that, although we used a lot of vehicles in our examples, MATLAB and deep learning are not limited to classifying vehicles. So whether it's people's faces, dog breeds, or a giant squirrel collection, you can do it easily with MATLAB.
I want to quickly call out our support for solving regression problems with deep learning, which means that instead of outputting a class or category, you can output a numeric value. We have some examples of this, where you can detect lane boundaries on the road. And for those of you tired of hearing about cars, we have one where we predict facial key points, which could be used to predict a person's facial expressions.
So today we saw some of the new things you can do with MATLAB and deep learning. And we hope you were able to clearly see how MATLAB makes the daunting task of deep learning much easier. So be sure to check out all the code used in our webinar and try it out on your own data.
And if you go to the Add-On manager, where you get our pretrained networks, you can find in the same place some other resources to get up and running with deep learning, including a video that shows how to use MATLAB to quickly classify objects with a webcam.
Check out our other resources on our website for getting started with deep learning, and feel free to email us with any questions at .