Transfer Learning for Training Deep Learning Models
Transfer learning is a deep learning approach in which a model that has been trained for one task is used as a starting point for a model that performs a similar task. Updating and retraining a network with transfer learning is usually much faster and easier than training a network from scratch. The approach is commonly used for object detection, image recognition, and speech recognition applications, among others.
Transfer learning is a popular technique because:
- It enables you to train models with less labeled data by reusing popular models that have already been trained on large datasets.
- It can reduce training time and computing resources. With transfer learning, the weights are not learned from scratch because the pretrained model has already learned them during its original training.
- You can take advantage of model architectures developed by the deep learning research community, including popular architectures such as GoogLeNet and ResNet.
Pretrained Models for Transfer Learning
At the center of transfer learning is the pretrained deep learning model, built by deep learning researchers, that has been trained using thousands or millions of sample training images.
Many pretrained models are available, and each has advantages and drawbacks to consider:
- Size: What is the desired memory footprint for the model? The importance of your model's size will vary depending on where and how you intend to deploy it. Will it run on embedded hardware or a desktop? The size of the network is particularly important when deploying to a low-memory system.
- Accuracy: How well does the model perform before retraining? Typically, a model that performs well on ImageNet, a commonly used dataset containing a million images across a thousand classes, will also perform well on new, similar tasks. However, a low accuracy score on ImageNet does not necessarily mean the model will perform poorly on all tasks.
- Prediction speed: How fast can the model predict on new inputs? While prediction speed depends on factors such as hardware and batch size, it also depends on the architecture and size of the chosen model.
You can use MATLAB and Deep Learning Toolbox to access pretrained networks from the latest research with a single line of code. The toolbox also provides guidance on selecting the right network for your transfer learning project.
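For example, here is a minimal sketch of loading a pretrained network (assuming the free GoogLeNet support package for Deep Learning Toolbox is installed):

```matlab
% Load a pretrained GoogLeNet (requires the Deep Learning Toolbox
% Model for GoogLeNet Network support package).
net = googlenet;

% Inspect the architecture and check the expected input image size.
analyzeNetwork(net)
inputSize = net.Layers(1).InputSize
```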
Which Model Is Best for Your Transfer Learning Application?
When choosing a model, it's important to keep in mind the tradeoffs involved and the overall goals of your specific project. A network with relatively low accuracy, for example, may be perfectly suitable for a new deep learning task. A good approach is to try a variety of models to find the one that fits your application best.
Simple models for getting started. With simple models, such as AlexNet, GoogLeNet, VGG-16, and VGG-19, you can iterate quickly and experiment with different data preprocessing steps and training options. Once you see what settings work well, you can try a more accurate network to see if that improves your results.
Lightweight and computationally efficient models. SqueezeNet, MobileNet-v2, and ShuffleNet are good options when the deployment environment places limitations on model size.
You can use Deep Network Designer to quickly evaluate various pretrained models for your project and better understand the tradeoffs between different model architectures.
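The app opens directly from the MATLAB command line:

```matlab
% Open the Deep Network Designer app to browse available
% pretrained networks and compare their architectures interactively.
deepNetworkDesigner
```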
The Transfer Learning Workflow
While there is great variety in transfer learning architectures and applications, most workflows follow a common series of steps; MATLAB sketches of the steps appear after the list.
- Select a pretrained model. When getting started, it can help to select a relatively simple model. This example uses GoogLeNet, a popular network, 22 layers deep, that has been trained to classify 1,000 object categories.
- Replace the final layers. To retrain the network to classify a new set of images and classes, you replace the last layers of the GoogLeNet model. The final fully connected layer is modified to contain the same number of nodes as the number of new classes, and a new classification layer is added that produces an output based on the probabilities calculated by the softmax layer. After modifying the layers, the final fully connected layer specifies the new number of classes the network will learn, and the classification layer determines outputs from the new output categories available. For example, GoogLeNet was originally trained on 1,000 categories, but by replacing the final layers you can retrain it to classify only the five (or any other number of) categories of objects you are interested in.
- Optionally freeze the weights. You can freeze the weights of earlier layers in the network by setting the learning rates in those layers to zero. During training, the parameters of frozen layers are not updated, which can significantly speed up network training. If the new data set is small, freezing weights can also prevent the network from overfitting to the new data set.
- Retrain the model. Retraining updates the network to learn and identify features associated with the new images and categories. In most cases, retraining requires less data than training a model from scratch.
- Predict and assess network accuracy. After the model is retrained, you can classify new images and evaluate how well the network performs.
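For steps 1 and 2, the sketch below loads GoogLeNet and swaps out its final layers; the class count of five is a placeholder for your own number of categories:

```matlab
% Step 1: load the pretrained network and convert it to a layer graph.
net = googlenet;
lgraph = layerGraph(net);

% Step 2: replace the final fully connected layer and the classification
% layer. In Deep Learning Toolbox, GoogLeNet names these layers
% 'loss3-classifier' and 'output'.
numClasses = 5;  % placeholder: the number of new classes
newFC = fullyConnectedLayer(numClasses, ...
    'Name','new_fc', ...
    'WeightLearnRateFactor',10, ...  % let the new layer learn faster
    'BiasLearnRateFactor',10);
lgraph = replaceLayer(lgraph,'loss3-classifier',newFC);
lgraph = replaceLayer(lgraph,'output', ...
    classificationLayer('Name','new_classoutput'));
```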
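For step 3, one way to freeze layers is to zero the learn-rate factors of the earliest layers; the choice of the first ten layers here is arbitrary:

```matlab
% Step 3 (optional): freeze early layers so their weights are not
% updated during retraining.
for i = 1:10
    layer = lgraph.Layers(i);
    if isprop(layer,'WeightLearnRateFactor')
        layer.WeightLearnRateFactor = 0;
        layer.BiasLearnRateFactor = 0;
        % Swap the modified copy back into the layer graph.
        lgraph = replaceLayer(lgraph,layer.Name,layer);
    end
end
```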
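Steps 4 and 5 then come down to a call to trainNetwork followed by classify. The image folder 'flowerPhotos' is hypothetical; its subfolder names are assumed to supply the class labels:

```matlab
% Prepare the new image data, resizing on the fly to GoogLeNet's
% 224-by-224-by-3 input size.
imds = imageDatastore('flowerPhotos', ...
    'IncludeSubfolders',true,'LabelSource','foldernames');
[imdsTrain,imdsVal] = splitEachLabel(imds,0.7,'randomized');
augTrain = augmentedImageDatastore([224 224],imdsTrain);
augVal   = augmentedImageDatastore([224 224],imdsVal);

% Step 4: retrain with a low initial learning rate so the pretrained
% weights are only fine-tuned, not relearned.
options = trainingOptions('sgdm', ...
    'InitialLearnRate',1e-4, ...
    'MaxEpochs',6, ...
    'MiniBatchSize',10, ...
    'ValidationData',augVal, ...
    'Plots','training-progress');
netTransfer = trainNetwork(augTrain,lgraph,options);

% Step 5: classify held-out images and measure accuracy.
YPred = classify(netTransfer,augVal);
accuracy = mean(YPred == imdsVal.Labels)
```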
Training from Scratch or Transfer Learning?
The two commonly used approaches for deep learning are training a model from scratch and transfer learning.
Developing and training a model from scratch works better for highly specific tasks for which preexisting models cannot be used. The downside of this approach is that it typically requires a large amount of data to produce accurate results. If you're performing text analysis, for example, and you don't have access to a pretrained model for text analysis but you do have access to a large number of data samples, then developing a model from scratch is likely the best approach.
Transfer learning is useful for tasks such as object recognition, for which a variety of popular pretrained models exist. For example, if you need to classify images of flowers and you have a limited number of flower images, you can transfer weights and layers from an AlexNet network, replace the final classification layer, and retrain your model with the images you have.
In such cases, it is possible to achieve higher model accuracy in a shorter time with transfer learning.
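Because AlexNet is a series network, its layers form a plain array, which makes this even simpler; the sketch below assumes a placeholder count of five flower classes:

```matlab
% Transfer all layers from a pretrained AlexNet (requires its support
% package) except the last three, which are specific to ImageNet.
net = alexnet;
layersTransfer = net.Layers(1:end-3);

% Append new final layers sized for the flower classes.
numClasses = 5;  % placeholder: number of flower categories
layers = [
    layersTransfer
    fullyConnectedLayer(numClasses, ...
        'WeightLearnRateFactor',20,'BiasLearnRateFactor',20)
    softmaxLayer
    classificationLayer];
```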
An Interactive Approach to Transfer Learning
Using Deep Network Designer, you can complete the entire transfer learning workflow – including importing a pretrained model, modifying the final layers, and retraining the network using new data – with little or no coding.
For more information, see Deep Learning Toolbox and Computer Vision Toolbox™.
Learn More About Transfer Learning
See also: Deep Learning, Convolutional Neural Networks, GPU Coder, Artificial Intelligence, Biomedical Signal Processing