Preprocess Data for Domain-Specific Deep Learning Applications
Data preprocessing is used for training, validation, and inference. Preprocessing consists of a series of deterministic operations that normalize or enhance desired data features. For example, you can normalize data to a fixed range or rescale data to the size required by the network input layer.
Preprocessing can occur at two stages in the deep learning workflow.
Commonly, preprocessing occurs as a separate step that you complete before preparing the data to be fed to the network. You load your original data, apply the preprocessing operations, then save the result to disk. The advantage of this approach is that the preprocessing overhead is incurred only once, and the preprocessed images are then readily available as a starting point for all future trials of training a network.
If you load your data into a datastore, then you can also apply preprocessing during training by using the `transform` and `combine` functions. The transformed images are not stored in memory. This approach avoids writing a second copy of training data to disk, and is convenient when your preprocessing operations are not computationally expensive and do not noticeably slow training of the network.
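As a minimal sketch of this during-training approach, you can wrap a datastore with `transform` so that preprocessing runs as each image is read. The folder name and the 224-by-224 target size are placeholder choices; `im2single` and `imresize` require Image Processing Toolbox.

```matlab
% Load images into a datastore and apply on-the-fly preprocessing.
% "pathToImages" is a placeholder folder of training images.
imds = imageDatastore("pathToImages");

% transform applies the function to each image as it is read,
% without writing preprocessed copies to disk.
tds = transform(imds,@(I) im2single(imresize(I,[224 224])));

% Reading from the transformed datastore returns preprocessed images.
I = read(tds);
```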
Data augmentation consists of randomized operations that are applied to the training data while the network trains. Augmentation increases the effective amount of training data and helps make the network invariant to common distortions in the data. For example, you can add artificial noise to the training data so that the network becomes invariant to noise.
To augment training data, start by loading your data into a datastore. Some built-in datastores apply a specific and limited set of augmentations for particular applications. You can also apply your own set of augmentation operations to data in the datastore by using the `transform` and `combine` functions. During training, the datastore randomly perturbs the training data for each epoch, so that each epoch uses a slightly different data set.
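A minimal sketch of datastore-based augmentation, using `imageDataAugmenter` with `augmentedImageDatastore`; the folder name, rotation range, and translation ranges are placeholder values:

```matlab
% Each read applies a fresh random rotation and translation, so every
% epoch sees a slightly different version of the training data.
imds = imageDatastore("pathToImages");   % placeholder folder

augmenter = imageDataAugmenter( ...
    "RandRotation",[-15 15], ...         % degrees
    "RandXTranslation",[-5 5], ...       % pixels
    "RandYTranslation",[-5 5]);

% augmentedImageDatastore also resizes images to the network input size.
auimds = augmentedImageDatastore([224 224],imds, ...
    "DataAugmentation",augmenter);
```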
Image Processing Applications
Augment image data to simulate variations in image acquisition. For example, the most common image augmentation operations are geometric transformations, such as rotation and translation, which simulate variations in the camera orientation with respect to the scene. Color jitter simulates variations of lighting conditions and color in the scene. Artificial noise simulates distortions caused by electrical fluctuations in the sensor and analog-to-digital conversion errors. Blur simulates an out-of-focus lens or movement of the camera with respect to the scene.
Common image preprocessing operations include noise removal, edge-preserving smoothing, color space conversion, contrast enhancement, and morphology.
If you have Image Processing Toolbox™, then you can process data using these operations as well as any other functionality in the toolbox.
| Processing Type | Description | Sample Functions |
| --- | --- | --- |
| Resize images | Resize images by a fixed scaling factor or to a target size | `imresize` |
| Warp images | Apply random reflection, rotation, scale, shear, and translation to images | `randomAffine2d`, `imwarp` |
| Crop images | Crop an image to a target size from the center or a random position | `centerCropWindow2d`, `randomWindow2d`, `imcrop` |
| Jitter color | Randomly adjust image hue, saturation, brightness, or contrast | `jitterColorHSV` |
| Simulate noise | Add random Gaussian, Poisson, salt and pepper, or multiplicative noise | `imnoise` |
| Simulate blur | Add Gaussian or directional motion blur | `imgaussfilt`, `fspecial`, `imfilter` |
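A sketch combining several of the operations above, assuming Image Processing Toolbox; `peppers.png` is a sample image that ships with MATLAB, and the jitter and noise amounts are placeholder values:

```matlab
% Read a sample image that ships with MATLAB.
I = imread("peppers.png");

% Random affine warp: reflection, rotation, and scale.
tform = randomAffine2d("XReflection",true,"Rotation",[-45 45],"Scale",[0.9 1.1]);
rout  = affineOutputView(size(I),tform);    % keep the original image size
Iwarp = imwarp(I,tform,"OutputView",rout);

% Random color jitter, then simulated sensor noise.
Ijitter = jitterColorHSV(Iwarp,"Brightness",0.3,"Contrast",0.4);
Inoisy  = imnoise(Ijitter,"gaussian",0,0.01);
```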
Object Detection
Object detection data consists of an image and bounding boxes that describe the location and characteristics of objects in the image.
If you have Computer Vision Toolbox™, then you can use the Image Labeler (Computer Vision Toolbox) and the Video Labeler (Computer Vision Toolbox) apps to interactively label ROIs and export the label data for training a neural network. If you have Automated Driving Toolbox™, then you can also use the Ground Truth Labeler (Automated Driving Toolbox) app to create labeled ground truth training data.
When you transform an image, you must perform an identical transformation on the corresponding bounding boxes. If you have Computer Vision Toolbox, then you can process bounding box data using the operations in the table.
| Processing Type | Description | Sample Functions |
| --- | --- | --- |
| Resize bounding boxes | Resize bounding boxes by a fixed scaling factor or to a target size | `bboxresize` |
| Crop bounding boxes | Crop a bounding box to a target size from the center or a random position | `bboxcrop` |
| Warp bounding boxes | Apply reflection, rotation, scale, shear, and translation to bounding boxes | `bboxwarp` |
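A sketch of applying one random transform to both an image and its boxes, assuming Computer Vision Toolbox and Image Processing Toolbox; the box coordinates are hypothetical:

```matlab
% Warp an image and its bounding boxes with the same random transform.
I      = imread("peppers.png");
bboxes = [100 100 150 120];   % [x y width height], hypothetical box

tform = randomAffine2d("Rotation",[-10 10],"XTranslation",[-20 20]);
rout  = affineOutputView(size(I),tform);

Iw = imwarp(I,tform,"OutputView",rout);
% bboxwarp transforms the boxes into the same output view as the image.
[bboxesW,valid] = bboxwarp(bboxes,tform,rout);
```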
Semantic Segmentation
Semantic segmentation data consists of images and corresponding pixel labels represented as categorical arrays.
If you have Computer Vision Toolbox, then you can use the Image Labeler (Computer Vision Toolbox) and the Video Labeler (Computer Vision Toolbox) apps to interactively label pixels and export the label data for training a neural network. If you have Automated Driving Toolbox, then you can also use the Ground Truth Labeler (Automated Driving Toolbox) app to create labeled ground truth training data.
When you transform an image, you must perform an identical transformation on the corresponding pixel label image. If you have Image Processing Toolbox, then you can preprocess pixel label images using the functions in the table and any other toolbox function that supports categorical input.
| Processing Type | Description | Sample Functions |
| --- | --- | --- |
| Resize pixel labels | Resize pixel label images by a fixed scaling factor or to a target size | `imresize` |
| Crop pixel labels | Crop a pixel label image to a target size from the center or a random position | `centerCropWindow2d`, `randomWindow2d`, `imcrop` |
| Warp pixel labels | Apply random reflection, rotation, scale, shear, and translation to pixel label images | `randomAffine2d`, `imwarp` |
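A sketch of applying the same random warp to an image and its pixel labels, assuming Image Processing Toolbox; the image and the dummy categorical label matrix are placeholders for real segmentation data:

```matlab
% Apply one random transform to both the image and its pixel labels.
I = imread("peppers.png");
% Dummy label image: a categorical array the same height and width as I.
C = categorical(ones(size(I,1),size(I,2)),1,"background");

tform = randomAffine2d("Rotation",[-30 30]);
rout  = affineOutputView(size(I),tform);

Iw = imwarp(I,tform,"OutputView",rout);
% For categorical inputs, imwarp uses nearest-neighbor interpolation,
% so label values are preserved.
Cw = imwarp(C,tform,"OutputView",rout);
```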
Lidar Processing Applications
Lidar Toolbox™ enables you to design, analyze, and test lidar systems. You can perform object detection and tracking, semantic segmentation, shape fitting, and registration. Raw point cloud data from lidar sensors requires basic processing before you can use it for these advanced workflows.
Lidar Toolbox provides tools to perform preprocessing such as downsampling, filtering, aligning, and extracting features from point cloud data. You can also augment and transform point clouds to increase the diversity of your training data.
Use the Lidar Viewer (Lidar Toolbox) app to visualize, analyze, and measure point cloud data. You can preprocess data by using the built-in preprocessing algorithms or import a custom algorithm.
You can create labeled ground truth training data by using the Lidar Labeler (Lidar Toolbox) app.
| Processing Type | Description | Sample Functions | Sample Output |
| --- | --- | --- | --- |
| Clean and filter point cloud data | Remove noise and outliers from point clouds, or reduce point density by downsampling | `pcdenoise`, `pcdownsample` | |
| Organize point cloud | Convert a point cloud into an organized format, where the data is arranged as rows and columns according to the spatial relationship between the points | `pcorganize` | `size(ptCloudUnorg.Location)` returns `37879 3`; after `ptCloudOrg = pcorganize(ptCloudUnorg,params);`, `size(ptCloudOrg.Location)` returns `64 1024 3` |
| Create blocked point clouds | When your data is too large to fit into memory, divide and process the point cloud as discrete blocks | `blockedPointCloud` | |
| Augment point cloud data | Apply geometric transformations, such as rotation and translation, to point clouds | `pctransform` | |
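A sketch of basic point cloud preprocessing, assuming Computer Vision Toolbox (which ships the `teapot.ply` sample file); the 1 cm grid step is a placeholder value:

```matlab
% Basic point cloud preprocessing: denoise, then downsample.
ptCloud = pcread("teapot.ply");

% Remove outlier points based on the distance to their neighbors.
ptClean = pcdenoise(ptCloud);

% Reduce point density by averaging points within 1 cm grid boxes.
ptDown = pcdownsample(ptClean,"gridAverage",0.01);
```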
Signal Processing Applications
Signal Processing Toolbox™ enables you to denoise, smooth, detrend, and resample signals. You can augment training data with noise, multipath fading, and synthetic signals such as pulses and chirps. You can also create labeled sets of signals by using the Signal Labeler (Signal Processing Toolbox) app and the labeledSignalSet (Signal Processing Toolbox) object. For an example that shows how to create and apply these transformations, see Waveform Segmentation Using Deep Learning.
Wavelet Toolbox™ and Signal Processing Toolbox enable you to generate 2-D time-frequency representations of time series data that you can use as image inputs for signal classification applications. Similarly, you can extract sequences from signal data to use as input for LSTM networks.
Communications Toolbox™ expands on signal processing functionality to enable you to perform error correction, interleaving, modulation, filtering, synchronization, and equalization of communication systems.
You can process signal data using the functions in the table as well as any other functionality in each toolbox.
| Processing Type | Description | Sample Functions |
| --- | --- | --- |
| Clean signals | Denoise, smooth, detrend, and remove outliers from signals | `smoothdata`, `detrend`, `filloutliers` |
| Filter signals | Apply lowpass, highpass, and bandpass filters to signals | `lowpass`, `highpass`, `bandpass` |
| Augment signals | Add noise or generate synthetic signals such as pulses and chirps | `chirp`, `gauspuls` |
| Create time-frequency representations | Create spectrograms, scalograms, and other 2-D representations of 1-D signals | `spectrogram`, `pspectrum`, `cwt` |
| Extract features from signals | Estimate instantaneous frequency and spectral entropy | `instfreq`, `pentropy` |
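A sketch combining synthesis, filtering, and a time-frequency representation, assuming Signal Processing Toolbox; the sample rate, chirp parameters, and passband edge are placeholder values:

```matlab
% Generate a noisy test chirp.
fs = 1000;                                    % sample rate in Hz
t  = 0:1/fs:2;
x  = chirp(t,50,2,250) + 0.1*randn(size(t));  % 50 Hz to 250 Hz linear chirp

% Lowpass filter with a 300 Hz passband edge, then view a spectrogram.
y = lowpass(x,300,fs);
pspectrum(y,fs,"spectrogram");   % plots the time-frequency representation
```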
Audio Processing Applications
Audio Toolbox™ provides tools for audio processing, speech analysis, and acoustic measurement. Use these tools to extract auditory features and transform audio signals. Augment audio data with randomized or deterministic time scaling, time stretching, and pitch shifting. You can also create labeled ground truth training data by using the Signal Labeler (Signal Processing Toolbox) app. You can process audio data using the functions in this table as well as any other functionality in the toolbox. For an example that shows how to create and apply these transformations, see Augment Audio Dataset (Audio Toolbox).
Audio Toolbox also provides MATLAB® and Simulink® support for pretrained audio deep learning networks. Locate and classify sounds with YAMNet and estimate pitch with CREPE. Extract VGGish or OpenL3 feature embeddings to input to machine learning and deep learning systems. The Audio Toolbox pretrained networks are available in Deep Network Designer.
| Processing Type | Description | Sample Functions | Sample Output |
| --- | --- | --- | --- |
| Augment audio data | Perform random or deterministic pitch shifting, time-scale modification, time shifting, noise addition, and volume control | `audioDataAugmenter`, `shiftPitch`, `stretchAudio` | |
| Extract audio features | Extract spectral parameters from audio segments | `audioFeatureExtractor`, `mfcc` | `ans = struct with fields:` `mfcc: [1 2 3 4 5 6 7 8 9 10 11 12 13]`, `mfccDelta: [14 15 16 17 18 19 20 21 22 23 24 25 26]`, `mfccDeltaDelta: [27 28 29 30 31 32 33 34 35 36 37 38 39]`, `spectralCentroid: 40`, `pitch: 41` |
| Create time-frequency representations | Create mel spectrograms and other time-frequency representations of audio signals | `melSpectrogram`, `spectrogram` | |
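A sketch of audio augmentation followed by feature extraction, assuming Audio Toolbox; the WAV file name and the 3-semitone shift are placeholder values:

```matlab
% Pitch-shift an audio signal and extract MFCC features.
[audioIn,fs] = audioread("speech.wav");   % placeholder mono audio file

% Shift the pitch up by 3 semitones without changing the duration.
audioShifted = shiftPitch(audioIn,3);

% Extract mel-frequency cepstral coefficients from the augmented signal.
coeffs = mfcc(audioShifted,fs);
```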
Text Analytics
Text Analytics Toolbox™ includes tools for processing raw text from sources such as equipment logs, news feeds, surveys, operator reports, and social media. Use these tools to extract text from popular file formats, preprocess raw text, extract individual words or multiword phrases (n-grams), convert text into numerical representations, and build statistical models. You can process text data using the functions in this table as well as any other functionality in the toolbox. For an example showing how to get started, see Prepare Text Data for Analysis (Text Analytics Toolbox).
| Processing Type | Description | Sample Functions |
| --- | --- | --- |
| Tokenize text | Parse text into words and punctuation | `tokenizedDocument` |
| Clean text | Remove punctuation and stop words, convert case, and normalize words to a root form | `erasePunctuation`, `removeStopWords`, `lower`, `normalizeWords` |
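A sketch of a typical tokenize-and-clean pipeline, assuming Text Analytics Toolbox; the input string is an arbitrary example:

```matlab
% Tokenize and clean a snippet of raw text.
str = "The Quick Brown Fox, jumped over 2 lazy dogs!";

documents = tokenizedDocument(str);       % split into words and punctuation
documents = lower(documents);             % normalize case
documents = erasePunctuation(documents);  % drop punctuation tokens
documents = removeStopWords(documents);   % remove words such as "the", "over"
documents = normalizeWords(documents);    % reduce words to a root form (stemming)
```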