Feature Extraction

Feature Extraction for Machine Learning and Deep Learning

Feature extraction refers to the process of transforming raw data into numerical features that can be processed while preserving the information in the original data set. It typically yields better results than applying machine learning directly to the raw data.

Feature extraction can be accomplished manually or automatically:

  • Manual feature extraction requires identifying and describing the features that are relevant for a given problem and implementing a way to extract those features. In many situations, having a good understanding of the background or domain can help make informed decisions as to which features could be useful. Over decades of research, engineers and scientists have developed feature extraction methods for images, signals, and text. An example of a simple feature is the mean of a window in a signal (see the sketch after this list).
  • Automated feature extraction uses specialized algorithms or deep networks to extract features automatically from signals or images without the need for human intervention. This technique can be very useful when you want to move quickly from raw data to developing machine learning algorithms. Wavelet scattering is an example of automated feature extraction.
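As a concrete illustration of the windowed-mean feature mentioned above, here is a minimal MATLAB sketch, assuming Signal Processing Toolbox; the sample rate, signal, and window length are placeholder assumptions.

    fs = 1000;                       % assumed sample rate (Hz)
    x  = randn(1, 1024);             % placeholder raw signal
    winLen = 128;                    % samples per analysis window
    frames = buffer(x, winLen);      % split x into non-overlapping windows (one per column)
    meanFeature = mean(frames, 1);   % one windowed-mean feature value per window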

With the ascent of deep learning, feature extraction has been largely replaced by the first layers of deep networks, but mostly for image data. For signal and time-series applications, feature extraction remains the first challenge that requires significant expertise before one can build effective predictive models.

Feature Extraction for Signals and Time Series Data

Feature extraction identifies the most discriminating characteristics in signals, which a machine learning or deep learning algorithm can more easily consume. Training a machine learning or deep learning model directly with raw signals often yields poor results because of the high data rate and information redundancy.

Schematic of the process for applying feature extraction to signals and time series data for a machine learning classifier.

Signal Features and Time-Frequency Transformations

When analyzing signals and sensor data, Signal Processing Toolbox™ and Wavelet Toolbox™ provide functions that let you measure common distinctive features of a signal in the time, frequency, and time-frequency domains. You can apply pulse and transition metrics, measure signal-to-noise ratio (SNR), estimate spectral entropy and kurtosis, and compute power spectra.
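The following minimal sketch illustrates a few of these measurements on a placeholder noisy tone, assuming Signal Processing Toolbox; the signal and sample rate are assumptions, not values from this article.

    fs = 1000;                                     % assumed sample rate (Hz)
    t  = (0:1/fs:2-1/fs).';
    x  = sin(2*pi*100*t) + 0.1*randn(size(t));     % placeholder noisy tone
    r        = snr(x, fs);                         % signal-to-noise ratio (dB)
    [se, ts] = pentropy(x, fs);                    % spectral entropy over time
    [sk, fk] = pkurtosis(x, fs);                   % spectral kurtosis versus frequency
    [p, f]   = pspectrum(x, fs);                   % power spectrum estimate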

Time-frequency transformations, such as the short-time Fourier transform (STFT), can be used as signal representations for training data in machine learning and deep learning models. For example, convolutional neural networks (CNNs) are commonly used on image data and can successfully learn from the 2D signal representations returned by time-frequency transformations.
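As a minimal sketch, the STFT of a placeholder signal can be converted into a 2D array suitable as CNN input, assuming Signal Processing Toolbox; the chirp signal and window settings are assumptions.

    fs = 1000;                                              % assumed sample rate (Hz)
    t  = (0:1/fs:2-1/fs).';
    x  = chirp(t, 20, 2, 200) + 0.05*randn(size(t));        % placeholder non-stationary signal
    [s, f, ts] = stft(x, fs, 'Window', hann(128, 'periodic'), ...
                      'OverlapLength', 96, 'FFTLength', 128);
    imgIn = mag2db(abs(s) + eps);     % log-magnitude spectrogram: a frequency-by-time "image"
    % imgIn can now serve as one 2D input sample for a CNN.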

Spectrogram of a signal using the short-time Fourier transform. The spectrogram shows the variation of frequency content over time.

Other time-frequency transformations can be used, depending on the specific application or the signal characteristics. For example, the constant-Q transform (CQT) provides a logarithmically spaced frequency distribution, and the continuous wavelet transform (CWT) is usually effective at identifying short transients in non-stationary signals.
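A minimal sketch comparing these two representations on a placeholder signal with an injected transient, assuming Wavelet Toolbox; the signal itself is an assumption.

    fs = 1000;                                          % assumed sample rate (Hz)
    t  = (0:1/fs:1-1/fs).';
    x  = sin(2*pi*60*t);
    x(400:410) = x(400:410) + 2;                        % inject a short transient
    cfsCQT = cqt(x, 'SamplingFrequency', fs);           % constant-Q transform (log-spaced frequencies)
    [cfsCWT, fCWT] = cwt(x, fs);                        % continuous wavelet transform (good for short transients)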

Features for Audio Applications and Predictive Maintenance

Audio Toolbox™ provides a collection of time-frequency transformations, including mel spectrograms, octave and gammatone filter banks, and the discrete cosine transform (DCT), that are often used for audio, speech, and acoustics. Other popular feature extraction methods for these types of signals include mel frequency cepstral coefficients (MFCC), gammatone cepstral coefficients (GTCC), pitch, harmonicity, and different types of audio spectral descriptors. The Audio Feature Extractor tool can help you select and extract different audio features from the same source signal while reusing any intermediate computations for efficiency.
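Here is a minimal sketch of extracting several audio features from one source signal with shared intermediate computations, assuming Audio Toolbox; the audio signal is a placeholder.

    fs = 16000;                                    % assumed audio sample rate (Hz)
    audioIn = randn(fs, 1);                        % placeholder 1-second mono recording
    afe = audioFeatureExtractor('SampleRate', fs, ...
        'mfcc', true, ...                          % mel frequency cepstral coefficients
        'gtcc', true, ...                          % gammatone cepstral coefficients
        'pitch', true, ...
        'harmonicRatio', true);
    features = extract(afe, audioIn);              % one row of features per analysis window
    idx = info(afe);                               % maps each requested feature to its columns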

For engineers developing applications for condition monitoring and predictive maintenance, the Diagnostic Feature Designer app in Predictive Maintenance Toolbox™ lets you extract, visualize, and rank features to design condition indicators for monitoring machine health.

The Diagnostic Feature Designer app lets you design and compare features to discriminate between nominal and faulty systems.

Automated Feature Extraction Methods

Automated feature extraction is a part of the complete AutoML workflow that delivers optimized models. The workflow involves three simple steps that automate feature selection, model selection, and hyperparameter tuning.

New high-level methods have emerged to automatically extract features from signals. Autoencoders, wavelet scattering, and deep neural networks are commonly used to extract features and reduce the dimensionality of the data.

Wavelet scattering networks automate the extraction of low-variance features from real-valued time series and image data. This approach produces data representations that minimize differences within a class while preserving discriminability across classes. Wavelet scattering works well when you do not have a lot of data to begin with.
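A minimal sketch of wavelet scattering applied to a placeholder signal, assuming Wavelet Toolbox; the signal length and sample rate are assumptions.

    fs = 1000;                                         % assumed sample rate (Hz)
    x  = randn(4096, 1);                               % placeholder signal
    sn = waveletScattering('SignalLength', numel(x), ...
                           'SamplingFrequency', fs);
    feat = featureMatrix(sn, x);   % low-variance scattering features (paths-by-time windows)
    % feat can be used directly as input to a machine learning classifier.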

Feature Extraction for Image Data

Feature extraction for image data represents the interesting parts of an image as a compact feature vector. In the past, this was accomplished with specialized feature detection, feature extraction, and feature matching algorithms. Today, deep learning is prevalent in image and video analysis and has become known for its ability to take raw image data as input, skipping the feature extraction step. Regardless of which approach you take, computer vision applications such as image registration, object detection and classification, and content-based image retrieval all require effective representations of image features, either implicitly by the first layers of a deep network or explicitly by applying some of the longstanding image feature extraction techniques.
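The classical detect-extract-match workflow mentioned above looks roughly like the following sketch, assuming Computer Vision Toolbox and Image Processing Toolbox; the image file names are hypothetical placeholders.

    objectImage = rgb2gray(imread('object.png'));     % hypothetical image of the object
    sceneImage  = rgb2gray(imread('scene.png'));      % hypothetical cluttered scene
    objectPts = detectSURFFeatures(objectImage);      % feature detection
    scenePts  = detectSURFFeatures(sceneImage);
    [objectFeat, objectValidPts] = extractFeatures(objectImage, objectPts);   % feature extraction
    [sceneFeat,  sceneValidPts]  = extractFeatures(sceneImage,  scenePts);
    pairs = matchFeatures(objectFeat, sceneFeat);     % feature matching
    matchedObjectPts = objectValidPts(pairs(:, 1));   % matched locations in the object image
    matchedScenePts  = sceneValidPts(pairs(:, 2));    % matched locations in the scene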

Detecting an object (left) in a cluttered scene (right) using a combination of feature detection, feature extraction, and matching.

Feature extraction techniques provided by Computer Vision Toolbox™ and Image Processing Toolbox™ include:

  • Histogram of oriented gradients (HOG)
  • Speeded-Up Robust Features (SURF)
  • Local binary pattern (LBP) features

Histogram of oriented gradients (HOG) feature extraction of an image (top). Feature vectors of different sizes are created to represent the image by varying the cell size (bottom).
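To illustrate how the cell size controls the length of the HOG feature vector, here is a minimal sketch assuming Computer Vision Toolbox and Image Processing Toolbox; cameraman.tif is a standard example image, and the cell sizes are arbitrary choices.

    I = imread('cameraman.tif');                                          % example grayscale image
    [hogFine,   visFine]   = extractHOGFeatures(I, 'CellSize', [4 4]);    % long, detailed feature vector
    [hogCoarse, visCoarse] = extractHOGFeatures(I, 'CellSize', [16 16]);  % short, coarse feature vector
    % Smaller cells capture finer spatial detail at the cost of a larger feature vector.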



Software Reference

See also: object detection, object recognition, image processing and computer vision, digital image processing, optical flow, RANSAC, pattern recognition, point cloud, deep learning, feature selection, computer vision, AutoML

Machine Learning Training Course

In this course, you'll learn how to use unsupervised learning techniques to discover features in large data sets and supervised learning techniques to build predictive models.
