

Getting Started with Point Clouds Using Deep Learning

Deep learning can automatically process point clouds for a wide range of 3-D imaging applications. Point clouds typically come from 3-D scanners, such as lidar sensors or Kinect® devices. They have applications in robot navigation and perception, depth estimation, stereo vision, surveillance, scene classification, and advanced driver assistance systems (ADAS).

Flowchart: point cloud data, then preprocessing (augmentation and densification), then object detection, segmentation, or classification.

In general, the first steps for using point cloud data in a deep learning workflow are:

  1. Import point cloud data. Use a datastore to hold the large amount of data.

  2. Optionally augment the data.

  3. Encode the point cloud to an image-like format consistent with MATLAB®-based deep learning workflows.

You can apply the same deep learning approaches to classification, object detection, and semantic segmentation tasks using point cloud data as you would using regular gridded image data. However, you must first encode the unordered, irregularly gridded structure of point cloud and lidar data into a regular gridded form. For certain tasks, such as semantic segmentation, some postprocessing of the output of image-based networks is required to restore the point cloud structure.
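The postprocessing step for segmentation amounts to mapping each original point back to the grid cell it fell into and reading off that cell's predicted label. The MATLAB workflow is not reproduced here; the following is a minimal, framework-agnostic NumPy sketch of the idea, where all function names and the toy grid are illustrative assumptions:

```python
import numpy as np

def points_to_voxel_idx(points, origin, voxel_size, grid_shape):
    """Map each 3-D point to the index of the voxel that contains it."""
    idx = np.floor((points - origin) / voxel_size).astype(int)
    return np.clip(idx, 0, np.array(grid_shape) - 1)

def scatter_labels_to_points(points, voxel_labels, origin, voxel_size):
    """Restore a per-point labeling from a dense per-voxel label grid."""
    idx = points_to_voxel_idx(points, origin, voxel_size, voxel_labels.shape)
    return voxel_labels[idx[:, 0], idx[:, 1], idx[:, 2]]

# Toy example: a 2x2x2 label grid and three points.
grid = np.zeros((2, 2, 2), dtype=int)
grid[1, 1, 1] = 7                       # one voxel labeled class 7
pts = np.array([[0.1, 0.1, 0.1],        # falls in voxel (0, 0, 0)
                [1.5, 1.5, 1.5],        # falls in voxel (1, 1, 1)
                [0.9, 1.2, 0.2]])       # falls in voxel (0, 1, 0)
labels = scatter_labels_to_points(pts, grid, origin=np.zeros(3), voxel_size=1.0)
print(labels)  # [0 7 0]
```

Because several points can share one voxel, they also share one predicted label; finer grids trade memory for per-point accuracy.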

Import Point Cloud Data

To work with point cloud data in deep learning workflows, first read the raw data. Consider using a datastore to work with and represent collections of data that are too large to fit in memory at one time. Because deep learning often requires large amounts of data, datastores are an important part of the deep learning workflow in MATLAB. For more details about datastores, see Datastores for Deep Learning (Deep Learning Toolbox).

The example imports a large point cloud data set, then configures and loads a datastore.
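The key property of a datastore is that files are read on demand rather than all at once. As a language-agnostic illustration (not the MATLAB API), here is a minimal Python generator that behaves like a file datastore; the `.xyz` whitespace-separated format and file names are illustrative assumptions:

```python
import os
import tempfile
from pathlib import Path

import numpy as np

def point_cloud_datastore(folder, pattern="*.xyz"):
    """Lazily yield one point cloud (an N-by-3 array) per file so the
    whole collection never has to fit in memory at once."""
    for path in sorted(Path(folder).glob(pattern)):
        yield np.loadtxt(path).reshape(-1, 3)

# Usage sketch: write two small files, then stream them back one at a time.
folder = tempfile.mkdtemp()
np.savetxt(os.path.join(folder, "a.xyz"), np.random.rand(5, 3))
np.savetxt(os.path.join(folder, "b.xyz"), np.random.rand(8, 3))
shapes = [cloud.shape for cloud in point_cloud_datastore(folder)]
print(shapes)  # [(5, 3), (8, 3)]
```

Only one cloud is resident in memory per iteration, which is what makes training on data sets larger than RAM practical.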

Augment Data

The accuracy and success of a deep learning model depend on large annotated data sets. Using augmentation to produce larger data sets helps reduce overfitting, which occurs when a model mistakes noise in the data for signal. Adding perturbations such as noise during augmentation helps the model generalize rather than memorize individual data points. Augmentation can also add robustness to data transformations that may not be well represented in the original training data, such as rotations, reflections, and translations. By reducing overfitting, augmentation often leads to better results at inference, the stage in which the trained network makes predictions on new data.

The example sets up a basic randomized data augmentation pipeline that works with point cloud data.
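To make the transformations mentioned above concrete, here is a hedged NumPy sketch (not the MATLAB pipeline) of a randomized augmentation step combining a rotation about the vertical axis, a random reflection, and Gaussian jitter; the function name and parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(points, rng):
    """Randomized augmentation: rotate about the vertical (z) axis,
    randomly mirror across the x-axis, and add small Gaussian jitter."""
    theta = rng.uniform(0, 2 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    pts = points @ rot.T
    if rng.random() < 0.5:                          # random reflection
        pts[:, 0] = -pts[:, 0]
    pts += rng.normal(scale=0.01, size=pts.shape)   # jitter
    return pts

cloud = rng.standard_normal((100, 3))
aug = augment(cloud, rng)
print(aug.shape)  # (100, 3)
```

Each call produces a different variant of the same cloud, so the network never sees exactly the same sample twice during training.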

Encode Point Cloud Data to an Image-Like Format

To use point clouds for training with MATLAB-based deep learning workflows, the data must be encoded into a dense, image-like format. Densification, or voxelization, is the process of transforming an irregular, ungridded form of point cloud data into a dense, image-like form.

The example transforms point cloud data into a dense, gridded structure.
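The simplest such encoding is a binary occupancy grid. As an illustrative sketch of voxelization in NumPy (not the MATLAB implementation; the grid size and normalization are assumptions), each point is scaled into the grid and its voxel marked occupied:

```python
import numpy as np

def voxelize(points, grid_shape=(32, 32, 32)):
    """Encode an unordered point cloud as a dense occupancy grid
    (1 where at least one point falls in a voxel, 0 elsewhere)."""
    lo = points.min(axis=0)
    hi = points.max(axis=0)
    # Scale points into [0, grid_shape - 1] and truncate to voxel indices.
    idx = ((points - lo) / np.maximum(hi - lo, 1e-9)
           * (np.array(grid_shape) - 1)).astype(int)
    grid = np.zeros(grid_shape, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

pts = np.random.rand(500, 3)
vox = voxelize(pts)
print(vox.shape)  # (32, 32, 32)
```

The result has a fixed, regular shape regardless of how many points the input cloud contains, which is exactly what image-style networks require.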

Train a Deep Learning Classification Network with Encoded Point Cloud Data

Once you have encoded point cloud data into a dense form, you can use the data for image-based classification, object detection, or semantic segmentation tasks using standard deep learning approaches.

The example preprocesses point cloud data into a voxelized encoding, and then uses the image-like data with a simple 3-D convolutional neural network to perform object classification.
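The building block of such a network is a 3-D convolution over the voxel grid. As a conceptual sketch only (a real network would use a deep learning framework, not this loop), here is the "valid" cross-correlation a single 3-D convolutional filter computes:

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Minimal 'valid' 3-D convolution (cross-correlation) over a voxel
    grid -- the core operation a 3-D CNN applies to encoded point clouds."""
    kd, kh, kw = kernel.shape
    d, h, w = volume.shape
    out = np.zeros((d - kd + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i+kd, j:j+kh, k:k+kw] * kernel)
    return out

grid = np.ones((8, 8, 8))           # e.g. a fully occupied voxel grid
kernel = np.ones((3, 3, 3)) / 27    # a simple averaging filter
feat = conv3d_valid(grid, kernel)
print(feat.shape)  # (6, 6, 6)
```

Stacking layers of such filters, followed by pooling and a classification head, mirrors the standard 2-D image classification recipe, just with one extra spatial dimension.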
