3-D Point Cloud Registration and Stitching
This example shows how to combine multiple point clouds to reconstruct a 3-D scene using the iterative closest point (ICP) algorithm.
Overview
This example stitches together a collection of point clouds that was captured with Kinect to construct a larger 3-D view of the scene. The example applies ICP to two successive point clouds. This type of reconstruction can be used to develop 3-D models of objects or build 3-D world maps for simultaneous localization and mapping (SLAM).
Register Two Point Clouds
dataFile = fullfile(toolboxdir('vision'),'visiondata','livingRoom.mat');
load(dataFile);

% Extract two consecutive point clouds and use the first point cloud as
% reference.
ptCloudRef = livingRoomData{1};
ptCloudCurrent = livingRoomData{2};
The quality of registration depends on data noise and on the initial settings of the ICP algorithm. You can apply preprocessing steps to filter the noise, or set initial property values appropriate for your data. Here, preprocess the data by downsampling with a box grid filter, and set the grid filter size to 10 cm. The grid filter divides the point cloud space into cubes. Points within each cube are combined into a single output point by averaging their X, Y, Z coordinates.
gridSize = 0.1;
fixed = pcdownsample(ptCloudRef,'gridAverage',gridSize);
moving = pcdownsample(ptCloudCurrent,'gridAverage',gridSize);

% Note that the downsampling step does not only speed up the registration,
% but can also improve the accuracy.
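To gauge how much data the grid filter removes, you can compare the point counts before and after downsampling. This check is not part of the original example; it relies only on the Count property of the pointCloud object.

% Compare point counts before and after downsampling (illustrative check).
fprintf('Reference cloud: %d points, downsampled to %d points.\n', ...
    ptCloudRef.Count, fixed.Count);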
To align the two point clouds, use the ICP algorithm to estimate the 3-D rigid transformation on the downsampled data. Use the first point cloud as the reference, and then apply the estimated transformation to the original second point cloud. Finally, merge the scene point cloud with the aligned point cloud to process the overlapping points.
Begin by finding the rigid transformation that aligns the second point cloud with the first point cloud. Use it to transform the second point cloud to the reference coordinate system defined by the first point cloud.
tform = pcregistericp(moving,fixed,'Metric','pointToPlane','Extrapolate',true);
ptCloudAligned = pctransform(ptCloudCurrent,tform);
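As an optional quality check, not part of the original example, pcregistericp can also return the root mean squared error of the Euclidean distance between the aligned point clouds; a smaller value indicates a tighter fit.

% Request the RMSE output to assess registration quality (optional).
[tform,~,rmse] = pcregistericp(moving,fixed, ...
    'Metric','pointToPlane','Extrapolate',true);
fprintf('Registration RMSE: %.4f m\n', rmse);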
You can now create the world scene with the registered data. The overlapping region is filtered using a 1.5 cm box grid filter. Increase the merge size to reduce the storage requirement of the resulting scene point cloud, and decrease the merge size to increase the scene resolution.
mergeSize = 0.015;
ptCloudScene = pcmerge(ptCloudRef,ptCloudAligned,mergeSize);

% Visualize the input images.
figure
subplot(2,2,1)
imshow(ptCloudRef.Color)
title('First input image','Color','w')
drawnow

subplot(2,2,3)
imshow(ptCloudCurrent.Color)
title('Second input image','Color','w')
drawnow

% Visualize the world scene.
subplot(2,2,[2,4])
pcshow(ptCloudScene,'VerticalAxis','Y','VerticalAxisDir','Down')
title('Initial world scene')
xlabel('X (m)')
ylabel('Y (m)')
zlabel('Z (m)')

drawnow
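To see the merge-size trade-off described above, you can merge the same pair with a coarser grid and compare point counts. The coarseScene variable and the 5 cm value are introduced here purely for illustration.

% Merge with a coarser 5 cm grid for comparison (illustrative only).
coarseScene = pcmerge(ptCloudRef,ptCloudAligned,0.05);
fprintf('1.5 cm merge: %d points; 5 cm merge: %d points.\n', ...
    ptCloudScene.Count, coarseScene.Count);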
Stitch a Sequence of Point Clouds
To compose a larger 3-D scene, repeat the same procedure as above to process a sequence of point clouds. Use the first point cloud to establish the reference coordinate system. Transform each point cloud to the reference coordinate system. This transformation is a multiplication of pairwise transformations, as the sketch after the loop below illustrates.
% Store the transformation object that accumulates the transformation.
accumTform = tform;

figure
hAxes = pcshow(ptCloudScene,'VerticalAxis','Y','VerticalAxisDir','Down');
title('Updated world scene')
% Set the axes property for faster rendering
hAxes.CameraViewAngleMode = 'auto';
hScatter = hAxes.Children;

for i = 3:length(livingRoomData)
    ptCloudCurrent = livingRoomData{i};

    % Use previous moving point cloud as reference.
    fixed = moving;
    moving = pcdownsample(ptCloudCurrent,'gridAverage',gridSize);

    % Apply ICP registration.
    tform = pcregistericp(moving,fixed,'Metric','pointToPlane','Extrapolate',true);

    % Transform the current point cloud to the reference coordinate system
    % defined by the first point cloud.
    accumTform = rigidtform3d(accumTform.A * tform.A);
    ptCloudAligned = pctransform(ptCloudCurrent,accumTform);

    % Update the world scene.
    ptCloudScene = pcmerge(ptCloudScene,ptCloudAligned,mergeSize);

    % Visualize the world scene.
    hScatter.XData = ptCloudScene.Location(:,1);
    hScatter.YData = ptCloudScene.Location(:,2);
    hScatter.ZData = ptCloudScene.Location(:,3);
    hScatter.CData = ptCloudScene.Color;
    drawnow('limitrate')
end
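The line accumTform = rigidtform3d(accumTform.A * tform.A) chains the pairwise transformations by matrix multiplication. A minimal sketch to confirm that logic, using a hypothetical test point p (not part of the original example):

% Composing the transforms is equivalent to applying them in sequence:
% first the pairwise step, then the previously accumulated transform.
p = [0 0 1];                                  % hypothetical test point
composed = rigidtform3d(accumTform.A * tform.A);
stepwise = transformPointsForward(accumTform, ...
    transformPointsForward(tform,p));
assert(norm(transformPointsForward(composed,p) - stepwise) < 1e-10)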
% During the recording, the Kinect was pointing downward. To visualize the
% result more easily, transform the data so that the ground plane is
% parallel to the X-Z plane.
angle = -10;
translation = [0, 0, 0];
tform = rigidtform3d([angle, 0, 0],translation);
ptCloudScene = pctransform(ptCloudScene,tform);
pcshow(ptCloudScene,'VerticalAxis','Y','VerticalAxisDir','Down', ...
    'Parent',hAxes)
title('Updated world scene')
xlabel('X (m)')
ylabel('Y (m)')
zlabel('Z (m)')
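If you want to reuse the stitched scene outside MATLAB, you can save it with pcwrite. This step is not part of the original example, and the file name is arbitrary.

% Save the stitched scene to a PLY file (optional; file name is illustrative).
pcwrite(ptCloudScene,'livingRoomScene.ply');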