Differential Wheeled Robot with LiDAR Sensor
The vrcollisions_lidar example shows how a LinePickSensor can be used to model LiDAR sensor behavior in Simulink® 3D Animation™.
In a simple virtual world, a wheeled robot with a LiDAR sensor mounted on its top is defined. The LiDAR sensor is implemented using the LinePickSensor node, which detects collisions of several rays (modeled as an IndexedLineSet) with surrounding scene objects. The sensor pickedRange and pickedPoint fields are used in this model for visualization purposes only, but together with robot pose information they can be used for simultaneous localization and mapping (SLAM) and similar purposes.
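As a minimal MATLAB sketch of that idea (not code from the example), the ranges can be combined with the robot pose to produce world-frame collision points suitable as SLAM input. The variable names (pickedRange, robotPose) and the random placeholder readings are assumptions; the ray fan matches the 51-ray geometry described below.

    % Sketch: converting the fixed-size pickedRange output plus the robot
    % pose into world-frame 2D points. Placeholder data, assumed names.
    numRays     = 51;
    rayAngles   = linspace(-pi/2, pi/2, numRays);  % sensor-frame angles [rad]
    pickedRange = 10*rand(numRays,1);              % placeholder readings
    pickedRange(rand(numRays,1) < 0.3) = -1;       % -1 marks rays with no hit
    robotPose   = [1.0, 0.5, pi/4];                % [x, y, yaw] of the robot

    hit = pickedRange > 0;                         % keep only colliding rays
    ang = robotPose(3) + rayAngles(hit).';         % world-frame ray angles
    pts = robotPose(1:2) + pickedRange(hit).*[cos(ang), sin(ang)];
    % pts is an M-by-2 list of world-frame collision points for this step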
The sensor's sensing lines are visible, shown as transparent green lines. There are 51 sensing rays, evenly spaced in the horizontal plane between -90 and 90 degrees. The LiDAR range is 10 meters.
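The following MATLAB fragment only illustrates that geometry with the numbers quoted above; the example itself ships the fan as part of the virtual world file, and the reduction to 2D coordinates here is for brevity.

    % Sketch: generating the 51-ray fan used as the sensing geometry
    % (one shared origin plus one endpoint per ray at the 10 m range).
    range  = 10;                                       % LiDAR range [m]
    angles = linspace(-90, 90, 51);                    % even spacing [deg]
    endpts = range*[cosd(angles).', sind(angles).'];   % 51-by-2 endpoints
    coords = [0 0; endpts];                            % point 1 is the origin
    % each sensing line connects point 1 to one of points 2..52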
To visualize the LiDAR sensor output, a visualization proxy line set is defined with lines identical to those used as the LinePickSensor sensing geometry. The visualization lines are blue. A combination of the pickedPoint and pickedRange LinePickSensor outputs is used to visualize the points of collision. The pickedPoint output contains the coordinates of the points that collided with surrounding objects; its size varies depending on how many sensor rays collided. The pickedRange output has a fixed size, equal to the number of sensing rays, and returns the distance from the LiDAR sensor origin to the collision point for each sensing line. For rays that do not collide, this output returns -1. The pickedRange output is therefore used to determine the indices of the lines whose collision points are returned in the pickedPoint output. In effect, the blue lines are shortened so that for each line only the segment between the ray fan origin and the point of collision is displayed.
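A minimal MATLAB sketch of this index-recovery step follows. The sensor values are placeholders, and collapsing missed rays to the origin is an assumption about the rendering rather than something stated by the model.

    % Sketch: rebuilding per-ray endpoints for the blue visualization lines.
    % pickedPoint lists only colliding rays, so the fixed-size pickedRange
    % output (-1 for misses) supplies the ray index for each returned point.
    numRays     = 51;
    pickedRange = -ones(numRays,1);                    % placeholder outputs
    pickedRange([5 17 30]) = [2.5 4.0 9.1];            % three rays hit
    pickedPoint = [2.5 0 0; 0 2.8 2.8; -1.0 9.0 0.5];  % one row per hit, in order

    vizEnd = zeros(numRays,3);                 % missed rays collapse to origin
    vizEnd(pickedRange > 0, :) = pickedPoint;  % hits end at the collision point
    % vizEnd rows drive the endpoint coordinates of the 51 proxy lines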
The robot trajectory is modeled in a trivial way using the Signal Editor and Ramp blocks. In the Signal Editor block, a simple 1-by-1 meter square trajectory is defined for the first 40 seconds of simulation. After returning to its initial position, the robot only rotates indefinitely.
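As a rough illustration, the commanded translation could be tabulated as below; the per-edge timing is an assumption, since the shipped model defines the actual signal inside the Signal Editor block.

    % Sketch: a 1 m x 1 m square traversed over the first 40 s,
    % assuming one edge per 10 s (timing not taken from the model).
    t  = [0 10 20 30 40];            % waypoint times [s]
    xy = [0 0; 1 0; 1 1; 0 1; 0 0];  % square corners, back to start [m]
    % after t = 40 s the translation holds at the origin while the
    % rotation signal (a Ramp block) keeps increasing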
In the model, both VR Sink and VR Source blocks are defined, associated with the same virtual world. The VR Source block is used to read the sensor signals. The VR Sink block is used to set the robot position and rotation, and the coordinates of the endpoints of the sensor visual proxy lines.
In the virtual world, several viewpoints are defined, both static and attached to the robot, allowing the LiDAR visualization to be observed from different perspectives.