Detect Object Collisions

You can use collision detection to accurately model the physical constraints of objects in the real world, so that no two objects occupy the same place at the same time. You can use collision detection node outputs to:

  • Change the state of other virtual world nodes.

  • Apply MATLAB® algorithms to collision data.

  • Drive Simulink® models.

For example, you can use geometric sensors for robotics modeling. For an example of using collision detection, see Differential Wheeled Robot in a Maze.

Set Up Collision Detection

To set up collision detection, define collision (pick) sensors that detect when they collide with targeted surrounding scene objects. The virtual world sensors resemble real-world sensors, such as ultrasonic, lidar, and touch sensors. The Simulink 3D Animation™ sensors are based on X3D pick sensors (also supported for VRML), as described in the X3D specification. For descriptions of the pick sensor output properties that you can access with VR Source and VR Sink blocks, see Use Collision Detection Data in Models. You can use these pick sensor types:

  • PointPickSensor — Point clouds that detect which of the points are inside colliding geometries

  • LinePickSensor — Ray fans or other sets of lines that detect the distance to the colliding geometries

  • PrimitivePickSensor — Primitive geometries (such as a cone, sphere, or box) that detect colliding geometries
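As a rough sketch of these node types, the following MATLAB fragment adds one pick sensor of each kind to an open virtual world by using the vrnode constructor, then lists the fields that a sensor exposes. The sensor node names (my_point_sensor, and so on) are hypothetical illustrations, not names from a shipped example.

    % A minimal sketch; vrcollisions ships with Simulink 3D Animation.
    w = vrworld('vrcollisions');
    open(w);

    % Create one sensor of each pick sensor type at the root of the world.
    % The node names here are hypothetical.
    pointSensor = vrnode(w, 'my_point_sensor', 'PointPickSensor');
    lineSensor  = vrnode(w, 'my_line_sensor',  'LinePickSensor');
    primSensor  = vrnode(w, 'my_prim_sensor',  'PrimitivePickSensor');

    % List the fields that the sensor node exposes.
    fields(lineSensor)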

To add a collision detection sensor, use these general steps. For an example that reflects this workflow, see the vrcollisions example described below. A programmatic MATLAB sketch of the same workflow follows these steps.

  1. In the 3D World Editor tree structure pane, select the children node of the Transform node to which you want to add a pick sensor.

  2. To create the picking geometry to use with the sensor, add a geometry node. Select Nodes > Add > Geometry and select a geometry appropriate to the type of pick sensor (for example, PointSet).

  3. Add a pick sensor node by selecting Nodes > Add > Pick Sensor Node.

  4. In the sensor node, right-click the pickingGeometry property and select USE. Specify the geometry node that you created for the sensor.

  5. Also in the sensor node, right-click the pickTarget property and select USE. Specify the target objects for which you want the sensor to detect collisions.

    Instead of specifying the picking geometry with a USE, you can define the picking geometry directly. However, directly defined geometry is invisible.

  6. Optionally, change default property values or specify other values for sensor properties. For information about the intersectionType property, see Sensor Collisions with Multiple Object Pick Targets. For descriptions of the output properties that you can access with a VR Source block, see Use Collision Detection Data in Models.
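These steps use the 3D World Editor. As a hedged, programmatic sketch of the same wiring, the following MATLAB fragment creates a picking geometry and a pick sensor and then connects them. It assumes an open world that contains nodes named robot_body and walls_obstacles (as in vrcollisions); the other node names are illustrative.

    % A minimal sketch, assuming w is the open vrworld from the
    % previous sketch (or any world that contains nodes named
    % 'robot_body' and 'walls_obstacles').
    robot = vrnode(w, 'robot_body');

    % Step 2: add a picking geometry node as a child of the robot.
    geom = vrnode(robot, 'children', 'my_line_set', 'IndexedLineSet');

    % Step 3: add the pick sensor node itself.
    sensor = vrnode(robot, 'children', 'my_sensor', 'LinePickSensor');

    % Steps 4-5: reference the picking geometry and the collision targets.
    % This sketch assumes that assigning a vrnode object to a node field
    % creates a USE reference.
    setfield(sensor, 'pickingGeometry', geom);
    setfield(sensor, 'pickTarget', vrnode(w, 'walls_obstacles'));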

Here is an example of the key nodes for defining a collision detection sensor for the robot in the vrcollisions virtual world:

  • The robot_body node has the line_set node as one of its children. The line_set node defines the picking geometry for the sensor.

  • The collision_sensor node defines the collision detection sensor for the robot. The sensor pickingGeometry property specifies the line_set node as the picking geometry, and the pickTarget property specifies the walls_obstacles node as the target for collision detection.
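You can verify this wiring from MATLAB by opening the world and reading the sensor fields, as in this short sketch:

    % Open the vrcollisions world and inspect the sensor wiring.
    w = vrworld('vrcollisions');
    open(w);
    col = vrnode(w, 'collision_sensor');

    getfield(col, 'pickingGeometry')   % expected: the line_set node
    getfield(col, 'pickTarget')        % expected: the walls_obstacles node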

Sensor Collisions with Multiple Object Pick Targets

To control how a pick sensor behaves when it collides with a pick target geometry that consists of multiple objects, use the intersectionType property. The possible values are:

  • GEOMETRY – The sensor collides with the union of the individual bounding boxes of all objects defined in the pickTarget field. In general, this setting produces more exact results.

  • BOUNDS – (default) The sensor collides with one large bounding box constructed around all objects defined in the pickTarget field.

In the vrcollisions example, the LinePickSensor has the intersectionType field set to GEOMETRY. With this setting, a sensor that is inside the colliding geometry (the room walls) does not collide with the union of the walls; a collision takes place only if the sensor rays touch one of the walls. If intersectionType is set to BOUNDS, collision detection works only for a sensor that approaches the room from the outside, because the whole room is wrapped into one large bounding box that interacts with the sensor.
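As a hedged sketch, you can read or change the intersectionType field from MATLAB to compare the two behaviors:

    % Assumes w is the open vrcollisions world from the sketch above.
    col = vrnode(w, 'collision_sensor');

    getfield(col, 'intersectionType')  % expected: 'GEOMETRY' in vrcollisions
    col.intersectionType = 'BOUNDS';   % switch to one large bounding box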

Make Picking Geometry Transparent

You can make the picking geometry used for a pick sensor invisible in the virtual world. In the Material node of the picking geometry, set the transparency property to 1. For example, in the vrcollisions virtual world, for the collision_sensor picking geometry node (line_set), in its Material node, change the transparency property to 1.
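A minimal MATLAB sketch of this change follows. The node name line_set_material is a hypothetical DEF name for the Material node of the picking geometry; use the name from your own world.

    % 'line_set_material' is a hypothetical node name, shown for illustration.
    mat = vrnode(w, 'line_set_material');
    mat.transparency = 1;   % fully transparent picking geometry
    vrdrawnow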

Avoid Impending Collisions

To avoid an impending collision (before the collision actually occurs), you can use the pickedRange output property of a LinePickSensor. As part of the line set picking geometry, define one or more long lines whose length reflects the amount of advance notice that you want for an impending collision. You can make those lines transparent. Then create logic based on the pickedRange value, as in the sketch below.
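For example, logic along these lines could throttle a robot before contact. This is a sketch under stated assumptions: col is a LinePickSensor node whose transparent lines extend well beyond the robot body, and warnDist and speed are illustrative variables in your own control code.

    % A minimal sketch; warnDist and speed are illustrative values.
    warnDist = 0.5;                      % advance-notice distance
    if col.isActive && min(col.pickedRange) < warnDist
        speed = 0.5 * speed;             % slow down before impact
    end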

Use Collision Detection Data in Models

The isActive output property of a sensor becomes true when a collision occurs. To associate a model with the virtual reality scene, you can use a VR Source block to read the sensor isActive property and the current position of the object for which the sensor is defined. You can use a VR Sink block to define the behavior of the virtual world object, such as its position, rotation, or color.

For example, the VR Source block in the top left of the Simulink model gets data from the associated virtual world.

In the model, select the VR Source block, and then in the Simulink 3D Animation viewer, select Simulation > Block Parameters. The Block Parameters dialog box shows the key selected properties.

For the LinePickSensor, you can select these output properties for a VR Source block:

  • enabled – Enables node operation.

    Note

    The enabled property is the only property that you can select with a VR Sink block.

  • isActive – Indicates when the intersecting object is picked by the picking geometry.

  • pickedPoint – Displays the points on the surface of the underlying pickingGeometry that are picked (in the local coordinate system).

  • pickedRange – Indicates range readings from the picking. For details, see Avoid Impending Collisions.

For a PointPickSensor, you can select the enabled, isActive, and pickedPoint outputs. For a PrimitivePickSensor, you can select the enabled and isActive outputs.

The Robot Control subsystem block includes the logic to change the color and position of the robot.

Based on the Robot Control subsystem output, the VR Sink block updates the virtual world to reflect the color and position of the robot.
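The contents of the subsystem are not listed here. As a hedged illustration only, comparable logic could live in a MATLAB Function block inside such a subsystem; the function name and signals are assumptions, not the shipped implementation:

    function [color, translation] = robotControl(isActive, translation)
    % Illustrative logic only: stop and turn red on collision,
    % otherwise keep moving and stay green.
    step = [0.05 0 0];
    if isActive
        color = [1 0 0];                     % red on collision
    else
        color = [0 1 0];                     % assumed default color
        translation = translation + step;    % keep moving
    end
    end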

Tip

Consider adjusting the sample time of the blocks to increase the precision of collision detection.

Use Collision Detection in MATLAB

You can use collision detection in a virtual world that you define in MATLAB. This example is based on the vrcollisions virtual world. It does not use a Simulink model.

  1. Open and view the vrcollisions virtual world.

    w = vrworld('vrcollisions');
    open(w);
    fig = view(w, '-internal');
    
  2. Get the collision sensor, robot, and robot color nodes of the virtual world.

    col = vrnode(w,'collision_sensor')
    rob = vrnode(w,'robot')
    color = vrnode(w,'robot_color')
    
  3. Move the robot, based on collision detection (when the isActive property is true). At the default position, no collision is detected.

    col.isActive

    for ii = 1:30

        % Move the robot to the right.
        rob.translation = rob.translation + [0.05 0 0];
        vrdrawnow

        % If a collision is detected, change the robot color to red.
        if col.isActive
            color.diffuseColor = [1 0 0];
        end
    end
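  4. When you finish experimenting, you can close the viewer and the world object.

    % Clean up the viewer figure and the world object.
    close(fig);
    close(w);
    delete(w);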
    

Use Collision Detection Data in Virtual Worlds

You can use collision detection to manipulate virtual world objects, independently of a Simulink model or a virtual world object in MATLAB.

The Differential Wheeled Robot in a Maze virtual world defines two green IndexedLineSet pick sensors (sensor1 and sensor2) for the purple robot (the robot node).

The VRML code includes ROUTE statements for each of the pick sensors.

The ROUTE statements connect each sensor to logic defined in a Script node called changecolor.
