Stephane Redon (1), Young J. Kim (2), Ming C. Lin (1), Dinesh Manocha (1), and Jim Templeman (3)
(1) Department of Computer Science
(2) EWHA University, Korea
(3) Naval Research Laboratory
Figure: Benefits of our continuous collision detection algorithm over discrete methods. The left image shows two successive configurations of the avatar during a fast arm motion; no collision is detected at these discrete time steps. The middle image shows the interpolating path used to detect a collision between the two configurations. The right image shows the backtracking step used to compute the time of collision and the avatar's position at that time; it highlights the time interval over which there is no collision with the virtual environment.
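The backtracking step described in the caption can be illustrated schematically. The sketch below (an assumption for illustration only; the paper derives the time of collision from the interpolating motion itself, not from repeated discrete queries) bisects the normalized time interval [0, 1] for the earliest parameter at which a hypothetical per-configuration collision query `collides(t)` reports contact:

```python
def backtrack_collision_time(collides, tol=1e-4):
    """Bisect on [0, 1] for the earliest parameter at which the
    interpolated configuration collides, assuming the avatar is
    collision-free at t = 0 and colliding at t = 1.
    `collides(t)` is a hypothetical per-configuration query."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if collides(mid):
            hi = mid      # collision occurs at or before mid
        else:
            lo = mid      # still free at mid; collision is later
    return hi

# Toy query: a point moving from x = 0 to x = 4 hits a wall at x = 3,
# so the collision time along the interpolated path is t = 0.75.
t_hit = backtrack_collision_time(lambda t: 4.0 * t >= 3.0)
```

Note that plain bisection assumes the trajectory crosses into contact once; it can miss grazing collisions on non-monotone paths, which is one reason a continuous formulation over the whole in-between motion is preferable.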
We present a fast algorithm for continuous collision detection between
a moving avatar and its surrounding virtual environment. We model the
avatar as an articulated body using line-skeletons with constant offsets
and the virtual environment as a collection of polygonized objects. Given
the position and orientation of the avatar at discrete time steps, we use
an arbitrary in-between motion to interpolate the path for each link
between discrete instances. We bound the swept-space of each link using a
swept volume (SV) and compute a bounding volume hierarchy to cull away
links that are not in close proximity to the objects in the virtual
environment. We generate the SVs of the remaining links and use them to
check for possible interferences and estimate the time of collision
between the surface of the SV and the objects in the virtual environment.
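As a minimal illustration of the interpolation and culling steps just described (not the paper's actual method, which supports arbitrary in-between motions and exact swept volumes of line-skeletons), one can linearly interpolate a link endpoint between two sampled configurations, bound the swept path by an axis-aligned box inflated by the link's constant offset, and use box overlap to cull objects that cannot possibly collide during the interval. All names below are hypothetical:

```python
def lerp(p0, p1, t):
    """Linearly interpolated position at parameter t in [0, 1]."""
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

def swept_aabb(p0, p1, offset):
    """Conservative AABB of a point sweeping from p0 to p1,
    inflated by the link's constant offset (its radius)."""
    lo = tuple(min(a, b) - offset for a, b in zip(p0, p1))
    hi = tuple(max(a, b) + offset for a, b in zip(p0, p1))
    return lo, hi

def aabb_overlap(box_a, box_b):
    """True if two AABBs intersect; used to cull distant objects."""
    (alo, ahi), (blo, bhi) = box_a, box_b
    return all(al <= bh and bl <= ah
               for al, ah, bl, bh in zip(alo, ahi, blo, bhi))

# Two sampled configurations of a link endpoint and a static obstacle:
p_start, p_end = (0.0, 0.0, 0.0), (4.0, 0.0, 0.0)
link_box = swept_aabb(p_start, p_end, offset=0.5)
obstacle = ((2.0, -0.2, -0.2), (3.0, 0.2, 0.2))

if aabb_overlap(link_box, obstacle):
    # Only for links surviving this cull would the exact swept
    # surface be generated and tested for interference.
    pass
```

The conservative box may report overlaps that the exact swept surface later rules out, but it never misses a true collision, which is what makes it safe for culling.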
Furthermore, we use graphics hardware to perform the collision queries on
the dynamically generated swept surfaces. Our overall algorithm requires
no precomputation and is applicable to general articulated bodies. We have
implemented it on a 2.4 GHz Pentium IV PC with an NVIDIA GeForce FX 5800
graphics card and applied it to an avatar with 16 links, moving in a virtual
environment composed of hundreds of thousands of polygons. Our prototype
system is able to detect all contacts between the moving avatar and the
environment in 10 - 30 milliseconds. Images The benchmark environment and the
avatar model used to test the performance of our algorithm
|