Adaptive Grouping and Subdivision for Simulating Hair Dynamics
Kelly Ward | Ming C. Lin
wardk@cs.unc.edu | lin@cs.unc.edu
Abstract:
We present a novel approach for adaptively grouping and subdividing hair using discrete level-of-detail (LOD) representations. The set of discrete LODs includes hair strands, clusters, and strips, whose dynamic behavior is controlled by a base skeleton. During precomputation, the base skeletons are subdivided and grouped into clustering hierarchies using a quad-tree data structure. At run time, our algorithm traverses the hierarchy to create continuous LODs on the fly and chooses the appropriate discrete and continuous hair LOD representations based on the motion, the visibility, and the viewing distance of the hair from the viewer. Our collision detection for hair represented by the proposed LODs relies on a family of "swept sphere volumes" for fast and accurate intersection computations. We also use an implicit integration method to achieve simulation stability while allowing large time steps. Together, these approaches for hair simulation and collision detection offer the flexibility to balance overall performance against the visual quality of the animated hair. Furthermore, our approach is capable of modeling hair of various styles, lengths, and motions.
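The run-time LOD selection described above can be sketched as a simple policy: coarser representations (strips) when the hair is distant or not visible, finer ones (individual strands) when it is close to the viewer or moving vigorously. The sketch below is illustrative only; the names, thresholds, and exact decision rules are assumptions, not the paper's actual criteria.

```python
# Hypothetical sketch of run-time hair LOD selection; thresholds
# (near, far, fast_motion) and the decision order are illustrative
# assumptions, not taken from the paper.
from enum import Enum

class HairLOD(Enum):
    STRAND = 0   # finest: individual hair strands
    CLUSTER = 1  # intermediate: grouped clusters of hair
    STRIP = 2    # coarsest: flat hair strips

def choose_lod(distance, visible, motion_magnitude,
               near=2.0, far=10.0, fast_motion=5.0):
    """Pick a discrete hair LOD from viewing distance, visibility,
    and motion, coarsening when the hair is far, hidden, or still."""
    if not visible:
        return HairLOD.STRIP            # cheapest choice when off-screen
    if distance < near or motion_magnitude > fast_motion:
        return HairLOD.STRAND           # close-up or vigorous motion
    if distance < far:
        return HairLOD.CLUSTER
    return HairLOD.STRIP
```

In the full system the continuous LODs would interpolate between these discrete levels as the hierarchy is traversed, rather than switching abruptly.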
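A swept sphere volume bounds geometry with a sphere swept along a point, line, or rectangle, so overlap tests reduce to a distance query between the core primitives against the summed radii. As a minimal sketch (names and signatures are my own, not the paper's), a sphere versus a line-swept sphere (capsule) reduces to a point-to-segment distance:

```python
# Minimal sketch of a swept-sphere-volume overlap test: a sphere
# (point swept sphere) against a line swept sphere (capsule).
# Function names and the interface are illustrative assumptions.
import math

def point_segment_dist(p, a, b):
    """Closest distance from point p to the segment from a to b."""
    abv = tuple(bi - ai for ai, bi in zip(a, b))
    apv = tuple(pi - ai for ai, pi in zip(a, p))
    denom = sum(c * c for c in abv)
    # Parameter of the closest point on the segment, clamped to [0, 1]
    t = 0.0 if denom == 0 else max(
        0.0, min(1.0, sum(u * v for u, v in zip(apv, abv)) / denom))
    closest = tuple(ai + t * ci for ai, ci in zip(a, abv))
    return math.dist(p, closest)

def sphere_lss_overlap(center, r_sphere, a, b, r_lss):
    """Sphere vs. line swept sphere: the volumes overlap iff the
    distance between their cores is at most the summed radii."""
    return point_segment_dist(center, a, b) <= r_sphere + r_lss
```

Capsule-versus-capsule tests follow the same pattern with a segment-to-segment distance query, which is what makes this family of bounding volumes well suited to skeleton-driven hair clusters.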
Publications
[1] Adaptive Grouping and Subdivision for Simulating Hair Dynamics. Kelly Ward and Ming C. Lin. Proc. of Pacific Graphics, 2003. (PDF)
[2] Modeling Hair Using Level-of-Detail Representations. Kelly Ward, Ming C. Lin, Joohi Lee, Susan Fisher, and Dean Macri. Proc. of Computer Animation and Social Agents, 2003. Project Website (PDF)
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Copyright 2003.