Extended Specifications and Test Data Sets for Data Level Comparisons of Direct Volume Rendering Algorithms
Kim, Kwansik; Wittenbrink, Craig M.; Pang, Alex
HPL-2000-40R1
Keyword(s): metrics; opacity; gradient; surface classification; volume visualization; image quality; uncertainty visualization
Abstract: Direct volume rendering (DVR) algorithms create visualizations without generating intermediate geometry, yet they produce countless variations in the resulting images. Comparative studies are therefore essential for objective interpretation. Even though image and data level comparison metrics are available, comparing results remains difficult because of the numerous rendering parameters and algorithm specifications involved. Most previous comparison methods use information from the final rendered images only. We overcome the limitations of image level comparisons with a data level approach that uses intermediate rendering information. We provide a list of rendering parameters and algorithm specifications to guide comparison studies, extending Williams and Uselton's rendering parameter list with algorithm specification items, and we provide guidance on how to compare algorithms. Real data are often too complex for studying algorithm variations with confidence, and most analytic test data sets reported in the literature exercise only a limited set of DVR algorithm features. We provide simple, easily reproducible test data sets, a checkerboard and a ramp, that expose differences across a wide range of algorithm variations. Combined with data level metrics, our test data sets make detailed comparison studies possible. A number of examples illustrate how to use these tools. Notes: To be published in the IEEE Transactions on Visualization and Computer Graphics
39 Pages
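The abstract describes two simple, reproducible test volumes, a checkerboard and a ramp. A minimal sketch of how such volumes might be generated follows; the function names, parameters, and block/axis conventions are illustrative assumptions, not taken from the paper itself.

```python
# Hedged sketch of the two test data sets named in the abstract:
# a 3D checkerboard and a linear ramp. All names and parameters
# here are illustrative, not the paper's actual specification.

def checkerboard(n=8, block=2, lo=0.0, hi=1.0):
    """n x n x n volume of alternating blocks of side `block`.

    A voxel is `hi` when the parity of its block coordinates is odd,
    `lo` otherwise; indexing is vol[z][y][x].
    """
    return [[[hi if (x // block + y // block + z // block) % 2 else lo
              for x in range(n)]
             for y in range(n)]
            for z in range(n)]

def ramp(n=8, axis=2, lo=0.0, hi=1.0):
    """n x n x n volume increasing linearly from `lo` to `hi`
    along one axis (0 = x, 1 = y, 2 = z); indexing is vol[z][y][x]."""
    def value(x, y, z):
        t = (x, y, z)[axis] / (n - 1)  # normalized position on the axis
        return lo + t * (hi - lo)
    return [[[value(x, y, z) for x in range(n)]
             for y in range(n)]
            for z in range(n)]

cb = checkerboard(4, 1)   # unit blocks: adjacent voxels always differ
rp = ramp(4)              # linear gradient along z
```

Data sets like these are useful precisely because their exact values and gradients are known analytically, so deviations in an algorithm's intermediate rendering results can be attributed to the algorithm rather than to the data.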