| Title | Applications of spline manifolds to problems in modeling, rendering, and analysis |
| Publication Type | dissertation |
| School or College | College of Engineering |
| Department | Computing |
| Author | Martin, William Michael |
| Date | 2013-05 |
| Description | While boundary representations, such as nonuniform rational B-spline (NURBS) surfaces, have traditionally served the needs of the modeling community well, they have not seen widespread adoption across the wider engineering disciplines. There is a common perception that NURBS are slow to evaluate and complex to implement. Whereas computer-aided design commonly deals with surfaces, the engineering community must deal with materials that have thickness. Traditional visualization techniques have avoided NURBS, and there has been little cross-talk between the rich spline approximation community and the larger engineering field. Recently there has been a strong desire to marry the modeling and analysis phases of the iterative design cycle, be it in car design, turbulent flow simulation around an airfoil, or lighting design. Research has demonstrated that employing a single representation throughout the cycle has key advantages. Furthermore, novel manufacturing techniques employing heterogeneous materials require the introduction of volumetric modeling representations. There is little question that fields such as scientific visualization and mechanical engineering could benefit from the powerful approximation properties of splines. In this dissertation, we remove several hurdles to the application of NURBS to problems in engineering and demonstrate how their unique properties can be leveraged to solve problems of interest. |
| Type | Text |
| Publisher | University of Utah |
| Subject | computer-aided design; heterogeneous design; high-dimensional splines; radiance; spline approximation; volumetric splines |
| Dissertation Name | Doctor of Philosophy |
| Language | eng |
| Rights Management | © William Michael Martin |
| Format | application/pdf |
| Format Medium | application/pdf |
| Format Extent | 3,522,292 bytes |
| ARK | ark:/87278/s6pn9mhv |
| Setname | ir_etd |
| ID | 195915 |
| OCR Text |

APPLICATIONS OF SPLINE MANIFOLDS TO PROBLEMS IN MODELING, RENDERING, AND ANALYSIS

by William Michael Martin

A dissertation submitted to the faculty of The University of Utah in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science.

School of Computing
The University of Utah
May 2013

Copyright © William Michael Martin 2013. All Rights Reserved.

THE UNIVERSITY OF UTAH GRADUATE SCHOOL
STATEMENT OF DISSERTATION APPROVAL

The dissertation of William Michael Martin has been approved by the following supervisory committee members:

Elaine Cohen, Chair (date approved: September 24, 2012)
Gershon Elber, Member (date approved: September 24, 2012)
Richard F. Riesenfeld, Member (date approved: September 24, 2012)
Peter Shirley, Member (date approved: September 24, 2012)
Ross Whitaker, Member (date approved: September 24, 2012)

and by Alan Davis, Chair of The School of Computing, and by Donna M. White, Interim Dean of the Graduate School.

ABSTRACT

While boundary representations, such as nonuniform rational B-spline (NURBS) surfaces, have traditionally served the needs of the modeling community well, they have not seen widespread adoption across the wider engineering disciplines. There is a common perception that NURBS are slow to evaluate and complex to implement. Whereas computer-aided design commonly deals with surfaces, the engineering community must deal with materials that have thickness. Traditional visualization techniques have avoided NURBS, and there has been little cross-talk between the rich spline approximation community and the larger engineering field. Recently there has been a strong desire to marry the modeling and analysis phases of the iterative design cycle, be it in car design, turbulent flow simulation around an airfoil, or lighting design. Research has demonstrated that employing a single representation throughout the cycle has key advantages.
Furthermore, novel manufacturing techniques employing heterogeneous materials require the introduction of volumetric modeling representations. There is little question that fields such as scientific visualization and mechanical engineering could benefit from the powerful approximation properties of splines. In this dissertation, we remove several hurdles to the application of NURBS to problems in engineering and demonstrate how their unique properties can be leveraged to solve problems of interest. To my parents Michael and Julia and my beloved Graham. CONTENTS ABSTRACT.................................................................................................................................... iii LIST OF F IG U R E S ....................................................................................................................... viii LIST OF TA B L E S ......................................................................................................................... xiv ACKNOWLEDGMENTS............................................................................................................. xv CHAPTERS 1. INTRODUCTION AND A IM S .......................................................................................... 1 1.1 Organization.................................................................................................................... 2 2. BACKGROUND.................................................................................................................... 4 2.1 Motivating Applications ................................................................................................. 4 2.1.1 Scientific and Engineering Analysis ................................................................... 4 2.1.2 Novel Materials ..................................................................................................... 6 2.1.2.1 Functionally Gradient Material (FGM) Manufacturing......................... 
6 2.1.2.2 O p tic s ............................................................................................................ 7 2.1.3 Medical Visualization............................................................................................ 9 2.1.4 Realistic Rendering................................................................................................. 9 2.2 Related Work ................................................................................................................... 12 2.2.1 Ray Tracing NURBS Surfaces/Volumes ............................................................ 12 2.2.2 Volumetric Representations and Techniques ..................................................... 14 2.2.3 Volume Visualization ............................................................................................ 17 2.2.4 Medial Axis Transforms........................................................................................ 18 2.2.5 Surface / Volume Completion.............................................................................. 20 2.2.6 Global Illumination................................................................................................. 21 3. RAY TRACING TRIMMED NURBS SURFACES........................................................ 24 3.1 Overview ............................................................................................................................ 24 3.2 Introduction....................................................................................................................... 24 3.3 Ray Tracing NURBS........................................................................................................ 25 3.3.1 Flattening................................................................................................................. 28 3.3.2 Bounding Volume Hierarchy................................................................................. 
33 3.3.3 Root Finding............................................................................................................ 35 3.3.4 Evaluation ............................................................................................................... 38 3.3.5 Partial Refinement ................................................................................................. 41 3.4 Trimming Curves ............................................................................................................ 44 3.4.1 Building a Hierarchy............................................................................................... 45 3.4.2 Ray Tracing Trimmed NURBS ............................................................................ 47 3.5 Results ............................................................................................................................... 47 4. REPRESENTATION OF VOLUMETRIC DATA USING TRIVARIATE SPLINES 52 4.1 Overview........................................................................................................................... 52 4.2 Introduction....................................................................................................................... 52 4.3 Background....................................................................................................................... 54 4.3.1 Trivariate Volumes....................................................................................................54 4.3.2 Aggregate Data Types ............................................................................................ ...55 4.4 Modeling and Data Fitting ..................................................................................................55 4.5 Evaluation .......................................................................................................................... 
...59 4.6 Visualization Techniques ....................................................................................................63 4.6.1 Isosurfacing and Slicing ...........................................................................................63 4.6.2 Direct Volume Rendering ..................................................................................... ...65 4.6.3 Optical Path Tracing .............................................................................................. ...71 4.6.4 Summary of Ray Tracing ..................................................................................... ...72 4.7 Conclusion and Future Work .............................................................................................73 5. SURFACE COMPLETION FROM AN IRREGULAR BOUNDARY CURVE . . . 75 5.1 Overview........................................................................................................................... 75 5.2 Introduction....................................................................................................................... 75 5.3 The Concentric Parameterization ................................................................................... 78 5.3.1 Polygonal Boundary .............................................................................................. 78 5.3.2 Generalization to Higher Order Curves .............................................................. 80 5.4 Conclusions....................................................................................................................... 84 6. VOLUME COMPLETION FROM A BOUNDARY REPRESENTATION............ 86 6.1 Overview........................................................................................................................... 86 6.2 Introduction....................................................................................................................... 
86 6.3 Potential-Based Skeletonization..................................................................................... 91 6.3.1 Potential Field Formulation................................................................................... ...92 6.3.2 Properties of the Potential Skeleton..................................................................... ...93 6.3.3 Improving Evaluation Sp eed ....................................................................................96 6.3.4 Determination of the Skeleton.............................................................................. ...97 6.3.4.1 Determination of the s in k s .......................................................................... 98 6.3.4.2 Locating the saddles ..................................................................................... 98 6.3.4.3 Tracing the saddle-sink paths ..................................................................... 99 6.3.5 Saddle-sink Connectivity Graph .......................................................................... 99 6.3.6 Application to Curves/Surfaces............................................................................ 101 6.4 Surface Completion of Planar Curves............................................................................ 101 6.5 Surface Completion of Space Curves............................................................................ 103 6.6 Decoupling the Skeletonization Operator from the D riv e r.................................................................................................................103 6.7 Harmonic A n a ly sis..........................................................................................................103 6.8 Volumetric Completion of a B-Spline Boundary Representation ................................................................................................................... 
108 6.9 Conditions for Success ................................................................................................... 115 6.10 Results ...............................................................................................................................117 6.11 Parametric Stretch............................................................................................................ 117 6.12 Conclusions and Future W o rk ........................................................................................119 vi 7. SPLINES FOR GLOBAL ILLUMINATION................................................................. 120 7.1 Overview........................................................................................................................... 120 7.2 Our Approach................................................................................................................... 120 7.3 Summary of Contributions...............................................................................................121 7.4 Mathematical Problem Formulation.............................................................................. 122 7.5 Radiance Integrals............................................................................................................ 123 7.6 A Review of B-Spline Approximation..........................................................................125 7.6.1 B-Spline Approximation........................................................................................126 7.6.2 Tensor Product B-spline Functions..................................................................... 126 7.6.3 Evaluation of Tensor Product B-Spline Functions............................................127 7.7 Surface Radiance Textures ............................................................................................... 
128 7.8 Lambertian Surfaces ........................................................................................................ 128 7.8.1 Nonuniform Domains; Trims .............................................................................. 129 7.9 View-Dependent Radiance.............................................................................................. 129 7.9.1 Mapping to the S p h e re ..........................................................................................129 7.10 Approximation .................................................................................................................131 7.11 Global Illumination..........................................................................................................132 7.12 A Gathering Algorithm................................................................................................... 134 7.13 Transmission.....................................................................................................................135 7.14 Hierarchical Radiance Textures..................................................................................... 135 7.15 Implementation.................................................................................................................140 7.16 Results ...............................................................................................................................141 7.17 Representation and Global Illumination....................................................................... 144 7.18 Performance.................................................................................................................... 144 7.19 Conclusion and Future W o rk ..........................................................................................146 8. CONCLUSION...................................................................................................................... 147 APPENDICES A. 
PA PER S .................................................................................................................................. 148 B. SINGULAR JACOBIANS AND THE RAY INTERSECTION FUNCTION.......... 150 C. DERIVATION OF POTENTIAL EQUATIONS IN THE PLANE ........................... 152 REFERENCES .............................................................................................................................. 162 vii LIST OF FIGURES 2.1 Diagram summarizing the three-dimensional printing technology (courtesy of MIT 3D Printing Lab - h t t p : / /w e b .m i t . e d u / t d p /w w w /w h a t i s 3 d p . h tm l ). . 7 TM 2.2 Examples of commercially available Gradium GRIN lenses (courtesy of Light- Path Technologies - h t t p : / /www. l i g h t p a t h . com)............................................ 8 2.3 Light travels along a curved path in a medium with graded refractive index. This property is used in multi-mode optical fiber to reduce 'modal dispersion.' Image courtesy of Marc Achermann, Lucerne University of Applied Sciences and Arts, 6048 Horw, Switzerland....................................................................................................... 8 2.4 An MRI machine................................................................................................................... 9 2.5 Visualization of CT data from the National Library of Medicine's Visible Human Project® (image courtesy of Steven Parker, University of Utah).................................. 10 2.6 Medial axis of a planar figure.............................................................................................. 19 3.1 The basic method we use to find the intersection of a ray and a parametric object shown in a 2D example. Left: The ray is tested against a series of axis-aligned bounding boxes. Right: For each box hit, an initial guess is generated in the parametric interval the box bounds. 
Root-finding is then iteratively applied until a convergence or a divergence criterion is met.................................................................... 25 3.2 A ray formulated as the intersection of two planes.......................................................... 27 3.3 Illustration of the chord height tolerance heuristic........................................................... 29 3.4 Convergence of the bounding volume hierarchy under 4-to-1 subdivision. The top figures show the parametric intervals. The bottom figures show the spheres bounding the corresponding control meshes..................................................................... 34 3.5 Left: Failure to adjust tolerances may result in surface acne. Right: Properly adjusted tolerances solve the problem. Black regions result from truncated ray depth. 38 3.6 Original mesh and refined mesh which results from Bezier subdivision...................... 42 3.7 Graph showing how the factor y ^',o propagates through the recurrence....................... 43 3.8 Invalid trimming curves: a curve which is not closed, curves which cross, and curves with conflicting orientation..................................................................................... 45 3.9 A set of trimming curves and the resulting hierarchy. ................................................... 45 3.10 Algorithm for adding a trimming curve to the trimming hierarchy............................... 46 3.11 Algorithm to determine whether a closed curve in paramteric space is trimmed away by the trimming hierarchy.......................................................................................... 46 3.12 Algorithm for determining whether a ray-surface intersection should be trimmed (reported as a miss)............................................................................................................... 47 3.13 A scene containing NURBS primitives. 
All of the objects on the table are spline models which have been ray traced using the method presented in this chapter......... 48 3.14 Ray traced Bezier surfaces from the interactive ray tracing paper by Parker et al. [1]. Reprinted with permission. © Copyright 1999 by ACM, Inc. Definitive version: h t t p : / / d o i . acm. o r g /1 0 .1 1 4 5 /3 0 0 5 2 3 .3 0 0 5 3 7 ......................... 49 3.15 Teapot scene from different viewpoints............................................................................. 49 3.16 Rendering of a NURBS scene featuring left) a metallic goblet, center) a bump-mapped glass, and right) a scratched metallic tabletop.................................................... 49 3.17 Mechanical parts produced by the Alpha_l [2] modeling system (crank, CranklA, and allblade).......................................................................................................................... 50 3.18 A commercial headlight rendered using our system........................................................ 50 4.1 Building a trivariate using simple modeling operators. We implemented these modeling operators in Alpha_l [2] as a straightforward extension of the surface-based operators. This is possible because of the tensor-product nature of the primitives............................................................................................................................... 56 4.2 A swept cylindrical solid build up from primitives.............................................................57 4.3 Design example using aggregate objects........................................................................... 
...58 4.4 Graph showing how the factor 7^/ 0 propagates through the recurrence..........................63 4.5 Planar cut..................................................................................................................................64 4.6 Visualization of subdivision-based isosurface extraction...................................................66 4.7 Visualization of subdivision-based planar cut extraction................................................ ...66 4.8 Direct volume rendering (left) compared with optical path tracing (right).................. ...67 4.9 A bounding cone................................................................................................................... ...68 4.10 A sequence of progressively sharper isosurfaces extracted using direct volume rendering................................................................................................................................ 70 4.11 Snell's law............................................................................................................................. 71 4.12 Lens with constant index of refraction (left) and varying index of refraction (right). 72 4.13 Algorithm for tracing rays through a trivariate volume................................................... 73 4.14 Demonstration of boundary singularities. A rectangle is revolved about a vertical axis to create a solid (left). The resulting volume (right) has two coincident internal faces (shown in gray) and one singular (one-dimensional) parametric boundary along the axis......................................................................................................................... 74 5.1 Motivating examples for our work..................................................................................... 76 5.2 Moving curves and a problem with o ffs e ts ..................................................................... 
77 5.3 The inspiration for our parameterization.......................................................................... 77 ix 5.4 Weaknesses in a direct medial mapping............................................................................ .. 78 5.5 Nomenclature review; regions formed by the medial axis.............................................. .. 78 5.6 Parameterization of the rectangle which results from our mapping.............................. .. 80 5.7 Basic concentric completion algorithm................................................................................ 81 5.8 The concentric completion algorithm, illustrated............................................................... 82 5.8 (continued).............................................................................................................................. 83 5.9 An example where the basic algorithm fails (degrees 1 and 3)...................................... .. 85 5.10 Moving the coarse concentric axis to its refined position............................................... .. 85 5.11 Our algorithm applied to a simple convex curve (degrees 1 and 3)................................. 85 5.12 A more complicated example for degrees 1 and 3........................................................... .. 85 5.13 A sharp polygon for degrees 1 and 3.................................................................................... 85 6.1 The basic idea behind the concentric parameterization................................................... .. 87 6.2 Uniform offsets may introduce crossings.......................................................................... .. 87 6.3 Comparing the medial axis to the potential-based skeleton..............................................88 6.4 A parameterization mismatch resulting from contracting the boundary onto a medial surface. Shown are the six boundary surfaces of a box. The top and bottom surfaces represent identical geometries. 
However, the control points on the top are not uniformly spaced, resulting in isoparametric lines that are not parallel. Hence, the parameterizations do not match when projected on the medial axis sheet............. 89 6.5 A planar figure with its medial axis, compared with the potential-based skeleton of degree 1, 2, 3 , and 9. Notice the progressive convergence to the medial axis shape. 90 6.6 Configuration for computing the potential due to a polygonal facet............................. 93 6.7 Uniform speed offsets generated inward from the boundary, and the swept surface that results from their blend................................................................................................. 94 6.8 Uniform speed offsets generated outward from the boundary, and their swept surface. 95 6.9 Sweeps are trivially generated from contours as well...................................................... 95 6.10 Inner and outer force lines generated by a potential field............................................... 96 6.11 A potential-based approach requires no tweaking to handle contours. Shown left and right are the same curves with inside and outside reversed..................................... 96 6.12 Because the gradient to the potential field becomes roughly equivalent to the medial axis at higher degrees, we can use the potential (or the nth root of the potential) to determine feature size in much the same way as the medial axis. Note how the magnitude of the potential function evaluated on the skeleton can be used to gauge the size of the corresponding feature (higher values correspond to smaller features). 97 6.13 2D figures illustrating how the index theorem is used to isolate saddle points of the skeleton (see Section 6.3.4.2). The saddle in the middle of the ‘L' is quickly isolated using subdivision. The triangles evaluated to achieve isolation are shown. . 
100 x 6.14 A visualization of the tetrahedra generated to isolate the saddles of a 3D potential field. This 3D ‘L' has three saddle points. The three dense clusters in the figure on the right visualize the tetrahedra that are evaluated as shown in Section 6.3.4.2. . . . 100 6.15 Improvement of the concentric axis at graph endpoints to avoid ‘pinching.' ..............101 6.16 Because control polygons converge rapidly to the shape of the curve/surface they describe, so too does the potential skeleton. In the example shown, only one level of subdivision is required to obtain a reasonable skeleton for the continuous curve. . 101 6.17 Every point on the boundary has a corresponding mapping on the axis. The highlighted boundary segments span saddles. The sinks can be thought of as nodes of the graph, and they are connected through saddles, which can be thought of as graph edges. This graph can be used to reconstruct the skeleton as well as to determine the mapping regions. Once the mapping has been determined, offsets can be generated or the surface can be completed. See Chapter 5 for more details. . 102 6.18 A simple 3D shape, shown with its potential field, its extracted 1D potential skeleton, and the sequence of blended polygons that parameterized its interior................... 108 6.19 Simple configurations illustrating potential problems with mapping boundary representations onto a central axis. On the left, we demonstrate how mapping onto a curved midstructure can fail - simply moving the control vertices of the boundary to the axis does not ensure that the resulting curves trace out the same shape. This can result in gaps or overlaps in the completed surface, and the same is true for the volumetric case. 
On the right, we demonstrate that even linear boundaries can fail to match if inflection points in the skeleton are missed...................................................110 6.20 We require that patches map to either a point or a segment on the skeletal midstructure. In 2D, this results in either a "triangular" or "rectangular" completion surface, respectively. In 3D, surface patches will generate "pyramid" or "wedge" volume completions, respectively. ...................................................................................110 6.21 Demonstration of our algorithm for mapping the innermost offset onto the skeletal axis c. For each row / column of the innermost offset, we will compute a skeletal curve to which it maps. On the left, we consider the jth span of the ith row of the control polygon for a patch in the innermost offset. The function m is our driver function that maps the boundary onto the skeleton. Given the mapping of the jth span onto c, we apply Dijkstra's algorithm to determine the skeletal vertices ak that fall between m(Pi;j ) and m(Pi;j +1). These vertices are added to the control polygon Ci of the skeletal curve for row i. We assign knots corresponding to m(Pi,j) and m(Pi,j+1) using the j and j + 1 nodal values of the innermost offset surface's knot vector, t u. We assign knots to the ak using linear interpolation of the aforementioned nodal values based on the geodesic distance traveled along c, as detailed in Algorithm 8. On the right, we show the result of refining the innermost offset surface to make it compatible with the skeletal curve we built on the left. Postrefinement, we can guarantee that the mapping of the refined jth span onto the skeleton will contain none of the original skeletal vertices, except at the end points (recall, we wish to avoid the situation in Figure 6.19). 
By building a skeletal curve for every row of the innermost offset, making these compatible and blending them into a surface, and then repeating for the columns, we ensure that for the resulting blend, no patches violate the mapping configurations indicated in Figure 6.20.
6.22 Our contraction-based technique, applied to a mechanical part. First, the skeleton of the part is computed, based on its control polygon. Depending on the quality desired, refinement may be used to develop a closer approximation to the surface before skeletonization. This first refinement is merely for the purpose of computing a skeleton, and may be discarded prior to the next step. Next, the contraction paths are computed from the driver function. Based on the user-selected epsilon, refinement may be required so that the innermost offset can more closely approximate the shape of the midstructure. This is an essential tradeoff of our method. (Model courtesy of David Johnson, University of Utah.)
7.1 Geometry for surface radiance and light transport. The outgoing surface radiance along a ray L_R is a function of both surface position as well as the angle the ray makes with the local coordinate axes. L_R can be computed by integrating incoming radiance over the hemisphere Ω above the surface, or from a surface integral over all emitting surfaces E.
7.2 Tensor product knots and knot lines. (Left) Uniform knot spacing. For nonuniform spacing (middle), the knots can be chosen independently in each dimension, but not arbitrarily over the domain. In the case of a nonrectangular domain (right), evaluation nodes can be moved inside the domain.
7.3 Angular parameters on the hemisphere. The standard θ, φ spherical parameterization results in poor knot spacing (left).
Our trimmed square-to-sphere mapping is much more uniform (middle). Nonuniform knot spacing can be used with our mapping, to better approximate a highlight, for example (right).
7.4 Functions to render a surface point using a B-spline radiance function. The first is more suited to hardware rasterization; the second, to ray tracing.
7.5 The global illumination gathering algorithm, and the associated data structures.
7.6 A demonstration of glossy interreflections (ray traced using the B-spline shader).
7.7 An anisotropic sphere, a transmissive rectangle, and a glass with a Phong-like transmission (the images are ray traced, using the B-spline shader).
7.8 2D B-splines for a diffuse surface with varying degree and knot density. Note the degree 1 case reduces to linear interpolation. The tensor product nature is most noticeable on shadow edges diagonal to the knot lines.
7.9 A comparison of tensor product and hierarchical splines. To meet the same error metric (< 1 intensity level change under refined gather), the uniform subdivision tensor product patch (left) requires 264,196 coefficients. The uniformly refined hierarchical representation requires only 17,781 coefficients. Furthermore, the gather step is roughly an order of magnitude faster for the H-spline. The bottom figure shows knot lines for each.
7.10 OpenGL renderings of a simple scene (global illumination computed offline). Each object is rendered with 128² Gouraud shaded micropolygons, with vertex colors from the B-spline shader. The framerate is about 10 Hz.
7.11 Indirect lighting in a mug and a cup.
The blue mug radiance B-spline has 10 knots in each angular parameter and 41 in the spatial parameters. The green cup exhibiting a caustic has 17 knots in each parameter. The mug is modeled with only two NURBS surfaces; the cup, only one.
7.12 Our (nonhierarchical) method applied to the Cornell Box. The left images, which have untrimmed surfaces, exhibit artifacts where surfaces meet. Trimming has been applied to correct this in the middle images, with essentially no performance penalty. The right images have the knot lines shown in blue. The GI solution, with 3 indirect iterations, was coarsely computed with 15 knots in each of θ and φ for the hemispherical integration. The GI computation time was about 15 seconds on a single processor. Note: our hierarchical approach addresses the problem of representing sharp shadows. Please see Figure 7.9.
7.13 A more complex scene, with Bézier patches, NURBS, anisotropic reflection, transmission, and caustics. The source inside the lamp is in the shape of a light bulb, and there is an area source on the ceiling. The Lambertian table top has 180 x 180 coefficients to faithfully represent the shadow edges. The full GI solution is very finely sampled, with 35 x 35 knots for the integration, and required about 20 hours on a single processor. Table 7.1 gives knot vector sizes and memory statistics.
7.14 Benchmarks for 4D tensor product B-spline evaluation (single MIPS R12K, 400 MHz). The thick lines show the total computation time per evaluation, including the hemispherical mapping. The thin lines show the time of computing the B-spline basis functions. Three different degree combinations are shown, with degree in u and v first.
The horizontal labels indicate the number of knots in each parametric direction.

LIST OF TABLES

3.1 Statistics for our technique. "Light BV intersections" are generated by casting shadow rays and are treated (and measured) separately from ordinary BV intersections. "NURBS tests" gives the number of numerical NURBS surface intersections performed. "Total NURBS time" and "Avg time per NURBS" give the total and mean time spent on numerical surface intersections, respectively. "NURBS hits" denotes the number of numerical intersections which yielded a hit. "Reported hits" gives the number of successful numerical hits which were not eliminated by trimming curves or by comparison with the previous closest hit along the ray.
7.1 Details of the B-spline radiance representations used for the table scene (Figure 7.13), giving the degrees in the spatial and angular dimensions, the number of coefficients, and the total memory required for each object. These numbers are for the basic (nonhierarchical) approach with uniform knot lines.

ACKNOWLEDGMENTS

I am indebted to many people who have aided and encouraged me during the course of my graduate career. First, I would like to thank my committee, whose patience was truly tested during my rather extended tenure. I would particularly like to express my gratitude to Elaine Cohen, who has generously given of her time, knowledge, and experience in helping to form the ideas in this dissertation. Pete Shirley has also provided a wealth of advice and timely humor, along with a finely honed research barometer. Finally, I would like to thank Rich Riesenfeld, Ross Whitaker, and Gershon Elber for their insightful questions and recommendations. I have been blessed with many excellent role models from the graduate program.
Gordon Kindlmann, Steven Parker, Simon Premoze, Michael Stark, and David Weinstein have provided shining examples of how to think critically and pursue research of the highest standards. I am grateful to Amy and Bruce Gooch for their boundless creativity and enthusiasm, and their faithful friendship. Special thanks are due to Karen Feinauer in the School of Computing, who constantly encouraged me to finish this degree. I would also like to thank my many friends at Visual Influence and Numira Biosciences, who made working at a startup rewarding and a compelling distraction from my dissertation. I could not have begun this work, much less finished it, without the love and support of my parents, Mike and Julia Martin. Finally, and most especially, I would like to thank Graham Gautschi for his support, encouragement, and love that made the journey worth traveling.

CHAPTER 1
INTRODUCTION AND AIMS

Nonuniform rational B-spline (NURBS) curves and surfaces form the foundation for many geometric modeling systems. There are a host of reasons for this. First, because splines contain the space of polynomials, they are flexible enough to describe complex functions. At the same time, they are coordinate independent, meaning affine transformations do not alter qualitative shape. Further, a relatively small number of parameters are required to specify complex behavior. Shape-interpolating B-splines provide the designer a few descriptive handles (e.g., control vertices) for easy modification of surface properties. Continuity between adjacent patches is ensured by the representation, and its degree is adjustable via the knots [3]. Relative to other smooth representations, NURBS also exhibit fast evaluation due to the local extent of their basis functions. Similarly, the impact on the model of modifying the control points is localized.
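The local support and fast evaluation just noted stem from the B-spline basis functions, which are defined by the Cox-de Boor recurrence. The sketch below (function names are ours, for illustration only) evaluates a degree-2 curve; at any parameter value, only a few basis functions are nonzero, and their values sum to one, so each curve point is a convex combination of nearby control points.

```python
# A minimal sketch of B-spline curve evaluation via the Cox-de Boor
# recurrence. Names (basis, eval_curve) are illustrative.

def basis(i, p, u, knots):
    """Value of the i-th degree-p B-spline basis function at u."""
    if p == 0:
        # Half-open interval convention; the basis has local support.
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * basis(i, p - 1, u, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * basis(i + 1, p - 1, u, knots)
    return left + right

def eval_curve(ctrl, p, knots, u):
    """Point on the curve: a convex combination of control points,
    so the curve lies in the convex hull of its control polygon."""
    return tuple(
        sum(basis(i, p, u, knots) * c[d] for i, c in enumerate(ctrl))
        for d in range(len(ctrl[0]))
    )

# Degree-2 curve over an open (clamped) knot vector.
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 2.0), (3.0, 0.0)]
knots = [0, 0, 0, 1, 2, 2, 2]
print(eval_curve(ctrl, 2, knots, 1.0))
```

With the clamped knot vector, the curve interpolates its first control point at u = 0, while interior points blend only the control points whose basis functions are active there.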
The variation diminishing property of B-splines says, loosely, that a curve/surface will exhibit no more variation (oscillation) than its control polygon; that is, the B-spline function acts as a low-pass filter on its mesh. The convex hull property of B-splines allows us to intuitively isolate the location of a NURBS curve or surface in space, and the refinement property facilitates the addition of control points without loss in precision. Finally, the B-spline basis possesses many useful integral and differential properties, some of which we will leverage in Chapter 7.

Despite these attractive features, splines have not been traditionally embraced by the engineering community at large. Downstream from the modeling phase in the design cycle, approximations in representation are frequently made, be it polygonization in rendering or the introduction of tetrahedra or hexahedra in a finite element simulation. Furthermore, traditional computer-aided design targets surface representations, whereas simulation and analysis frequently require volumetric models. Recent research in a new field called isogeometric analysis has demonstrated that there can be significant advantages to leveraging the same representation throughout the design cycle [4], that is, from modeling, to simulation, to manufacture. The fundamental premise is that there is something in the original NURBS representation (or a smooth representation) that is worth preserving. With change in representation comes the potential for approximation error. And because design is a feedback loop, be it lighting design or automobile design, it is necessary to relate results back to the original model so that adjustments can be made. It all comes back to the notion that defects in a computation ought to arise from fundamental aspects of the problem, not from features introduced by the approximation.
Our goal in this work has been to develop techniques that make the native NURBS representation amenable to problems in engineering. Among the fields we target are modeling, medical visualization, manufacture, and rendering. Our key aims in this work are:

1. Developing optimized methods for NURBS evaluation.
2. Facilitating direct rendering of NURBS for visualization.
3. Extending surface-based modeling and rendering approaches to volumetric NURBS.
4. Providing methods for converting boundary representations into volumetric models for analysis.
5. Demonstrating how the approximational power of splines can be leveraged to solve a complex problem in engineering.

1.1 Organization

To this end, the dissertation is organized in the following way. In Chapter 2, we list some of the problems that have driven our work and discuss the previous research that has influenced our own. Next, because visualization is often key to problem exploration, we derive in Chapter 3 the necessary machinery for ray tracing NURBS directly. Part of this derivation yields a highly optimized evaluation algorithm. All of the details required for building a NURBS rendering system are provided, as well as indications of common pitfalls. In Chapter 4, we introduce the mathematics for volumetric NURBS. We make this representation amenable to common scientific computing and engineering tasks by providing methods for fitting them to data, modeling with them, and imbuing them with attribute data. We also develop methods for visualization and introduce a novel technique for optical path tracing. Chapters 5 and 6 provide operators for converting existing boundary representations into true surface and volumetric ones. These operators have the advantage of preserving the original boundary parameterization, a trait prescribed by the field of isogeometric analysis for many types of finite element analyses [4].
In Chapter 7, we demonstrate the approximation power of splines for capturing functions on manifolds, generalizing the attribute modeling techniques given in Chapter 4. Our example application characterizes the radiance function for a global illumination simulation using a four-dimensional (4D) spline. Finally, we close with a brief recap of our contributions in Chapter 8.

CHAPTER 2
BACKGROUND

This dissertation relies on results from several areas of computer science, mathematics, and engineering. In this chapter, we briefly summarize some applications that have motivated us to pursue this work, as well as related work in the field.

2.1 Motivating Applications

There are a number of fields that we believe could benefit from spline-based representations. We now survey some of those that have motivated the present work.

2.1.1 Scientific and Engineering Analysis

Scientific simulations often involve characterization and analysis of volumetric phenomena. Fluid dynamics simulations commonly take into account variables such as pressure, density, temperature, and velocity, which can vary continuously. Stress and fracture simulations may track force, density, stress, and deformation. In the field of molecular dynamics, isosurfaces of equal electrostatic potential provide an informative visualization. In the area of meteorology, there are a number of quantities which are critical to weather and climate prediction. Among these are temperature, wind speed, barometric pressure, pollutant density, molecular concentration, and humidity. These typically vary globally and with altitude, indicating a volumetric model. Simulations are used for daily weather forecasting as well as for predicting the incidence and behavior of such extreme weather phenomena as hurricanes and tornadoes. Meteorological variables also have a direct impact on atmospheric optics. Localized changes in temperature and density are responsible for mirages.
Due to a varying atmospheric index of refraction, light is guided along a curved path, making objects appear to be where they are not. The techniques we discuss later for lens modeling apply equally well here. Over a larger scale, similar phenomena are responsible for the twinkling of the stars, the colors in a sunset, and color shifts due to atmospheric perspective, and they have a direct impact on the quality of astronomical observations.

While splines have a wide array of advantages when applied to functional approximation, they have not historically been leveraged to model and track these scientific variables. A key reason for this has been the historical perception that they are not suitable for simulation and analysis.

In the area of engineering design, models must frequently be synthesized and then analyzed. Computer-aided design (CAD) programs are usually employed to build the model, leveraging an underlying spline- or subdivision surface-based representation. The reasons for this representation are many, and they are well documented in the literature [5]. This model must then be handed to a different team of engineers for analysis. Consider, for example, the aerodynamic analysis for a new airfoil on a plane. In some situations, such as automotive design, there are aesthetic as well as fluid dynamics factors to be considered when building a part. The analysis phase frequently involves the transformation of the smooth model into a representation that is amenable to finite element analysis (FEA). The goal of FEA is to determine how a quantity such as heat or stress varies throughout the model, as these variations can indicate the potential for failure or unwanted side effects such as turbulent flow. Hence, a smooth surface-based representation is traded for a tessellated volumetric one. However, the process of design is a feedback loop. Results from the analysis phase must be applied to the design step so that modifications can be effected.
The translation between representations can introduce errors into the feedback loop. Also, simulations can be particularly sensitive to minor deviations in domain geometry:

Recent trends taking place in engineering analysis and high-performance computing are ... demanding greater precision and tighter integration of the overall modeling-analysis process. We note that a finite element mesh is only an approximation of the CAD geometry, which we view as ‘exact.' This approximation can in many situations create errors in analytical results. The following examples may be mentioned: Shell buckling analysis is very sensitive to geometric imperfections, boundary layer phenomena are sensitive to the precise geometry of aerodynamic and hydrodynamic configurations, and sliding contact between bodies cannot be accurately represented without precise geometric descriptions. Automatic adaptive mesh refinement has not been as widely adopted in industry as one might assume from the extensive academic literature, because mesh refinement requires access to the exact geometry and thus seamless and automatic communication with CAD, which simply does not exist. Without accurate geometry and mesh adaptivity, convergence and high-precision results are impossible ([4, p. 2]).

The mesh translation process itself can be extremely costly: as much as 80% of the time spent in the analysis pipeline can be spent constructing geometry in a form that is amenable to processing [4]. This indicates that, contrary to popular opinion, meshing is far from turnkey. The recent area of isogeometric analysis has demonstrated advantages in leveraging a single NURBS-based representation in both the modeling and analysis phases of design [4]. While spline elements require more expensive evaluation than traditional basis functions, they make up for this in the quality of their results, their greater expressiveness in functional representation, and their lesser susceptibility to noise [4].
The problem remains that CAD-based modeling is a primarily surface-based endeavor. In Chapter 4, we introduce techniques for applying trivariate splines to problems in engineering. We provide methods for fitting volumetric NURBS to data and introduce modeling operators amenable to integration in CAD programs. In order to support modeling with heterogeneous materials (discussed in Section 2.1.2.1), we introduce an attribute-based model which can represent, among other things, material composition. We also introduce visualization and rendering operators to support the use of trivariate NURBS, and we conclude by modeling a GRIN lens and rendering its appearance.

In Chapters 5 and 6, we introduce methods for upgrading CAD boundary representations (B-Reps) into truly volumetric representations. One of the key aspects of our approach is that it preserves the parameterization of the original surface-based model. This means that values determined in the analysis process can be related directly back to the source model, decreasing the potential for error in translation.

2.1.2 Novel Materials

2.1.2.1 Functionally Gradient Material (FGM) Manufacturing

One of the most exciting developments in modern manufacturing has been the emergence of functionally graded materials (FGMs). FGMs are composites with the interesting property that the proportion of each constituent material can be varied continuously. Thus, for example, a turbine blade may have a steel (temperature-resistant) edge and an aluminum (lightweight) interior with a smooth blend between the two. The gradient boundary between materials improves the wear life of the part, as failures tend to occur at discrete material boundaries (typically, on the side of the softer material). With the improved failure resistance of graded materials, turbine blades can be made lighter, thereby improving their efficiency.
In fact, the concept of graded materials can be traced back to feudal Japan, where, analogously, the blades of swords possessed a soft but tough core and a hardened edge [6]. Furthermore, many natural structures, such as bamboo, plant stems, and bone, are graded and provide strength in areas of high stress [6].

There are a number of competing technologies for manufacturing using functionally gradient materials. Among them are molecular beam epitaxy, vapor deposition, three-dimensional printing (3DP), bulk and surface micromachining, and lithography. For the rapid-prototyping community, three-dimensional printing may be most familiar through analogy to stereolithography. To prepare a model for stereolithography, the model is first sliced into parallel layers of a prescribed thickness. The lithography machine then acts as a kind of inkjet printer, laying down a layer of material as specified by the bottom-most slice. Subsequent layers are laid down similarly until the part is completed. M.I.T.'s 3DP process [7] takes the inkjet analogy one step further by allowing mixtures of materials to be laid down in each layer (see Figure 2.1). Thus, in theory, gradient materials can be made from any material that can be reduced to a powder.

FGMs pose exciting possibilities for the future of manufacturing, since in theory, materials can be specifically tailored to function. This will in turn have an impact on approaches to modeling. For example, techniques such as pocketing to reduce weight in parts may become unnecessary, as heavy materials can be graded to lighter ones. There are important implications for graphics research as well. If materials can be accurately tailored, then their corresponding reflectance properties can be collected in a database, allowing for precise renderings of parts from their modeling software descriptions.
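The notion of a continuously graded mixture can be illustrated with a toy slicing computation: a linear steel-to-aluminum ramp across the thickness of a part, sampled once per printed layer. The ramp, the layer sampling, and all names below are our illustrative assumptions, not part of the 3DP process specification.

```python
# Illustrative sketch of grading two constituent materials across
# printed layers. The linear ramp and the helper names are assumptions
# for illustration only.

def blend_fraction(x, edge_start, edge_end):
    """Fraction of the 'edge' material (e.g., steel) at depth x,
    ramping from 1.0 at the edge down to 0.0 in the interior."""
    if x <= edge_start:
        return 1.0
    if x >= edge_end:
        return 0.0
    return 1.0 - (x - edge_start) / (edge_end - edge_start)

def slice_layers(thickness, layer_height, edge_start, edge_end):
    """Per-layer (steel, aluminum) mixture, one entry per printed slice."""
    layers = []
    x = 0.5 * layer_height  # sample the gradient at mid-layer
    while x < thickness:
        s = blend_fraction(x, edge_start, edge_end)
        layers.append((round(s, 3), round(1.0 - s, 3)))
        x += layer_height
    return layers

print(slice_layers(thickness=1.0, layer_height=0.25, edge_start=0.2, edge_end=0.8))
```

A real FGM description would of course carry more than two constituents and a richer gradient function; the point is only that each deposited layer receives its own mixture.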
In Chapter 4, we introduce methods for augmenting traditional modeling systems with the data structures and modeling operators required to fit and model with these materials.

2.1.2.2 Optics

As this dissertation is being written, novel lens technology is revolutionizing the optics community. So-called GRIN (GRadient INdex) lenses are unique in that their index of refraction is designed to vary continuously across the lens (see Figure 2.2). One implication is that lenses made from gradient material can be milled flat and still focus light. A consequence is potentially higher accuracy in lens fabrication, since gradient materials may be more precisely tuned than the traditional grinding process would allow. Typically, GRIN lenses have an index of refraction that varies radially from a central axis, although this need not be the case. In reality, GRIN technology is not new. Every fax and copy machine contains an array of gradient lenses. The technology has also been used in fiber optic cable (see Figure 2.3). However, recent breakthroughs in fabrication technology have allowed GRIN lenses to be made to higher accuracy, at very large and very small sizes, and from a variety of materials. These advances have made GRIN lenses suitable for a range of applications, including solar power collection, digital communications (wave division multiplexing), medical instrumentation, and astronomical optics. In Chapter 4, we introduce methods for modeling and visualizing these lenses.

Figure 2.1: Diagram summarizing the three-dimensional printing technology (courtesy of MIT 3D Printing Lab - http://web.mit.edu/tdp/www/whatis3dp.html).

Figure 2.2: Examples of commercially available Gradium™ GRIN lenses (courtesy of LightPath Technologies - http://www.lightpath.com).

Figure 2.3: Light travels along a curved path in a medium with graded refractive index. This property is used in multi-mode optical fiber to reduce 'modal dispersion.'
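The curved ray paths of Figure 2.3 can be illustrated numerically. Assuming the parabolic index profile n(y) = n0(1 - a y²/2) common in GRIN fiber, the paraxial ray equation reduces to y'' = -a y, so an off-axis ray oscillates sinusoidally about the axis. The integrator and all parameter values below are our illustrative assumptions, not content from this dissertation.

```python
import math

def trace_paraxial(y0, slope0, a, z_end, dz=1e-4):
    """Integrate the paraxial ray equation y'' = -a*y along the axis z
    (semi-implicit Euler, which stays stable for this oscillatory path)."""
    y, yp, z = y0, slope0, 0.0
    while z < z_end:
        y += yp * dz
        yp += -a * y * dz
        z += dz
    return y

# Gradient constant of the assumed profile n(y) = n0*(1 - 0.5*a*y^2).
a = 1.0
# An off-axis ray oscillates about the axis with period 2*pi/sqrt(a);
# after half a period it has crossed to the mirror position.
half_period = math.pi / math.sqrt(a)
print(round(trace_paraxial(y0=0.1, slope0=0.0, a=a, z_end=half_period), 3))
```

This periodic refocusing is exactly what lets a graded-index fiber confine light, and what lets a flat GRIN element act as a lens.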
(Figure 2.3 image courtesy of Marc Achermann, Lucerne University of Applied Sciences and Arts, 6048 Horw, Switzerland.)

2.1.3 Medical Visualization

In the area of medical diagnostics, volumetric data are commonly acquired through magnetic resonance imaging (MRI) or computed tomography (CT) scans. We briefly summarize how these machines work to impart some understanding of the quantities they record. An MRI machine (Figure 2.4) is essentially a machine capable of generating a strong magnetic field. When exposed to this field, the hydrogen atoms of the body tend to fall either into alignment or anti-alignment with the direction of the field. The machine generates an image by casting radio waves at a hydrogen-specific frequency toward the center of the measurement apparatus. This causes the atoms to precess (rotate 90 degrees) and generate a corresponding signal which can be recorded by the machine. Different tissues of the body contain different densities of hydrogen atoms, and the strength of the signal emitted is proportionate to this density. Thus, tissues can be differentiated in the resulting 3D image. A CT machine is a specialized form of X-ray machine. X-ray machines work by casting X-rays through the object of interest. Since different kinds of tissue absorb X-rays to different degrees, X-rays can be used to expose structures within the body. The CT machine has an X-ray device which rotates about the object and produces slices of data, which can be used to reconstruct a 3D image. The output of both machines is three-dimensional volumetric data which are indicative of localized tissue type (see Figure 2.5). In Chapter 4, we extend traditional methods for scientific data fitting and visualization to volumetric NURBS.

Figure 2.4: An MRI machine.

2.1.4 Realistic Rendering

The area of computer graphics has long been dominated by triangles. However, there are many reasons why it is a good fit for the application of spline-based techniques. First, because
many of the models in production scenes are generated in CAD software, the models are typically spline (or subdivision surface) based. As with finite element analysis, the initial step in the rendering process is often to convert smooth models into a triangular mesh. The reasoning has been that triangles are simpler to render, and because all smooth representations admit a tessellation [8], triangles have become the common currency across rendering systems. The shortcomings of this approach precisely parallel those mentioned for isogeometric analysis. The models must frequently be tweaked in response to lighting conditions as well as the aesthetic opinions of the director (or engineer, if we consider automotive design). This requires access to the underlying models. A solution employed by Pixar (in their Reyes architecture [9]) has been to delay tessellation until the final rendering step, tessellating to the granularity of a pixel and beyond in order to avoid visible seams, jaggies, and other artifacts of tessellated models. This approach is not generally applicable to interactive rendering, as each rendered frame will entail many millions of polygons, composited in different layers, etc. Microtriangles also are not a programmatic simplification, as the rendering program must still deal with smooth surface representations and dice them into microfacets for the rendering step. If tessellation is applied earlier, one instead deals with the potential for visible seams - which can be magnified by shadows, specular highlights, and the presence of optical elements - as well as an explosion of data (and the resulting memory utilization) which often accompanies the approximation of curved surfaces using piecewise planar elements.

Figure 2.5: Visualization of CT data from the National Library of Medicine's Visible Human Project® (image courtesy of Steven Parker, University of Utah).
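The data explosion from approximating curved surfaces with planar elements can be quantified with a quick back-of-the-envelope calculation. For a circular cross-section of radius R approximated by chords, the chordal deviation (sagitta) of a chord with half-angle α is R(1 - cos α), so the number of segments needed grows roughly like 1/sqrt(tolerance). The helper below is our hypothetical illustration, not code from this dissertation.

```python
import math

def segments_needed(radius, tol):
    """Chords per full circle so that the chordal deviation
    (sagitta) of each chord stays at or below tol."""
    half_angle = math.acos(1.0 - tol / radius)
    return math.ceil(math.pi / half_angle)

# Tightening the tolerance by 100x roughly multiplies the count by 10.
for tol in (1e-2, 1e-3, 1e-4, 1e-5):
    print(tol, segments_needed(1.0, tol))
```

The same scaling applies per parametric direction of a surface patch, so element counts for tessellated curved surfaces grow quickly as tolerances tighten.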
Finally, we note that in recent years, ray tracing has become a legitimate alternative to raster-based rendering. Particularly as the size of models has increased, and it has done so exponentially [10], ray tracing has become attractive because its complexity scales with the number of pixels, not with the number of elements. Because of the efficiency with which the ray tracing paradigm solves the visibility problem, the expense of evaluating primitives such as NURBS has become less of a concern. In Chapter 3, we develop a system for directly rendering NURBS surfaces which is optimized, general, and supports implementation within a parallel rendering system. We have successfully integrated our work into one such system, with promising results [1].

The global illumination problem (which we introduce in more detail in Chapter 7) aims to achieve photorealistic rendering of a scene by simulating how light interacts with, and is retransmitted by, interfaces in the scene. The radiance function, which captures this notion, is a function on manifolds: in its most general form, it is a function of (volumetric) position, wavelength, viewing angle, and time, a seven-dimensional function. However, in the absence of intervening media (fog, atmospheric dispersion, etc.), it is frequently framed as a surface-based function. The solution to the steady-state global illumination problem is often phrased as a finite element problem [11-13]. As we have discussed previously, this problem has historically been dominated by triangular mesh elements. However, smooth representations have all the advantages given in the preceding paragraphs, including potentially fewer elements. The basis functions used to represent radiance have traditionally been orthogonal functions which simplify quadrature of the global illumination problem, among these, spherical harmonics and Haar wavelets. A downside of these basis functions is that they can introduce oscillations in their reconstruction.
One of the attractive aspects of spline approximation is that it can be constructed so as to avoid undue oscillation. Splines also possess integral properties which make them ideal candidates for quadrature. And intuitively, we observe that the radiance in most scenes appears to be smooth over large areas, and splines are nearly ideal for capturing such smooth functions. Obvious counterexamples include sharp shadows, specular reflections (caustics), and areas of high textural frequency. In Chapter 7, we generalize our attribute-based model from Chapter 4 to capture the view-dependent radiance function as a spline. We demonstrate how spline-based approximation techniques can be leveraged to simplify the integration of the global illumination equation, in the process demonstrating quadrature techniques which are readily amenable to other classes of engineering analysis problems. We generalize the gathering/scattering approach of radiant transfer to solve the global illumination problem, demonstrating how splines can be applied to FEA problems outside the field of isogeometric analysis, and we posit that our solutions to the rendering equation take the same form. We have implemented our spline-based radiance function in a real-time rendering system, allowing the radiance function to be interactively computed for novel views via a simple texture evaluation (see Chapter 7).

2.2 Related Work

Much work has preceded ours. The organization of this section mirrors the organization of the rest of this document and details the work we consider most related or relevant to ours.

2.2.1 Ray Tracing NURBS Surfaces/Volumes

Several of the techniques we shall present in this dissertation entail ray tracing a trivariate NURBS volume. Such a technique requires ray intersection with the solid boundary and subsequent traversal of its interior.
Since the parametric faces of a trivariate volume are themselves NURBS surfaces, we shall therefore require the capability to ray trace NURBS surfaces. Techniques for ray tracing NURBS can be divided into two broad classes - those that tessellate (e.g., [14-18]) and those that work directly with the underlying representation (e.g., [19-27]). The former are far more prevalent in commercial software systems. The techniques we propose are of the latter sort, for reasons we shall discuss in more detail later. We shall refer to these techniques as direct methods. The seminal article on ray tracing parametric surfaces is that of Kajiya [22]. His method uses the theory of resultants to reduce the problem of ray tracing patches to that of finding the simultaneous roots of two parametric curves. Toth [26] introduces an intersection technique based on Newton-Raphson root finding. His algorithm subdivides the surface into intervals such that the Newton iteration is guaranteed to converge to a root, if one exists, from any start value in that interval. This guarantee is made, however, at the price of linear convergence, as his analysis only holds for a Newton iteration with a fixed Jacobian (so-called linear Newton). These two articles by Kajiya and Toth contain the major ideas upon which most subsequent NURBS ray tracing papers have built. The basic operation of a NURBS intersection routine can be summarized as follows:

• As a preprocess, refine the surface to facilitate ray culling using a hierarchy of bounding volumes. For Newton-type iterative schemes, this step will also determine good start values to ensure swift convergence.

• At run time, use the bounding volume hierarchy (BVH) to limit the sections of the surface under consideration, and apply the more costly patch intersection routines only if a ray successfully traverses to a leaf.

We summarize briefly the contributions of several other authors.
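The per-patch root-finding step shared by these Newton-type methods can be sketched concretely. Following Kajiya's observation that a ray is the intersection of two planes, ray-surface intersection reduces to a 2x2 Newton iteration on the planes' signed distances to the surface point. The sketch below is illustrative only - a flat bilinear patch stands in for a full NURBS evaluator, and the plane construction and convergence criteria are our own simple choices, not the specific routine of any cited system:

```python
def dot(a, b): return sum(x*y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def unit(a):
    m = sum(x*x for x in a) ** 0.5
    return tuple(x/m for x in a)

def ray_planes(o, d):
    """Kajiya-style: represent the ray o + t*d as the intersection of two
    planes (n, k) that both contain it, so n.x + k = 0 along the ray."""
    d = unit(d)
    a = (1.0, 0.0, 0.0) if abs(d[0]) < 0.9 else (0.0, 1.0, 0.0)
    n1 = unit(cross(d, a))
    n2 = cross(d, n1)                       # already unit length
    return (n1, -dot(n1, o)), (n2, -dot(n2, o))

def newton_intersect(S, Su, Sv, o, d, u, v, tol=1e-12, iters=25):
    """2x2 Newton on F(u,v) = (n1.S(u,v)+k1, n2.S(u,v)+k2) = 0.
    S is the surface evaluator; Su, Sv its partial derivatives."""
    (n1, k1), (n2, k2) = ray_planes(o, d)
    for _ in range(iters):
        p = S(u, v)
        f1, f2 = dot(n1, p) + k1, dot(n2, p) + k2
        if abs(f1) + abs(f2) < tol:
            return u, v
        pu, pv = Su(u, v), Sv(u, v)
        a11, a12 = dot(n1, pu), dot(n1, pv)   # Jacobian entries
        a21, a22 = dot(n2, pu), dot(n2, pv)
        det = a11*a22 - a12*a21
        u -= ( a22*f1 - a12*f2) / det         # Cramer's rule update
        v -= (-a21*f1 + a11*f2) / det
    return None                                # diverged or too slow
```

With exact surface partials the iteration converges quadratically near a root, which is why the papers above invest so much effort in producing good start values and in detecting divergence early.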
Fournier and Buchanan [21] introduce Chebyshev polynomials and demonstrate that tight bounding volumes can be obtained directly from the coefficients of a Chebyshev bilinear patch. Their approach subdivides a surface patch until the subpatches can be well approximated by a bilinear representation. The bounding volumes are generated bottom-up using the coefficients of the Chebyshev representation, and intersections with the bilinear patches can be found exactly via a quadratic equation. Another novel approach, introduced by Nishita et al. [24], uses a technique called Bezier clipping to iteratively remove regions of the patch which do not intersect the ray. Their treatment also handles trimming curves using a technique of point classification, but requires that patches be represented in the Bezier-Bernstein basis. There are a number of Newton-based techniques, which differ mainly in the way they form the bounding volume hierarchy and the manner in which initial values are chosen for the iteration. Barth and Stürzlinger [19] subdivide the NURBS surface until each subpatch can be approximated by a parallelogram. The hierarchy of patches is bounded by parallelepipeds. If a ray is not eliminated by intersections with the bounding volume hierarchy, then at the leaf, it is intersected with the approximating parallelogram. This intersection yields a start value for the Newton iteration on the leaf surface patch. Sweeney and Bartels [25] suggest refining the NURBS surface until the screen projection of each mesh facet is less than a few hundred pixels, and until the knot they associate with each control point constitutes a good starting value for the Newton iteration. The leaves of their bounding volume hierarchy are the axis-aligned rectangles which contain a particular refined control vertex and its 4-connected neighbors. If a ray succeeds in reaching a leaf node, a Newton iteration is started using the knot value associated with the vertex.
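The parallelogram-based seeding used in such leaf tests can be illustrated with a standard Moller-Trumbore-style solve: writing the parallelogram as c + u*e1 + v*e2, the (u, v) coordinates of the ray hit become the start value for the Newton iteration on the true patch. This is a generic sketch, not the exact formulation of any of the cited papers:

```python
def dot(a, b): return sum(x*y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def sub(a, b): return tuple(x - y for x, y in zip(a, b))

def ray_parallelogram(o, d, c, e1, e2, eps=1e-12):
    """Solve o + t*d = c + u*e1 + v*e2 for (t, u, v).
    Returns (t, u, v) with u, v in [0, 1] on a hit, else None."""
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:                 # ray parallel to the parallelogram
        return None
    s = sub(o, c)
    u = dot(s, p) / det
    q = cross(s, e1)
    v = dot(d, q) / det
    t = dot(e2, q) / det
    if 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0 and t > eps:
        return t, u, v                 # (u, v) seeds the Newton iteration
    return None
```

The test differs from the familiar triangle case only in the containment condition (u <= 1 and v <= 1 rather than u + v <= 1), which is why parallelogram leaves are so cheap to intersect.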
Yang [27] uses a modified spatial subdivision scheme to generate a BVH. First, points are generated on the surface using a user-specified parametric step size. An axis-aligned bounding box is produced for these points. Next, the bounding box is subdivided into 16 sub-boxes. Sub-boxes containing none of the surface points are discarded. If the limit depth has not been reached, the procedure is repeated for each remaining sub-box. If the limit depth has been reached, the parameter values for the points in the box are averaged to produce the start value for the Newton iteration. Lischinski and Gonczarowski [23] introduce a number of efficiency improvements to the NURBS ray tracing technique. Among these are the development of a hybrid of BVH and spatial subdivision schemes, search tree caching for Toth's method, improved screen sampling order based on the Peano curve, and a treatment of secondary rays which avoids trivial intersections at the ray origin.

2.2.2 Volumetric Representations and Techniques

We have noted that there are a variety of applications for which generating, analyzing, and visualizing volumetric quantities is key. This conclusion motivates the development of volumetric models which encapsulate the behavior of such systems. A number of representation techniques have been developed for volumetric data. The most commonly encountered of these is the voxel representation, which has become standard among the medical visualization community. One reason for this is that scanned data are typically captured in regularly spaced slices, each slice composed of a rectangular grid of values. The samples provided by the imaging machine must be reconstructed in order to generate a continuous volumetric function. One commonly used filter is a trilinear interpolant. Among the scientific community, finite elements are a common tool for analysis.
Finite elements (e.g., tetrahedra) provide simplified connectivity information, allowing attributes, such as stress or temperature, to propagate across the volume. They are well suited to iterative schemes, and are commonly used in the solution of ordinary and partial differential equations. Both voxels and finite element meshes can be seen as spatial subdivision structures, which differ in regularity and shape. Within the modeling community, volumes have traditionally been represented by their boundaries. Such representations include Bezier surfaces, NURBS, Steiner patches, Coons patches, and Hermite patches, among others. This has proven sufficient for models with homogeneous interiors. However, the desire to apply proven modeling techniques to novel technologies, such as functionally graded materials and GRIN lenses, has led to the development of higher dimensional parametric formulations. It is these representations which are most closely related to the present work, and we now discuss them in greater detail. Farouki and Hinds [28] present a nice overview of parametric modeling techniques, including the generalization of bivariate NURBS surfaces to trivariate NURBS volumes. In the same year, Lasser [29] explored the Bernstein-Bezier volume representation, and extended techniques for evaluation and interpolation to them. Traditional modeling operations, such as ruling, extrusion, and revolution, were extended to trivariate NURBS by Paik in [30]. She also allowed volumes to represent surfaces which evolve through time by designating one coordinate as a temporal axis. A final contribution of that thesis was a physical simulation which incorporated a system of springs at the vertices. Madrigal and Joy [31] develop an algorithm for determining the boundary of a trivariate solid that is swept through space.
The method employs the results of Joy and Duchaineau [32] for determining the boundary of a trivariate solid, and then performs a sweep of that boundary to determine the resulting sweep envelope. As an alternative to parametric modeling, Wang and Kaufman [33] present a technique for volumetric modeling which is analogous to sculpting. A voxel representation is employed, and material can be removed or have its attributes (e.g., color) modified using a variety of tools. Historically, a number of papers have developed volumetric representations which incorporate attribute data and geometry. In a visionary 1977 paper, Stanton et al. [34] describe a batch system which uses a tricubic Hermite volume for stress analysis. Their results lead them to conclude that parametric volumes are "an important new analytical tool for solids of composite material." In a followup article, Casale and Stanton [35] develop a system for modeling volumes with carried attribute data. Their representation is tricubic, augments the geometry (vertices) with "data entries," and supports finite element analysis. Yen [36] generalizes this result with a scheme for performing finite element analyses on trivariate NURBS volumes. In his method, the volume is divided into regions, and attributes can be specified for each region. Subsequent meshing is performed to achieve a given simulation error tolerance. Yen further introduces a Boolean sum operator to generate volumes from boundary surfaces, in a spirit similar to the Coons patch. Dickinson et al. [37] use a B-spline volume to represent scalar and vector fields. Again providing a worthy foil, Kaufman [38] has combined modeled objects and measured data within a voxel-based volume visualization system, along with an algorithm for scan converting tricubic Beziers.
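The attribute-augmented volumes described above all rest on the same tensor-product machinery: control coefficients carry attribute channels alongside geometry, and evaluation treats all channels uniformly. A minimal illustration is a degree-(1,1,1) Bezier (trilinear) volume evaluated by de Casteljau, where each coefficient couples a position with a hypothetical temperature channel; this is a toy sketch of the idea, not the representation of any cited system:

```python
def lerp(a, b, t):
    """Linear interpolation of coefficient tuples, channel by channel."""
    return tuple(x + t * (y - x) for x, y in zip(a, b))

def eval_trivariate(coeffs, u, v, w):
    """Evaluate a degree-(1,1,1) Bezier volume by de Casteljau along each
    parameter in turn. coeffs[i][j][k] is a tuple coupling geometry with
    attributes, e.g. (x, y, z, temperature)."""
    # collapse the w direction, then v, then u
    cw = [[lerp(coeffs[i][j][0], coeffs[i][j][1], w) for j in range(2)]
          for i in range(2)]
    cv = [lerp(cw[i][0], cw[i][1], v) for i in range(2)]
    return lerp(cv[0], cv[1], u)
```

Because the attribute channels ride along with the geometry through every convex combination, the same evaluator yields position and material state at once, which is exactly the coupling these representations exploit.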
More recently, there has been a resurgence of interest in volumetric representations of heterogeneous materials, particularly as the applications of medical imaging and FGM manufacture become more common. Bonnell et al. [39] give a technique for determining discrete material interfaces for objects represented in a voxel formulation, provided that for each voxel, the proportion of volume filled by each material relative to total voxel volume is known. The result of their algorithm is a triangulated approximation to the interfaces between materials. Raviv and Elber [40,41] employ trivariate B-splines for representing scalar fields and provide efficient algorithms for multiresolution sculpting using their coefficients. Park and Lee [42] use a rational NURBS volume to model fluid flow data. They present a data structure which couples attributes (such as flow density and flow velocity) to geometry and develop methods for nodal interpolation to the data. Kumar and Dutta [43] develop an abstract mathematical representation by extending r-sets to multimaterial objects. The r-set is an established mathematical representation of solid models [44]; however, r-sets are limited to representing geometry. Their article extends the r-set to a new representation, called the rm-set, which contains both geometric and material information. The approach entails generalizing the traditional CSG operations to the new representation. The article also introduces a software framework for representing these models. The models presented in this work are limited, however, to objects with discrete boundaries and a single material per domain. The authors correct this limitation in a followup article [45] which generalizes rm-sets to handle functionally graded materials (FGMs). Again, the CSG operations are extended to the new rm-set representation. However, now care must be taken to ensure that material fractions remain normalized.
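The normalization constraint is simple to enforce in practice. As a toy illustration (our own sketch, not the rm-set operators themselves), a blended union of two composition vectors can be renormalized so the result still sums to one:

```python
def union_fractions(fa, fb, alpha):
    """Blend the material-fraction vectors of two overlapping graded
    primitives and renormalize so the result sums to one. alpha weights
    the first primitive; both inputs are assumed nonnegative."""
    mixed = [alpha * a + (1.0 - alpha) * b for a, b in zip(fa, fb)]
    total = sum(mixed)
    if total == 0.0:
        raise ValueError("empty material composition")
    return [m / total for m in mixed]
```

A convex blend of already normalized vectors sums to one automatically; the explicit renormalization guards against accumulated floating-point drift and against operators that are not convex combinations.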
A software framework is introduced and a simple FGM object is generated using the aforementioned representation. Application to layered-manufacturing techniques is discussed. Kumar et al. [46] provide a good summary of the various mathematical representations for models found in the literature. They then proceed to generalize the notion of an rm-set to that of a fiber bundle. Each material is a manifold in this representation. The CSG operations are extended to this "trivial" fiber bundle and an augmented software framework is introduced. Marsan and Dutta [47] apply the preceding theory to the class of trivariate parametric functions. The traditional trivariate NURBS formulation is extended to represent material composition. This is done in one of two ways. The material can be represented as an explicit function over rectilinear coordinates (x, y, z), which also serve as the parametric domain. This is similar to the voxel formulation traditionally applied in the visualization literature. On the other hand, the coefficients of the trivariate volume can be extended to contain material information, resulting in a true parametric volume formulation, with attributes coupled to geometry. Methods of evaluation and data fitting, as well as applications to layered manufacturing, are discussed. There have been several articles from M.I.T. supporting the use of heterogeneous materials in manufacture, with particular application to their 3DP technology. Liu et al. [48,49] present a finite element-based system for designing FGM solids and develop an efficient distance transform so that composition can be automatically specified as a function of boundary proximity. They also produce a method to efficiently evaluate material composition at a point. Jackson [50] provides an extensive survey of volumetric techniques as applied to the problem of FGM modeling and local composition control.
He provides analysis of the techniques in terms of memory utilization, and finds that cellular techniques afford the greatest efficiency. Cellular techniques decompose the geometry into cells, over which material properties can be defined. Among the region types he introduces is a trivariate Bezier patch, with attribute data decoupled from geometric data, in the same spirit as the formulation we propose here.

2.2.3 Volume Visualization

In this dissertation, we will be describing a data structure which associates geometry and attributes in a single model. Such representations are by their nature high-dimensional, and it is typically difficult for designers and analysts to reason about this space. Therefore, techniques for encoding information in visualizations will play a key role in making these models usable and intuitive. A number of researchers have developed approaches to this problem, and we will leverage many of their ideas in our present work. Perhaps the best known volume visualization algorithm is the so-called Marching Cubes technique for constructing isosurfaces, introduced by Lorensen and Cline [51]. For a given voxel and a particular isovalue, each vertex of the voxel is labeled with a 0 or 1 according to whether its attribute value is above or below the value sought. There is only a small set of ways a given isosurface can pass through a voxel, and the vertex labeling is used to look up the tessellation for each corresponding configuration. The algorithm then marches forward to the next cell. Isosurface rendering is one approach to extracting structure in volumetric datasets. Another technique is assigning colors and opacities to materials, and pulling out structural detail using traditional graphics shading models (see Figure 2.5). Such approaches may be classified under the moniker "direct volume rendering." Drebin et al. [52] introduce a technique for volume rendering which operates on data decomposed into material percentage volumes.
Each material percentage volume tracks some material of interest, e.g., skin or bone, its data values indicating the proportion of the material present in a given voxel. From this representation, the user associates a density with each material, and gradients can be calculated based on how quickly materials transition. Likewise, colors and opacities are associated with materials, and this together with gradient information is used to shade the volumetric data. Shirley and Tuchman [53] provide a clever method for direct scalar volume rendering of datasets represented in a tetrahedral format. First, tetrahedra are classified according to their face orientation relative to the viewer. The tetrahedra are projected onto the viewing plane, and broken into a set of triangles, based on their classification. Ray integration is calculated at the thickest point of ray-tetrahedron traversal, and linear interpolation is used to generate the brightness and opacity across each triangle. The technique produced renderings of high quality with an order of magnitude increase in performance over ray traced techniques. Nonetheless, given its simplicity, ray tracing is an attractive option for direct volume rendering. Levoy [54] describes methods for efficiently ray tracing volumetric data. The two techniques focused on are hierarchical gridding of data to speed ray-grid traversal, and adaptive ray termination based on accumulated opacity. Max [55] provides an overview of optical methods for direct volume rendering, with particular attention to multiple scattering effects, such as are seen in clouds. There have been some techniques targeted specifically to rendering parametric volumes. Dickinson et al. [37] and Park and Lee [42] present methods for generating isosurfaces and streamlines for trivariate scalar and vector fields to effect feature segmentation.
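Adaptive ray termination of the kind Levoy describes drops out naturally from front-to-back compositing: once accumulated opacity nears one, the remaining samples cannot visibly change the pixel, and the ray can be abandoned. A minimal sketch, with scalar colors for brevity and an illustrative threshold:

```python
def composite_ray(samples, opacity_threshold=0.99):
    """Front-to-back compositing of (color, alpha) samples along one ray,
    with adaptive termination once accumulated opacity is high."""
    color, alpha = 0.0, 0.0
    used = 0                              # samples actually consumed
    for c, a in samples:
        color += (1.0 - alpha) * a * c    # remaining transparency weights
        alpha += (1.0 - alpha) * a        # the new sample's contribution
        used += 1
        if alpha >= opacity_threshold:    # later samples are occluded
            break
    return color, alpha, used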
Raviv and Elber [41] achieve interactive visualization of complex scalar fields by fixing the viewpoint and preintegrating the B-spline functions along a line of sight. The user can freely modify the scalar coefficients and see the results in real time. Chang et al. [56] develop a scanline technique for direct rendering of trivariate volumes using line-of-sight integration and compositing. Madi and Walton [57] allow visualization of multilayer objects using the concept of cuboids. Objects are discretized into layers of cube-like objects, allowing layers to be easily peeled away for display. Joy and Conkey [58] apply the results of [32] to visualize the envelope of a swept trivariate B-spline solid. The crux of the technique is to generate the trivariates formed from the parametric boundary of the surface and union them with the solid generated by the swept implicit boundary. Recently, Martin et al. [59] have employed a frustum-based approach to rendering the isosurfaces of trivariate volumes. Their technique leverages the subdivision and convex-hull properties of splines along with an optimized oriented bounding box hierarchy to expedite the isolation of isosurface-frustum intersections. A further advantage of their technique is that it trivially affords rendering the underlying geometry in the context of the isosurface.

2.2.4 Medial Axis Transforms

This dissertation will introduce a new modeling operator which is based on the concept of a medial axis and an associated generalized cylinder. The notion of a medial axis was formalized by Blum [60] and later reformulated using his famous "grass fire analogy" [61]. Given a closed curve in the plane, the medial axis of the curve can be found by setting a fire along the perimeter, and allowing the fire to propagate inward at a fixed rate. When two fire fronts collide at so-called "quench points," they are extinguished. The collection of all quench points is termed the medial axis. Other definitions are possible, as well.
The medial axis can be seen as the locus of the centers of all maximal circles which are inscribed in a closed curve, and touch it at two or more points. It can also be found contained in the Voronoi diagram of the curve boundary. The medial axis can be extended to higher dimensions by generalizing the grass fire analogy to a boundary surface, the maximal circle definition to spheres, and the 2D Voronoi diagram to 3D. An object can be completely reconstructed from the maximal circle definition provided that the radius of the maximal sphere is recorded for each point on the axis. See Figure 2.6 for an example of the medial axis transform applied to a planar figure. The medial axis in some sense forms the backbone of the object from which it is derived. This has made it popular for shape recognition in the vision community and as a handle for character manipulation in the animation community. The axis has also been applied in the field of robotics for collision avoidance. By following the medial axis, the robot is, roughly speaking, as far as possible from obstacles on the perimeter. There are a number of problems with the medial axis, however. It is notoriously slow to compute, numerically unstable, and very sensitive to noise. The slightest perturbation in a boundary will cause a spike whose magnitude is uncorrelated with the degree of perturbation. This has led Chuang [62-64] to develop an approximation to the medial axis which is less sensitive to noise, and easier to compute as well. Ahuja and Chuang [62] develop a potential-based model for approximating the medial axis of a 2D polygonal region. Their idea is to treat the boundary of the region as possessing a charge. The force acting on an oppositely charged point in the interior can be calculated by integration over the boundary. This force reaches a local minimum when the particle is approximately equidistant from two boundaries. Thus, valleys in the potential function correspond roughly to the medial axis.
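The potential-based construction can be sketched directly: sample the boundary, sum the repulsive contributions at an interior point, and look for local minima of the net force magnitude. The unit-square region and inverse-square force law below are our own illustrative choices, not those of the cited papers:

```python
def boundary_samples(n=200):
    """Sample points on the boundary of the unit square (a toy region)."""
    pts = []
    for i in range(n):
        t = i / n
        pts += [(t, 0.0), (1.0, t), (1.0 - t, 1.0), (0.0, 1.0 - t)]
    return pts

def force(p, boundary, power=2.0):
    """Magnitude of the net repulsive force on an interior point from a
    uniformly charged boundary, approximated by a sum over samples."""
    fx = fy = 0.0
    for bx, by in boundary:
        dx, dy = p[0] - bx, p[1] - by
        r = (dx * dx + dy * dy) ** 0.5
        w = 1.0 / r ** (power + 1)   # direction (dx,dy)/r times 1/r^power
        fx += w * dx
        fy += w * dy
    return (fx * fx + fy * fy) ** 0.5
```

For this symmetric region the force vanishes at the center, which lies on the medial axis, while points off the axis feel a net push; tracing the valleys of this field yields the potential-based skeleton.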
Chuang [63] generalizes the potential approach to 3D for the purpose of obstacle avoidance and later [64] applies the technique to skeletonization of three-dimensional polyhedral objects. In three dimensions, the potential-based skeleton does not converge to the medial axis. Chuang's potential-based skeleton remains a space curve, whereas the medial axis is in fact a series of joined surfaces, termed "medial surfaces." We shall discuss the exact form of the potential-based approach in later chapters.

Figure 2.6: Medial axis of a planar figure.

There are many other excellent skeletonization algorithms. In particular, we have considered two. Lee et al. [65] modify the erosion operator to thin a voxelized model while preserving component connectivity to yield a thin skeleton. Cohen-Or et al. [66] employ a sequence of repulsive forces and edge collapses to produce a skeleton from a model. A number of articles have further motivated our research. Yao et al. [67] present a simple method for computing the medial axis of a polygon, and Chin et al. [68] manage it in linear time. Wolter [69] discusses the role of the medial axis in shape representation, and explores the relationship between the homotopy type of a solid and the homotopy type of its medial axis. Amenta et al. [70-72] develop methods for surface reconstruction from unorganized data by calculating the approximate medial axis of the point cloud.

2.2.5 Surface/Volume Completion

Surface completion is the process of "filling in" a surface from its boundary representation. As such, it bears some similarity to the problem of hole-filling, for which several methods have been introduced (e.g., [73]). Often these techniques do not address the parameterization of the filled region. When the parameterization is addressed, it is often piecemeal, composed of a series of adjacent parametric patches. There are a number of classic works on completing a surface from a series of bounding curves [74-77].
This work is most closely related to the algorithm we will develop. However, in contrast to our approach, these techniques generally assume the boundary can be naturally decomposed into n-faces, which can in turn be blended together. For example, schemes have been developed to complete a surface from 3, 4, 5, and 6-sided areas. There are many commonly occurring curve examples that do not easily admit such a decomposition. Similarly, volume completion builds a volume representation from a surface boundary representation. While this is not a new problem, the goal of completing a NURBS boundary while preserving its parameterization is relatively new. It is a point of view that has become more popular with the advent of the field called isogeometric analysis [4], as discussed above. The work of Martin et al. [78,79] is most similar to ours, starting with a family of offsets projected inward towards a central midstructure. Their work diverges from ours on the question of how to deal with the difficult area near the axis. We have chosen to span this region with a separate NURBS volume, whereas they have opted for a hybrid representation, filling this region with tetrahedra. Our work has the disadvantage of possible parametric distortion as the volume elements map to a one-dimensional figure. Theirs must deal with communicating values amongst two representations, with the possible introduction of errors or loss in precision. Our volume completion operator amounts to a parametrization of the volume bounded by a boundary representation. A number of authors have dealt with the problem of parametrization, especially as it relates to surfaces. Two excellent overviews are provided by [80,81]. The generalized cylinder (GC) representation was a starting point for our work.
The definition of the medial axis as the locus of the centers of spheres that touch the boundary at two or more points suggests a method whereby the original boundary can be reconstructed by tracking a scaling function along the axis and centering a scaled sphere at each point. Our offset-based approach to a simple midstructure was born of this observation. The GC-based representation was originally proposed by [82] and was recently extended by Chuang et al. [83] to their potential-based approach. An equally evocative idea was given by Blum's original work [60], which defined the medial axis as the quench points of fires placed on a boundary.

2.2.6 Global Illumination

In the area of photorealistic rendering, one often speaks of radiance, the function that characterizes the light leaving the surface of an object in a particular direction. Capturing this radiance function facilitates solution of the global illumination equation. Most approaches to radiance computation and storage have simplified the problem by either decreasing the dimensionality of the radiance function or assuming a simplified surface representation. The original radiosity approach to the problem captured view-independent irradiance at discrete points over piecewise-planar patches [11]. Troutman and Max [84] and Zatz [85] extended the basic radiosity approach to higher order functions of surface position. Ward's work on illuminance caching [86] allowed for nonuniform sampling of the diffuse radiance function over arbitrary surfaces, and the work of Greger et al. [87] generalized the representation to volumes, allowing for scenes with dynamic elements. Walter et al. [88] trace photons from light sources into the environment, and track the intersections of these packets to reconstruct the surface illumination function. Their method applies to polygonal models and does not render view-dependent shading effects (although it does capture view-dependent illumination effects).
Photon maps [89] further generalize the approaches of Ward and Walter et al. by caching illumination as discrete oriented packets. At render time, the global illumination integral can be approximated at a surface point by considering the closest k packets and performing quadrature. In this way, arbitrary surfaces and a 4D radiance function are supported. Difficulties include ensuring adequate sampling (thereby avoiding blurring) and determining the k significant neighbors. A number of techniques for caching angularly varying radiance have been introduced. Walter et al. [90] employ Phong lobes as a basis for capturing nondiffuse lighting effects. The radiance function of each surface in the scene is approximated using virtual lights and Phong lobes. Hardware support for interpolated shading enables interactive walkthroughs of these scenes. However, the technique requires the results of a prior global illumination solution, has limited flexibility in the basis functions (e.g., fixed, per-surface Phong exponents), and is restricted to polygons (GL shader). Another approach is to assume that the rendering equation can be factored into a sum of products of lower dimensional functions [91]. McCool et al. [92] apply factorization to the BRDF, allowing direct use of texture mapping hardware in conjunction with point source-based lighting. Latta and Kolb [93] apply the factorization to the entire GI integral. Some limitations of their approach are the assumption of isotropic materials and a distant environment (environment map). Object-object interactions are not treated, shadowing is due only to self-shadowing, and self-lighting is not accounted for. Alternatively, spherical harmonics (SH) are a common representation for capturing directional radiance. Sillion et al. [12] generalize the basic radiosity approach to cache radiance as spherical harmonics coefficients at the vertices of planar surfaces. Surface colors are then determined by interpolation at render time.
Stamminger et al. [94] extend this work to a multiresolution representation of the angular radiance. Cabral et al. [95] represent the surface reflectance (BRDF) in a 2D spherical harmonics representation by assuming isotropy and discretizing the remaining dimension. They assume an infinitely distant environment (environment map) which can be projected onto the SH basis, reducing the global illumination integral to a sum of coefficients. Sloan et al. [96] and Kautz et al. [97] take a similar approach, extending the model to include dynamic lighting, self-illumination, and arbitrary BRDFs. The environment is generally assumed to be distant, except for distinguished points in a local neighborhood of the object. Ramamoorthi and Hanrahan [98] have demonstrated that for diffuse materials and distant environments, 9 SH coefficients are sufficient to encode incident illumination. Similar approaches for capturing surface radiance have been applied in the field of image-based rendering. Wood et al. [99] associate a piecewise linear 2D directional radiance map with each texel on the surface, allowing for real-time playback of photographic imagery. An idea related to the one we present was introduced by Malzbender et al. in their paper "Polynomial Texture Maps" [100]. Each texel stores 6 coefficients of a biquadratic function that approximates surface luminance variation with respect to lighting direction. The model assumes directional lighting and a fixed viewing direction to reduce the dimensionality of the radiance function. Several authors have explored hierarchical approaches to radiant transfer. Hanrahan et al. [101] develop a technique for patch-based energy exchange at multiple scales, thereby decreasing the number of interactions required in a radiosity solution. Their basic technique is limited to diffuse polygonal environments.
Aupperle and Hanrahan [102] generalize the hierarchical technique to account for glossy reflections, assuming polygonal environments and that illumination is constant over a patch. The work of Gortler et al. [103] generalizes the Galerkin and hierarchical methods by introducing the multiscale wavelet basis to radiosity. Their method treats diffuse scenes, does not enforce continuity across patch boundaries (which can result in visible seams), and is implemented solely for polygonal environments (although they hint at an implementation for parametric patches). Yu and Peng [104] apply spline wavelets as a basis for representing irradiance at multiple scales over curved surfaces. Christensen et al. [105] demonstrate how wavelets and importance-driven refinement can be used to accelerate the radiosity solution for glossy environments. In contrast to our work, their approach is view dependent, requires a final gather, applies numerical integration in addition to approximation of the GI integrand, and is limited to convex patches. Bala et al. [106] introduce a bounded-error quadtree-based method for caching directional radiance on surfaces. The radiance function is generated using lazy evaluation, and whenever possible, colors are interpolated from previously cached values. It is an impressive system, but necessarily makes some concessions for the sake of efficiency and simplicity. In particular, it uses convex primitives, linear interpolation of radiance, and does not support general BRDFs, diffuse interreflections, or area light sources. The approach we describe is most similar in spirit to that of Redner et al. [107]. Their paper develops the theory of B-spline density estimators and applies the resulting formulation to represent illumination functions across smooth surfaces. Our work differs from theirs in significant ways. 
Our radiance representation is directionally varying, incorporates arbitrary luminaires, addresses trimming curves, and provides for a hierarchical representation of radiance. On a more philosophical note, their work has the flavor of a radiosity formulation, with explicit projection onto basis functions. We take an approximation-theoretic approach, and utilize the integration properties of B-splines to simplify the solution of the rendering equation.

CHAPTER 3

RAY TRACING TRIMMED NURBS SURFACES

3.1 Overview

A system is presented for ray tracing trimmed NURBS surfaces. While approaches to components are drawn largely from the existing literature, their combination within a single framework is novel. This chapter also distinguishes itself from prior work in that the details of an efficient implementation are fleshed out. Throughout, emphasis is placed on practical methods suitable for implementation in general ray tracing programs.

3.2 Introduction

The modeling community has embraced trimmed NURBS as a primitive of choice. The result has been a rapid proliferation in the number of models utilizing this representation. At the same time, ray tracing has become a popular method for generating computer graphics images of geometric models. Surprisingly, most ray tracing programs do not support the direct use of untessellated trimmed NURBS surfaces. The direct use of untessellated NURBS is desirable because tessellated models increase memory use, which can be detrimental to runtime efficiency on modern architectures. In addition, tessellating models can result in visual artifacts, particularly in models with transparent components. Although several methods of generating ray-NURBS intersections have appeared in the literature [19-21,23-27], widespread adoption into ray tracing programs has not occurred.
We believe this lack of acceptance stems both from the intrinsic algebraic complexity of these methods and from the lack of emphasis in the literature on clean and efficient implementation. We present a new algorithm for ray-NURBS intersection that addresses these issues. The algorithm modifies approaches already in the literature to attain efficiency and ease of implementation. Our approach is outlined in Figure 3.1. We create a set of boxes that bound the underlying surface over a given parametric range. The ray is tested for intersection with these boxes, and for a particular box that is hit, a parametric value within the box is used to initiate root-finding.

Figure 3.1: The basic method we use to find the intersection of a ray and a parametric object, shown in a 2D example. Left: the ray is tested against a series of axis-aligned bounding boxes. Right: for each box hit, an initial guess is generated in the parametric interval the box bounds. Root-finding is then applied iteratively until a convergence or a divergence criterion is met.

The key issues are determining which boxes to use, how to efficiently manage computing intersections with them, how to do the root-finding, and how to efficiently evaluate the geometry for a given parametric value. We use refinement to generate the bounding volume hierarchy, which results in a shallower tree depth than other subdivision-based methods. We also use an efficient refinement-based point evaluation method to speed root-finding. These choices turn out to be both reasonable to implement and efficient. In Section 3.3, we present the bulk of our method, in particular how to create a hierarchy of bounding boxes and how to perform root-finding within a single box to compute an intersection with an untrimmed NURBS surface. In Section 3.4, we describe how to extend the method to trimmed NURBS surfaces. Finally, in Section 3.5, we show some results from our implementation of the algorithm.
3.3 Ray Tracing NURBS

In ray tracing a surface, we pose the question "At what points does a ray intersect the surface?" We define a ray as having an origin o and a unit direction d: r(t) = o + t d. A nonuniform rational B-spline (NURBS) surface can be formulated as

S^w(u,v) = \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} P^w_{ij} B_{j,k_u}(u) B_{i,k_v}(v),

where the superscript w denotes that our formulation produces a point in rational four-space, which must be normalized by the homogeneous coordinate prior to display. The \{P^w_{ij}\} are the control points (w_{ij} x_{ij}, w_{ij} y_{ij}, w_{ij} z_{ij}, w_{ij}) of the M \times N mesh, having basis functions B_{j,k_u} and B_{i,k_v} of orders k_u and k_v defined over the knot vectors

T_u = \{u_j\}_{j=0}^{N-1+k_u}, \quad T_v = \{v_i\}_{i=0}^{M-1+k_v},

formulated as

B_{j,k_u}(u) = \begin{cases} 1 & \text{if } k_u = 1 \text{ and } u \in [u_j, u_{j+1}) \\ 0 & \text{if } k_u = 1 \text{ and } u \notin [u_j, u_{j+1}) \\ \frac{u - u_j}{u_{j+k_u-1} - u_j} B_{j,k_u-1}(u) + \frac{u_{j+k_u} - u}{u_{j+k_u} - u_{j+1}} B_{j+1,k_u-1}(u) & \text{otherwise,} \end{cases}

B_{i,k_v}(v) = \begin{cases} 1 & \text{if } k_v = 1 \text{ and } v \in [v_i, v_{i+1}) \\ 0 & \text{if } k_v = 1 \text{ and } v \notin [v_i, v_{i+1}) \\ \frac{v - v_i}{v_{i+k_v-1} - v_i} B_{i,k_v-1}(v) + \frac{v_{i+k_v} - v}{v_{i+k_v} - v_{i+1}} B_{i+1,k_v-1}(v) & \text{otherwise.} \end{cases}

(NB: For historical reasons, we refer to the "order" k_u or k_v of a surface in the u or v direction, respectively. Recently, it has become more commonplace to speak of the degree d_u or d_v. In this case, d_u = k_u - 1 and d_v = k_v - 1.)

Such a surface S is defined over the domain [u_{k_u-1}, u_N) \times [v_{k_v-1}, v_M). Each nonempty subinterval [u_j, u_{j+1}) \times [v_i, v_{i+1}) corresponds to a surface patch. In this discussion, we assume that the reader has a basic familiarity with B-splines. For further introduction, please refer to [5,108-111].

Following the development by Kajiya [22], we rewrite the ray r as the intersection of two planes (Figure 3.2),

\{p \mid P_1 \cdot (p, 1) = 0\} \quad \text{and} \quad \{p \mid P_2 \cdot (p, 1) = 0\},

where P_1 = (N_1, d_1) and P_2 = (N_2, d_2). The normal to the first plane is defined as

N_1 = \begin{cases} (d_y, -d_x, 0) & \text{if } |d_x| > |d_y| \text{ and } |d_x| > |d_z| \\ (0, d_z, -d_y) & \text{otherwise.} \end{cases}

Figure 3.2: A ray formulated as the intersection of two planes.
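The Cox-de Boor recurrence above maps directly to code. The following is a minimal illustrative sketch, not the dissertation's implementation: the function name is mine, and the naive recursion recomputes shared subproblems (a production evaluator would compute only the k nonzero basis functions in place).

```python
def bspline_basis(j, k, t, knots):
    """Cox-de Boor recurrence: value of the order-k basis function B_{j,k}(t)
    over the knot vector `knots` (0/0 terms are treated as zero)."""
    if k == 1:
        return 1.0 if knots[j] <= t < knots[j + 1] else 0.0
    total = 0.0
    left = knots[j + k - 1] - knots[j]            # support of the first term
    if left > 0.0:
        total += (t - knots[j]) / left * bspline_basis(j, k - 1, t, knots)
    right = knots[j + k] - knots[j + 1]           # support of the second term
    if right > 0.0:
        total += (knots[j + k] - t) / right * bspline_basis(j + 1, k - 1, t, knots)
    return total
```

For the order-3 (quadratic) Bezier knot vector [0, 0, 0, 1, 1, 1] this reproduces the Bernstein polynomials, e.g. B_{0,3}(0.5) = 0.25, and the basis functions sum to one over the domain.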
Thus, N_1 is always perpendicular to the ray direction d, as desired. N_2 is simply N_2 = N_1 \times d. Since both planes contain the origin o, it must be the case that P_1 \cdot (o, 1) = P_2 \cdot (o, 1) = 0. Thus,

d_1 = -N_1 \cdot o, \quad d_2 = -N_2 \cdot o.

An intersection point on the surface S must satisfy the conditions

P_1 \cdot (S(u, v), 1) = 0, \quad P_2 \cdot (S(u, v), 1) = 0.

The resulting implicit equations can be solved for u and v using numerical methods. Ray tracing a NURBS surface proceeds in a series of steps. As a preprocess, the control mesh is flattened using refinement. There are several reasons for this. For Newton's method to converge quadratically, our initial guess for the root (u^*, v^*) must be close. By refining the mesh, we can bound the various subpatches, and use the bounding volume hierarchy (BVH) both to cull the rays and to narrow the prospective parametric domain, thereby yielding a good initial root estimate. It is important to note that the refined mesh does not persist in memory; it is used to generate the BVH and then discarded. During the intersection process, if we reach a leaf of the BVH, we apply traditional numerical root finding to the implicit equations above. The result will determine either a single (u^*, v^*) value or that no root exists. In the sections that follow, we discuss the details of flattening, generating the BVH, root finding, evaluation, and partial refinement. Together, these are all that is needed for computing ray-NURBS intersections.

3.3.1 Flattening

For Newton's method to converge both swiftly and reliably, the initial guess must be suitably close to the actual root. We employ a flattening procedure - i.e., refining/subdividing the control mesh so that each span meets some flatness criterion - both to ensure that the initial guess is a good one, and for the purpose of generating a bounding volume hierarchy for ray culling.
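Before turning to flattening, the two-plane representation of the ray described in the previous section can be sketched directly. This is a hedged illustration; the helper name and tuple conventions are mine.

```python
def ray_planes(o, d):
    """Rewrite the ray r(t) = o + t*d as two planes P_i = (N_i, d_i) satisfying
    P_i . (p, 1) = 0 for every point p on the ray (d is assumed unit length)."""
    if abs(d[0]) > abs(d[1]) and abs(d[0]) > abs(d[2]):
        n1 = (d[1], -d[0], 0.0)          # perpendicular to d by construction
    else:
        n1 = (0.0, d[2], -d[1])
    # N2 = N1 x d is perpendicular to both N1 and the ray direction
    n2 = (n1[1] * d[2] - n1[2] * d[1],
          n1[2] * d[0] - n1[0] * d[2],
          n1[0] * d[1] - n1[1] * d[0])
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    # both planes must contain the ray origin, so d_i = -N_i . o
    return (n1, -dot(n1, o)), (n2, -dot(n2, o))
```

Any point o + t d then satisfies both plane equations, which is exactly what the root finder in Section 3.3.3 relies on.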
A wealth of research exists on polygonization of spline surfaces, e.g., [14-17], and for the most part, these approaches can be readily applied to the problem of spline flattening. Some differences merit discussion. First, in the case of finding the numerical ray-spline intersection, we are not so much interested in flatness as in guaranteeing that there are not multiple roots within a leaf node of the bounding volume hierarchy. We note that this guarantee cannot always be made, particularly for nodes which contain silhouettes with respect to the ray source. Fortunately, the convergence problems which these boundary cases entail can also be improved with the mesh flattening we prescribe. We would also like to avoid any local maxima and minima that would serve to delay or, worse yet, prevent the convergence of our scheme. The flatness testing utilized by tessellation routines can be used to prevent these situations. As ray tracing splines is at the outset a complicated task, we recommend the application of as simple a flattening procedure as possible. We have examined two flattening techniques in detail. The first of these is an adaptive subdivision scheme given by Peterson in [16]. As the Graphics Gems source code is publicly available, we will not discuss that method here, but instead refer the reader to the source. The second approach we have considered is curvature-based refinement of the knot vectors. The number of knots to add to a knot interval is based on a simple heuristic which we now present. Suppose we have a B-spline curve c(t). An oracle for determining the extent to which the span [t_i, t_{i+1}) should be refined is given by the product of its maximum curvature and its length over that span. Long curve segments should be divided in order to ensure that the initial guess for the numerical solver is reasonably close to the actual root. High-curvature regions should be split to avoid multiple roots.
As we are utilizing maximum curvature as a measure of flatness, our heuristic will be overly conservative for curves other than circles. The heuristic value for the number of knots to add to the current span is given by

n_1 = C_1 \cdot \max_{[t_i, t_{i+1})} \{\mathrm{curvature}(c(t))\} \cdot \mathrm{arclen}(c(t)) \big|_{[t_i, t_{i+1})}.    (3.1)

We also choose to bound the deviation of the curve from its linear approximation. This notion imbues our heuristic with scale dependence; thus, for example, large circles will be broken into more pieces than small circles. Erring again on the conservative side, suppose our curve span is a circle with radius r, which we are approximating with linear segments. A measure of the accuracy of the approximation can be phrased in terms of a chord height h, which gives the maximum deviation of the facets from the circle. Observing Figure 3.3, it can be seen that

h = r - d = r - r \cos\frac{\theta}{2} \approx r \left(1 - \left(1 - \frac{\theta^2}{8}\right)\right) = \frac{r \theta^2}{8}.

Figure 3.3: Illustration of the chord height tolerance heuristic.

The number of segments n_2 required to produce a curve within this tolerance is 2\pi/\theta. Thus, we have

n_2 = \frac{2\pi}{\theta} \approx \frac{2\pi}{\sqrt{8h/r}} \propto \sqrt{r}.

Combining the preceding oracles for curve behavior, our heuristic for the number of knots n to add to an interval will be n_1 \cdot n_2:

n = C \cdot \max_{[t_i, t_{i+1})} \{\mathrm{curvature}(c(t))\} \cdot \mathrm{arclen}(c(t))^{3/2} \big|_{[t_i, t_{i+1})},

where C allows the user to control the fineness. Since the maximum curvature and the arc length are in general hard to come by, we will estimate their values. The arc length of c over the interval is given by

\int_{t_i}^{t_{i+1}} |c'(t)| \, dt = \mathrm{avg}_{[t_i, t_{i+1})} \{|c'(t)|\} \cdot (t_{i+1} - t_i).

Curvature is defined as

\mathrm{curvature}(c(t)) = \frac{|c''(t) \times c'(t)|}{|c'(t)|^3} = \frac{|c''(t)| \, |c'(t)| \, |\sin\theta|}{|c'(t)|^3} = \frac{|c''(t)| \, |\sin\theta|}{|c'(t)|^2} \le \frac{|c''(t)|}{|c'(t)|^2}.

We make the simplification

\mathrm{curvature}(c(t)) \approx \frac{|c''(t)|}{|c'(t)|^2}.

In general, this estimate of the curvature will be overstated.
The error will be on the side of refining too finely rather than not finely enough, so it is an acceptable trade-off to get the speed of computing second derivatives instead of curvature. We are interested in the maximum curvature over the interval [t_i, t_{i+1}):

\max_{[t_i, t_{i+1})} \{\mathrm{curvature}(c(t))\} \approx \frac{\max_{[t_i, t_{i+1})} \{|c''(t)|\}}{\mathrm{avg}_{[t_i, t_{i+1})} \{|c'(t)|\}^2}.

If we assume the curve is polynomial, then the first derivative restricted to the interval [t_i, t_{i+1}) is given by

c'(t) = \sum_{j=i-k+2}^{i} \frac{(k-1)(P_j - P_{j-1})}{t_{j+k-1} - t_j} B_{j,k-1}(t),

where \{P_j\} are the control points of the curve. The derivative of a rational curve is considerably more complicated. While the polynomial formulation of the derivative is in general a poor approximation to the rational derivative, in the case of flattening we have obtained reasonable results when applying the former to rational curves. For the remainder of this section, we shall assume that rational control points have been projected into Euclidean 3-space. Since \sum_j B_{j,k-1}(t) = 1, we can approximate the average velocity by averaging the control points V_j of the derivative curve:

\mathrm{avg}_{[t_i, t_{i+1})} \{c'(t)\} \approx \frac{1}{k-1} \sum_{j=i-k+2}^{i} V_j, \quad \text{where } V_j = \frac{(k-1)(P_j - P_{j-1})}{t_{j+k-1} - t_j}.

The average speed is therefore

\mathrm{avg}_{[t_i, t_{i+1})} \{|c'(t)|\} \approx \frac{1}{k-1} \sum_{j=i-k+2}^{i} |V_j|.

The second derivative over [t_i, t_{i+1}) is given by

c''(t) = \sum_{j=i-k+3}^{i} \frac{(k-2)(V_j - V_{j-1})}{t_{j+k-2} - t_j} B_{j,k-2}(t) = \sum_{j=i-k+3}^{i} A_j B_{j,k-2}(t),

where A_j = (k-2)(V_j - V_{j-1}) / (t_{j+k-2} - t_j). Using the convex hull property again, the maximum magnitude of the second derivative is approximated by the maximum magnitude of the second-derivative curve control points A_j:

\max_{[t_i, t_{i+1})} \{|c''(t)|\} \approx \max_{i-k+3 \le j \le i} \{|A_j|\}.

Our heuristic is finally:

n = C \cdot \frac{\max_{[t_i, t_{i+1})} \{|c''(t)|\} \cdot \left[\mathrm{avg}_{[t_i, t_{i+1})} \{|c'(t)|\} \cdot (t_{i+1} - t_i)\right]^{3/2}}{\mathrm{avg}_{[t_i, t_{i+1})} \{|c'(t)|\}^2}
  = C \cdot \frac{\max_{[t_i, t_{i+1})} \{|c''(t)|\} \cdot (t_{i+1} - t_i)^{3/2}}{\mathrm{avg}_{[t_i, t_{i+1})} \{|c'(t)|\}^{1/2}}
  = C \cdot \frac{\max_{i-k+3 \le j \le i} \{|A_j|\} \cdot (t_{i+1} - t_i)^{3/2}}{\left[\frac{1}{k-1} \sum_{j=i-k+2}^{i} |V_j|\right]^{1/2}}.
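The control-point estimates above can be sketched numerically. The following is an illustrative implementation, not the dissertation's code: the function name, the rounding, and the default constant C are my own choices.

```python
import math

def knots_to_add(ctrl, knots, k, i, C=1.0):
    """Heuristic knot count for the span [t_i, t_{i+1}) of an order-k polynomial
    B-spline curve, using the control-point estimates of average speed (from the
    V_j) and maximum second derivative (from the A_j)."""
    dt = knots[i + 1] - knots[i]
    if dt <= 0.0:
        return 0
    dim = len(ctrl[0])
    # first-derivative control points V_j, j = i-k+2 .. i
    V = {}
    for j in range(i - k + 2, i + 1):
        span = knots[j + k - 1] - knots[j]
        s = (k - 1) / span if span > 0.0 else 0.0
        V[j] = tuple(s * (ctrl[j][c] - ctrl[j - 1][c]) for c in range(dim))
    avg_speed = sum(math.hypot(*v) for v in V.values()) / (k - 1)
    if avg_speed == 0.0:
        return 0
    # second-derivative control points A_j, j = i-k+3 .. i
    max_acc = 0.0
    for j in range(i - k + 3, i + 1):
        span = knots[j + k - 2] - knots[j]
        s = (k - 2) / span if span > 0.0 else 0.0
        A = tuple(s * (V[j][c] - V[j - 1][c]) for c in range(dim))
        max_acc = max(max_acc, math.hypot(*A))
    return max(0, round(C * max_acc * dt ** 1.5 / math.sqrt(avg_speed)))
```

A straight span (collinear control points) yields zero added knots, while a curved quadratic span yields a positive count that grows with C, matching the intent of the heuristic.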
For each row of the mesh, we apply the above heuristic to calculate how many knots need to be added to each u knot interval, the final number being the maximum across all rows. This process is repeated for each column in order to refine the v knot vector. The inserted knots are spaced uniformly within the existing knot intervals. As a final step in the flattening routine, we transform all of the knot intervals in the refined knot vectors t_u and t_v into intervals with "open" end conditions. By this, we mean that we give multiplicity k_u - 1 to each internal knot of t_u and multiplicity k_u to each end knot. Similarly, we give multiplicities of k_v - 1 and k_v to the internal and external knots, respectively, of t_v. The results are Bezier surface patches that correspond to each nonempty interval [u_l, u_{l+1}) \times [v_j, v_{j+1}) of t_u \times t_v, each of which can be bounded using the convex hull of the corresponding refined surface mesh points. This becomes critical in the next section. The refined knot vectors determine the refinement matrix used to transform the existing mesh into the refined mesh. There are many techniques for generating this "alpha matrix." As it is not critical that this be fast, we refer the reader to several sources [109,112-114]. Both adaptive subdivision and curvature-based refinement should yield acceptable results. Both allow the user to adjust the resulting flatness via a simple, intuitive parameter. We have preferred the latter mainly because it produces its result in a single pass, without the creation of unnecessary intermediate points. Adaptive subdivision does have the advantage of inserting one knot value at a time, so one does not necessarily need to implement the full machinery of the Oslo algorithm [113]. Instead, one can opt for a simpler approach, such as that of Boehm [112]. It is not clear which method produces better meshes in general.
On the one hand, adaptive subdivision computes intermediate results which it then inspects to determine where additional subdivision is required. On the other hand, our method utilizes refinement (we only subdivide in the last step), and this converges more swiftly to the underlying surface than does subdivision. Neither technique is entirely satisfactory. Each considers the various parametric directions independently, while subdivision and refinement clearly impact both directions. The curvature-based refinement method refines a knot interval without considering the impact of that refinement on neighboring intervals. This can lead to unnecessary refinement. Neither makes any attempt to find optimal placement of inserted knots. The adaptive subdivision and curvature-based refinement methods are the products of the inevitable compromise between refinement speed and quality. Both satisfy the efficiency and accuracy demands of the problem at hand. A point which we do not wish to sweep under the carpet is that the selection of the flatness parameter is empirical and left to the user. As this parameter directly impacts the convergence of the root finding process, it should be carefully chosen. Too small a value may cause the numerical solver to fail to converge or to converge to one of several roots in the given parametric interval. This effect will probably be most noticeable along silhouette edges and patch boundaries. On the other hand, too large a value will result in over-refinement of the surface, leading to a deeper bounding volume hierarchy, and therefore, potentially more time per ray. We have found that after some experimentation, one develops an intuition for the sorts of parameters which work for a surface. For an example of a system which guarantees convergence without user intervention, see Toth [26]. This guarantee is made at the price of linear convergence in the root finding procedure. 
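The single-knot-insertion step of Boehm mentioned above is compact enough to sketch. This is an illustrative version for a polynomial curve (the function name and tuple conventions are mine); repeated application gives the kind of refinement used throughout this chapter.

```python
def boehm_insert(ctrl, knots, k, t):
    """Insert one knot at t (with t in [knots[mu], knots[mu+1))) into an
    order-k B-spline curve; returns (new control points, new knot vector).
    The curve itself is unchanged by the insertion."""
    mu = next(j for j in range(len(knots) - 1) if knots[j] <= t < knots[j + 1])
    dim = len(ctrl[0])
    new_ctrl = []
    for i in range(len(ctrl) + 1):
        if i <= mu - k + 1:
            new_ctrl.append(ctrl[i])                     # unaffected prefix
        elif i <= mu:
            a = (t - knots[i]) / (knots[i + k - 1] - knots[i])
            new_ctrl.append(tuple((1 - a) * ctrl[i - 1][c] + a * ctrl[i][c]
                                  for c in range(dim)))  # affine blend
        else:
            new_ctrl.append(ctrl[i - 1])                 # shifted suffix
    return new_ctrl, knots[:mu + 1] + [t] + knots[mu + 1:]
```

Inserting t = 0.5 into a quadratic Bezier span, for instance, reproduces the first level of de Casteljau subdivision.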
3.3.2 Bounding Volume Hierarchy

We build a bounding volume hierarchy using the points of the refined control mesh we found in the previous section. The root and internal nodes of the tree will contain simple primitives which bound portions of the underlying surface. The leaves of the tree are special objects, which we call interval objects, and are used to provide an initial guess (in our case, the midpoint of the bracketing parametric interval) to the Newton iteration. We will now examine the specifics in more detail. The convex hull property of B-spline surfaces guarantees that the surface is contained in the convex hull of its control mesh. As a result, any convex objects which bound the mesh will bound the underlying surface. We can actually make a stronger claim; because we converted our knot vectors into "open" knot vectors in the last section (made the multiplicity of the internal knots k - 1), each nonempty interval [u_i, u_{i+1}) \times [v_j, v_{j+1}) corresponds to a surface patch which is completely contained in the convex hull of its corresponding mesh points. Thus, if we produce bounding volumes for each of these intervals, we will have completely enclosed the surface (refer to Figure 3.4). We form the tree by sorting the volumes according to the axis direction which has greatest extent across the bounding volumes, splitting the data in half, and repeating the process. There remains the dilemma of which primitive to use as a bounding volume. Many different objects have been tried, including spheres [21], axis-aligned boxes [21,25,27], oriented boxes [21], and parallelepipeds [19]. There is generally a tradeoff between speed of intersection and tightness of fit. The analysis is further complicated by the fact that bounding volume performance depends on the type of scene being rendered. We have preferred simplicity, narrowing our choice to spheres and axis-aligned boxes. Spheres have a very fast intersection test.
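The sort-and-halve tree construction described above can be sketched as follows. The dict-based nodes and the centroid sort key are illustrative choices of mine, not the system's actual data structures.

```python
def build_bvh(leaves):
    """Median-split build: compute the enclosing axis-aligned box, sort the
    leaf boxes by centroid along the axis of greatest extent, halve, recurse.
    Each leaf is a (lo, hi) pair of 3-tuples."""
    lo = tuple(min(b[0][a] for b in leaves) for a in range(3))
    hi = tuple(max(b[1][a] for b in leaves) for a in range(3))
    if len(leaves) == 1:
        return {"box": (lo, hi), "leaf": leaves[0]}
    axis = max(range(3), key=lambda a: hi[a] - lo[a])    # greatest extent
    ordered = sorted(leaves, key=lambda b: b[0][axis] + b[1][axis])
    mid = len(ordered) // 2
    return {"box": (lo, hi),
            "left": build_bvh(ordered[:mid]),
            "right": build_bvh(ordered[mid:])}
```

In the actual system the leaves would be the interval objects described above, each carrying its bracketing parametric rectangle for seeding the Newton iteration.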
However, spheres, by definition, can never be flat (see Figure 3.4). Since our intersection routines require surfaces which are locally "flat," spheres did not seem to be a natural choice. Axis-aligned boxes have many advantages. First, they can become flat (at least along axis directions), so they can provide a tighter fit than spheres. The union of two axis-aligned boxes is easily computed. This computation is necessary when building the BVH from the leaves. With many other bounding volumes, the leaves of the subtree must be examined individually to produce reasonable bounding volumes. Finally, many scenes are axis-aligned, especially in the case of architectural walkthroughs. Axis-aligned boxes are nearly ideal in this circumstance. A simple ray-box intersection routine is intuitive, and so we omit its discussion. An optimized version can be found in the paper by Smits [115].

Figure 3.4: Convergence of the bounding volume hierarchy under 4-to-1 subdivision. The top figures show the parametric intervals. The bottom figures show the spheres bounding the corresponding control meshes.

3.3.3 Root Finding

Given a ray as the intersection of planes P_1 = (N_1, d_1) and P_2 = (N_2, d_2), our task is to solve for the roots (u^*, v^*) of

F(u, v) = \begin{pmatrix} P_1 \cdot (S(u, v), 1) \\ P_2 \cdot (S(u, v), 1) \end{pmatrix} = 0.

A variety of numerical methods can be applied to the problem. An excellent reference for these techniques is [116, pp. 347-393]. We use Newton's method for several reasons. First, it converges quadratically if the initial guess is close, which we ensure by constructing a bounding volume hierarchy. Furthermore, the surface derivatives exist and are calculated at a cost comparable to that of surface evaluation. This means that there is likely little computational advantage to utilizing approximate-derivative methods such as Broyden's. Newton's method is built from a truncated Taylor series. Our iteration takes the form

\begin{pmatrix} u_{n+1} \\ v_{n+1} \end{pmatrix} = \begin{pmatrix} u_n \\ v_n \end{pmatrix} - J^{-1}(u_n, v_n) \cdot F(u_n, v_n),

where J is the Jacobian matrix of F, defined as J = (F_u, F_v).
F_u and F_v are the vectors

F_u = \begin{pmatrix} N_1 \cdot S_u(u, v) \\ N_2 \cdot S_u(u, v) \end{pmatrix}, \quad F_v = \begin{pmatrix} N_1 \cdot S_v(u, v) \\ N_2 \cdot S_v(u, v) \end{pmatrix}.

The inverse of the Jacobian is calculated using a result from linear algebra:

J^{-1} = \frac{\mathrm{adj}(J)}{\det(J)}.

The adjoint adj(J) is equal to the transpose of the cofactor matrix

C = \begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix},

where C_{ij} = (-1)^{i+j} \det(J_{ij}) and J_{ij} is the submatrix of J which remains when the ith row and jth column are removed. We find that

\mathrm{adj}(J) = \begin{pmatrix} J_{22} & -J_{12} \\ -J_{21} & J_{11} \end{pmatrix}.

We use four criteria, drawn from Yang [27], to decide when to terminate the Newton iteration. The first condition is our success criterion: if we are closer to the root than some predetermined \varepsilon,

\|F(u_n, v_n)\| < \varepsilon,

then we report a hit. Otherwise, we continue the iteration. The other three criteria are failure criteria, meaning that if they are met, we terminate the iteration and report a miss. We do not allow the new (u_{n+1}, v_{n+1}) estimate to take us farther from the root than the previous one:

\|F(u_{n+1}, v_{n+1})\| > \|F(u_n, v_n)\|.

We also do not allow the iteration to take us outside the parametric domain of the surface:

u \notin [u_{k_u-1}, u_N), \quad v \notin [v_{k_v-1}, v_M).

We limit the number of iterations allowed for convergence:

\mathrm{iter} > \mathrm{MAXITER}.

We set MAXITER to around 7, but in practice the average number of iterations needed to produce convergence is 2 or 3. A final check is made to ensure that the Jacobian J is not singular. While this would seem to be a rare occurrence in theory, we have encountered this problem in practice. In the situation where J(u_k, v_k) is singular, either the surface is not regular (S_u \times S_v = 0) or the ray is parallel to a silhouette ray at the point S(u_k, v_k). (A proof of this assertion is given in the Appendix.) In either situation, to determine singularity, we test |\det(J)| < \varepsilon.
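The iteration and its four termination criteria can be collected into one sketch. This is hedged: the function names, the callable surface interface, and the shared epsilon are all my own conventions.

```python
def newton_intersect(S, Su, Sv, P1, P2, u0, v0, domain, eps=1e-9, max_iter=7):
    """Newton's method for F(u,v) = (P1.(S(u,v),1), P2.(S(u,v),1)) = 0.
    S, Su, Sv map (u, v) to 3-tuples; P1, P2 are (normal, offset) plane pairs;
    domain is ((u_lo, u_hi), (v_lo, v_hi)).  Returns (u, v) or None."""
    dot = lambda n, p: n[0] * p[0] + n[1] * p[1] + n[2] * p[2]
    F = lambda u, v: (dot(P1[0], S(u, v)) + P1[1], dot(P2[0], S(u, v)) + P2[1])
    norm = lambda f: (f[0] * f[0] + f[1] * f[1]) ** 0.5
    (ulo, uhi), (vlo, vhi) = domain
    u, v = u0, v0
    err = norm(F(u, v))
    for _ in range(max_iter):
        if err < eps:
            return (u, v)                       # success: close enough to a root
        f = F(u, v)
        su, sv = Su(u, v), Sv(u, v)
        j11, j21 = dot(P1[0], su), dot(P2[0], su)   # column F_u
        j12, j22 = dot(P1[0], sv), dot(P2[0], sv)   # column F_v
        det = j11 * j22 - j12 * j21
        if abs(det) < eps:
            return None                         # extra check: singular Jacobian
        # (u, v) <- (u, v) - J^{-1} F, using J^{-1} = adj(J) / det(J)
        u -= (j22 * f[0] - j12 * f[1]) / det
        v -= (-j21 * f[0] + j11 * f[1]) / det
        if not (ulo <= u < uhi and vlo <= v < vhi):
            return None                         # failure: left the domain
        new_err = norm(F(u, v))
        if new_err > err:
            return None                         # failure: farther from the root
        err = new_err
    return None                                 # failure: iteration budget spent
```

For a planar surface S(u, v) = (u, v, 0) and a ray pointing straight down, the iteration recovers the intersection in a single step, as expected for a linear problem.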
If the Jacobian is singular, we perform a jittered perturbation of the parametric evaluation point,

\begin{pmatrix} u_{k+1} \\ v_{k+1} \end{pmatrix} = \begin{pmatrix} u_k \\ v_k \end{pmatrix} + \begin{pmatrix} \mathrm{drand48}() \cdot (u_0 - u_k) \\ \mathrm{drand48}() \cdot (v_0 - v_k) \end{pmatrix},

where (u_0, v_0) is the initial guess, and initiate the next iteration. This operation tends to push the iteration away from problem regions without leaving the basin of convergence. Because any root (u^*, v^*) produced by the Newton iteration is approximate, it will almost certainly not lie along the ray r = o + t \cdot d. In order to obtain the closest point along r to the approximate root P = S(u^*, v^*), we perform a projection

t = (P - o) \cdot d.

The approximate nature of the convergence also impacts other parts of the ray tracing system. Often, a tolerance \varepsilon is defined to determine the minimum distance a ray can travel before reporting an intersection. This prevents self-intersections due to errors in numerical calculation. The potential for error is larger in the case of numerical spline intersection than in, say, ray-polygon intersection. Thus, the tolerances will need to be adjusted accordingly. Failure to make this adjustment will result in "surface acne" [117] (see Figure 3.5). An enhanced method for abating acne would test the normal at points less than \varepsilon along the ray to determine whether these points were on the originating surface. Unfortunately, we have found that we cannot rely on modeling programs to produce consistently oriented surfaces. Therefore, our system utilizes the coarser \varepsilon condition above.

Figure 3.5: Left: failure to adjust tolerances may result in surface acne. Right: properly adjusted tolerances solve the problem. Black regions result from truncated ray depth.

3.3.4 Evaluation

The Newton iteration requires us to perform surface and derivative evaluation at a point (u, v) on the surface. In this section, we examine how this can be accomplished efficiently. Prior work on efficient evaluation can be found in [114,118]. We begin by examining the problem in the context of B-spline curves.
We then generalize the result to surfaces. The development parallels that found in [119].

We evaluate a curve c(t) by using refinement to stack k - 1 knots (where k is the order of the curve) at the desired parameter value t^*. The refined curve is defined over a new knot vector \tilde{t} with basis functions \tilde{N}_{l,k}(t) and new control points \tilde{w}_l \tilde{D}_l. Recall the recurrence for the B-spline basis functions:

\tilde{N}_{l,k}(t) = \begin{cases} 1 & \text{if } k = 1 \text{ and } t \in [\tilde{t}_l, \tilde{t}_{l+1}) \\ 0 & \text{if } k = 1 \text{ and } t \notin [\tilde{t}_l, \tilde{t}_{l+1}) \\ \frac{t - \tilde{t}_l}{\tilde{t}_{l+k-1} - \tilde{t}_l} \tilde{N}_{l,k-1}(t) + \frac{\tilde{t}_{l+k} - t}{\tilde{t}_{l+k} - \tilde{t}_{l+1}} \tilde{N}_{l+1,k-1}(t) & \text{otherwise.} \end{cases}

Let t^* \in [\tilde{t}_\mu, \tilde{t}_{\mu+1}). As a result of refinement, t^* = \tilde{t}_\mu = \cdots = \tilde{t}_{\mu-k+2}. According to the definition of the basis functions, \tilde{N}_{\mu,1}(t^*) = 1. There are only two potentially nonzero basis functions of order k = 2, namely those dependent on \tilde{N}_{\mu,1}: \tilde{N}_{\mu,2} and \tilde{N}_{\mu-1,2}. From the recurrence,

\tilde{N}_{\mu,2}(t^*) = \frac{t^* - \tilde{t}_\mu}{\tilde{t}_{\mu+1} - \tilde{t}_\mu} \tilde{N}_{\mu,1}(t^*) + \frac{\tilde{t}_{\mu+2} - t^*}{\tilde{t}_{\mu+2} - \tilde{t}_{\mu+1}} \tilde{N}_{\mu+1,1}(t^*) = 0

and

\tilde{N}_{\mu-1,2}(t^*) = \frac{t^* - \tilde{t}_{\mu-1}}{\tilde{t}_\mu - \tilde{t}_{\mu-1}} \tilde{N}_{\mu-1,1}(t^*) + \frac{\tilde{t}_{\mu+1} - t^*}{\tilde{t}_{\mu+1} - \tilde{t}_\mu} \tilde{N}_{\mu,1}(t^*) = 0 \cdot 0 + 1 \cdot 1 = 1.

Likewise, the only nonzero terms of order k = 3 will be those dependent on \tilde{N}_{\mu-1,2}: \tilde{N}_{\mu-1,3} and \tilde{N}_{\mu-2,3}, with

\tilde{N}_{\mu-2,3}(t^*) = (\cdots) \cdot 0 + \frac{\tilde{t}_{\mu+1} - t^*}{\tilde{t}_{\mu+1} - \tilde{t}_{\mu-1}} \tilde{N}_{\mu-1,2}(t^*) = 1.

The pattern that emerges is that \tilde{N}_{\mu-k+1,k}(t^*) = 1. A straightforward consequence of this result is

c(t^*) = \frac{\sum_i \tilde{N}_{i,k}(t^*) \tilde{w}_i \tilde{D}_i}{\sum_i \tilde{N}_{i,k}(t^*) \tilde{w}_i} = \frac{\tilde{w}_{\mu-k+1} \tilde{D}_{\mu-k+1}}{\tilde{w}_{\mu-k+1}} = \tilde{D}_{\mu-k+1}.

The point with index \mu - k + 1 in the refined control polygon yields the point on the curve. A further analysis can be used to yield the derivative. Given a rational curve

c(t) = \frac{\sum_i N_{i,k}(t) w_i D_i}{\sum_i N_{i,k}(t) w_i} = \frac{D^w(t)}{w(t)},

where D^w_i = w_i D_i, the derivative is given by the quotient rule:

c'(t) = \frac{w(t) (D^w)'(t) - D^w(t) w'(t)}{w(t)^2}.

By the preceding analysis, D^w(t^*) = \tilde{D}^w_{\mu-k+1}. Likewise, w(t^*) = \tilde{w}_{\mu-k+1}.
The derivative of the B-spline basis function is given by

N_{i,k}'(t) = (k-1) \left[ \frac{N_{i,k-1}(t)}{t_{i+k-1} - t_i} - \frac{N_{i+1,k-1}(t)}{t_{i+k} - t_{i+1}} \right].

Evaluating the derivative at t^*, we have

(D^w)'(t^*) = \sum_i \tilde{N}_{i,k}'(t^*) \tilde{D}^w_i = (k-1) \sum_i \tilde{D}^w_i \left[ \frac{\tilde{N}_{i,k-1}(t^*)}{\tilde{t}_{i+k-1} - \tilde{t}_i} - \frac{\tilde{N}_{i+1,k-1}(t^*)}{\tilde{t}_{i+k} - \tilde{t}_{i+1}} \right].

We know the only nonzero basis function of order k - 1 is \tilde{N}_{\mu-k+2,k-1}(t^*) = 1. Therefore,

(D^w)'(t^*) = (k-1) \frac{\tilde{D}^w_{\mu-k+2} - \tilde{D}^w_{\mu-k+1}}{\tilde{t}_{\mu+1} - \tilde{t}_{\mu-k+2}}.

Analogously,

w'(t^*) = (k-1) \frac{\tilde{w}_{\mu-k+2} - \tilde{w}_{\mu-k+1}}{\tilde{t}_{\mu+1} - \tilde{t}_{\mu-k+2}}.

Plugging into the quotient rule for c'(t) and simplifying,

c'(t^*) = \frac{w(t^*) (D^w)'(t^*) - D^w(t^*) w'(t^*)}{w(t^*)^2} = \frac{(k-1) \tilde{w}_{\mu-k+2}}{(\tilde{t}_{\mu+1} - \tilde{t}_{\mu-k+2}) \tilde{w}_{\mu-k+1}} \left[ \tilde{D}_{\mu-k+2} - \tilde{D}_{\mu-k+1} \right].

The result for surface evaluation follows directly from the curve derivation, due to the independence of the parameters in the tensor product, so we shall simply state the results:

S(u^*, v^*) = \tilde{D}_{\mu_v-k_v+1, \mu_u-k_u+1},

S_u(u^*, v^*) = \frac{(k_u - 1) \tilde{w}_{\mu_v-k_v+1, \mu_u-k_u+2}}{(\tilde{u}_{\mu_u+1} - \tilde{u}_{\mu_u-k_u+2}) \tilde{w}_{\mu_v-k_v+1, \mu_u-k_u+1}} \left[ \tilde{D}_{\mu_v-k_v+1, \mu_u-k_u+2} - \tilde{D}_{\mu_v-k_v+1, \mu_u-k_u+1} \right],

S_v(u^*, v^*) = \frac{(k_v - 1) \tilde{w}_{\mu_v-k_v+2, \mu_u-k_u+1}}{(\tilde{v}_{\mu_v+1} - \tilde{v}_{\mu_v-k_v+2}) \tilde{w}_{\mu_v-k_v+1, \mu_u-k_u+1}} \left[ \tilde{D}_{\mu_v-k_v+2, \mu_u-k_u+1} - \tilde{D}_{\mu_v-k_v+1, \mu_u-k_u+1} \right].

The normal n(u, v) is given by the cross product of the first-order partials:

n(u^*, v^*) = S_u(u^*, v^*) \times S_v(u^*, v^*).

If the surface is not regular (i.e., S_u \times S_v = 0), then our computation may erroneously generate a zero surface normal. We avoid these problem areas in our numerical solver by perturbing the parametric points (see Section 3.3.3).
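The stacking argument above is de Boor's algorithm in disguise: inserting t* until it has multiplicity k - 1 computes exactly the affine blends below, and the surviving control point is the point on the curve. A sketch for a polynomial (nonrational) curve, with illustrative names:

```python
def de_boor(ctrl, knots, k, t):
    """Evaluate an order-k B-spline curve at t by repeated knot insertion at t:
    after the triangular blend, the last point is D_{mu-k+1} of the refined
    polygon, i.e., the point on the curve."""
    mu = next(j for j in range(len(knots) - 1) if knots[j] <= t < knots[j + 1])
    d = [ctrl[j] for j in range(mu - k + 1, mu + 1)]
    dim = len(d[0])
    for r in range(1, k):
        for j in range(k - 1, r - 1, -1):
            i = mu - k + 1 + j
            a = (t - knots[i]) / (knots[i + k - r] - knots[i])
            d[j] = tuple((1 - a) * d[j - 1][c] + a * d[j][c] for c in range(dim))
    return d[k - 1]
```

On a quadratic Bezier span this reduces to de Casteljau's algorithm, so the midpoint of the span below evaluates to the expected parabola point.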
3.3.5 Partial Refinement

We still need to explain how to calculate the points in the refined mesh so that we can evaluate surface points and derivatives. What follows is drawn directly from Lyche et al. [114], tailored to our specialized needs. We again formulate our solution in the context of curves, and then generalize the result to surfaces. Earlier, we proposed to evaluate the curve c at t^* by stacking k - 1 t^*-valued knots in its knot vector t to generate the refined knot vector \tilde{t}. The B-spline basis transformation defined by this refinement yields a matrix A which can be used to calculate the refined control polygon D^w from the original polygon P^w:

D^w = A P^w.

We are not interested in calculating the full alpha matrix A, but merely rows \mu - k + 2 and \mu - k + 1, as these are used to generate the points \tilde{D}^w_{\mu-k+2} and \tilde{D}^w_{\mu-k+1} which are required for point and derivative evaluation. Suppose t^* \in [\tau_{\mu'}, \tau_{\mu'+1}). We can generate the refinement for row \mu - k + 1 using a triangular scheme

\alpha_{\mu',0}
\alpha_{\mu'-1,1} \quad \alpha_{\mu',1}
\quad \vdots
\alpha_{\mu'-\nu,\nu} \quad \cdots \quad \alpha_{\mu',\nu}

where \nu is the number of knots we are inserting and

\alpha_{j,0} = \delta_{j,\mu'},
\alpha_{j,p+1} = \gamma_{j,p} \alpha_{j,p} + (1 - \gamma_{j+1,p}) \alpha_{j+1,p},
\gamma_{j,p} = \begin{cases} (t^* - \tau_{\mu'-p+j-(k-1-\nu)}) / d & \text{if } d = \tau_{\mu'+1+j} - \tau_{\mu'-p+j-(k-1-\nu)} > 0 \\ \text{arbitrary} & \text{otherwise.} \end{cases}

A_{\mu-k+1,j} = \alpha_{j,\nu} for j = \mu' - \nu, \ldots, \mu', and A_{i,j} = 0 otherwise. If n knots exist in the original knot vector t with value t^*, then \nu = \max\{k - 1 - n, 1\} - that is to say, we always insert at least one knot. The quantity \nu is used in the triangular scheme above to allow one to skip those basis functions which are trivially 0 or 1 due to repeated knots. As a result of this triangular scheme, we generate basis functions in place and avoid redundant computation of \alpha values for subsequent
| Reference URL | https://collections.lib.utah.edu/ark:/87278/s6pn9mhv |



