Project in Lieu of Thesis
Presented for the Master of Science Degree
The University of Tennessee, Knoxville

Ngozi Sherry Ali
May 2005
ACKNOWLEDGMENTS

I would like to first give thanks to Dr. M. Abidi for allowing me the opportunity to join the IRIS Lab as a graduate research assistant. His support and patience have allowed me to complete one of my many life goals. Thank you for giving me the chance to earn a degree of higher education and to reach heights that I knew I could reach. Saying "thank you" will never be enough to repay you for all you have done; again, I thank you for everything and I am very appreciative. I would like to thank Dr. Koschan for advising me on all my work and projects. I value the guidance that was given to me; thank you for being the best advisor a student could ask for. I would also like to thank Dr. Page for always being there when I needed him; there was never a time when I was in need that you did not offer your assistance. The IRIS Lab has been my family since I first arrived in August of 2003, and I thank all who are a part of the Lab. In closing, I would like to thank Vicki Courtney-Smith and Kim Cate for always making sure I had all the little things that I couldn't get myself. Special thanks go to Rangan and Chris Kammerud for helping me with my research and class work. Thanks to my close friends Sharon Sparks and Syreeta Dickerson for helping me through my first year of school with the support and encouragement that they gave me. Lastly, thanks to my family for always supporting me and my endeavors. You all are the driving force in my life and career; without your love, none of this would matter.
ABSTRACT

The automotive industry has an increasing need for the remanufacturing of spare parts through reverse engineering. In this project we will review the techniques of laser scanning and structured lighting for the reverse engineering of small automotive parts. Laser range scanning uses a CCD camera to capture the profile of a laser line as it passes over an object. Structured lighting projects a light pattern onto an object and likewise uses several cameras to obtain a profile of the object. The objective of the project is to generate part-to-CAD and CAD-to-part reconstructions of the original part for future usage. These newly created 3D models will be added to the IRIS 3D Part Database.
TABLE OF CONTENTS

1 INTRODUCTION
2 ACQUISITION CLASSIFICATIONS AND RELATED WORKS
   2.1 Contact Data Acquisition Techniques
   2.2 Reverse Engineering Applying Contact Techniques
   2.3 Non-Contact Data Acquisition Techniques
   2.4 Reverse Engineering Applying Non-Contact and Hybrid Techniques
   2.5 General Constraints of Data Acquisition Techniques
3 SURFACE RECONSTRUCTION
   3.1 Single Views and Data Segmentation
   3.2 Multiple View Integration and Registration
   3.3 Post-processing Registered Images
4 SYSTEM DESCRIPTIONS AND SETUP
   4.1 IVP Range Scanning Profiling System
   4.2 Genex 3D FaceCam Profiling System
   4.3 Part Identification
5 RESULTS AND DISCUSSIONS
   5.1 IVP Laser Range Data Results
   5.2 Genex Structured Light Data Results
   5.3 Comparison: IVP Laser Range Scanner and Genex 3D FaceCam Data
6 CONCLUSION
REFERENCES
LIST OF FIGURES

Figure 1.1: Flowchart for basic transformation phases of reverse engineering
Figure 1.2: The sequence of steps required for the reconstruction of a model from multiple overlapping scans
Figure 2.1: Classification of data acquisition techniques used in contact and non-contact approaches for reverse engineering systems
Figure 3.1: Principle of a laser triangulation system
Figure 3.2: Textured single view range images generated from the structured lighting system
Figure 3.3: (a) and (b) side view range images; (c) registered range image of the two side views
Figure 3.4: (a) Point cloud information; (b) solid mesh model; (c) smooth reconstructed mesh model
Figure 3.5: CAD model images of the ramp; (a) left/front view; (b) back/left side view; (c) back/left view; (d) front/right view
Figure 3.6: 3D ramp CAD models; (a) left view of the ramp; (b) front view of the ramp; (d) right/top/front side view
Figure 4.1: Arrangement for the IVP Smart Vision Camera and Laser
Figure 4.2: Equipment setup for the IVP Range Scanning System; (a) front view of the IVP Range Scanner; (b) side angled view of the top of the IVP Range Scanner
Figure 4.3: Calibration grid for the IVP Range Scanner
Figure 4.4: IVP Range Scanner user interface
Figure 4.5: Structured light grid pattern projected on the ramp with a neutral tan background
Figure 4.6: Genex 3D FaceCam System
Figure 4.7: Distance specification for data collection
Figure 4.8: Genex 3D FaceCam user interface screen that displays the left, center, and right camera photos of the object
Figure 5.1: Photos of the ramp part; (a) front and side views; (b) back and top views; (c) ramp measurements
Figure 5.2: Photos of the water pump part; (a) top and side views; (b) bottom view
Figure 5.3: Photos of the pulley arm
Figure 5.4: (a) Original image of the water pump; (b) sequenced single view range images of the bottom surface of the water pump generated using our laser range scanner
Figure 5.5: Point cloud information of the side view of the water pump
Figure 5.6: Reconstructed bottom surface of the water pump
Figure 5.7: Reconstructed views of the water pump; (a) reconstructed top view; (b) side view reconstruction; (c) right view; (d) left view
Figure 5.8: Cleaned point cloud data of the side views of the water pump taken with the IVP Range Scanner; (a) top view; (b) bottom view
Figure 5.9: Point cloud model of the water pump; (a) CAD model showing height variations; (b) top view; (c) bottom view
Figure 5.10: Pulley arm; (a) photo of the side view profile of the pulley arm; (b) point cloud CAD model of the bottom side; (c) point cloud CAD model of the top side
Figure 5.11: Textured ramp images using the Genex System; (a) right view; (b) left view; (c) back view
Figure 5.12: Genex System reconstructed views of the water pump; (b) right/back side view; (c) right/angled view; (d) left/angled view; (e) front/angled view; (f) top/left angled view
Figure 5.13: Complete final 3D model of the ramp; (a) front view; (b) back view; (c) side view; (d) bottom view
Figure 5.14: Water pump placed in box for neutral background
Figure 5.15: Textured water pump images using the Genex System; (a) right view; (b) back view; (c) left view; (d) front view
Figure 5.16: Genex System reconstructed bottom surface of the water pump; (a) point cloud data; (b) solid mesh model; (c) unsmooth textured surface; (d) smooth textured surface
Figure 5.17: Genex System reconstructed top surface of the water pump; (a) surface reconstruction with missing data; (b) complete surface reconstruction with filled holes
Figure 5.18: Genex System reconstructed views of the water pump; (a) back view; (b) view of the bottom surface; (c) side view; (d) top view
Figure 5.19: (a) and (b) Water pump photo images; (c - f) final 3D CAD model views of the water pump
Figure 5.20: Standard deviation calculations of the IVP ramp overlapped with the Genex ramp
Figure 5.21: Standard deviation of the IVP ramp overlapped with the Genex ramp model
Figure 5.22: Simulated ramp, showing the measurements of the ramp
LIST OF TABLES

Table 1
Table 2
1 INTRODUCTION

Engineering is a growing field that continues to evolve to suit the rapid changes of the 21st century. Engineering is described as "the application of scientific and mathematical principles to practical ends such as the design, manufacture, and operation of efficient and economical structures, machines, processes, and systems". When we think of engineering, we think of the general meaning of designing a product from a blueprint or plan; this type of engineering is more commonly known as forward engineering. Engineering fields are constantly improving upon current designs and methods to make life simple and easy: simple, meaning that you do not use up valuable time in assembly or in doing a specific task, and easy, meaning how many times you will have to do the process or task. When referring to technology, simple and easy can be directly related to fast and accurate.

An emerging engineering concept is utilizing forward engineering in a reverse way. It takes an existing product and creates a CAD model, for modification or reproduction of the design aspects of the product. This method is more commonly referred to as reverse engineering. Reverse engineering is the opposite of forward engineering: it can be defined as the process of duplicating an existing component by capturing the component's physical dimensions. Reverse engineering is usually undertaken in order to redesign the system for better maintainability or to produce a copy of a system without access to the design from which it was originally produced.

Computer vision is a computing field concerned with artificial intelligence and the image processing of real-world images. Three-dimensional (3D) computer vision uses two-dimensional (2D) images to generate a 3D model of a scene or object. Typically, computer vision requires a combination of low-level image processing to enhance image quality (e.g., remove noise, increase contrast) and higher-level pattern recognition and image understanding to recognize features present in the image. With this knowledge, computer vision applications have been tailored to compete in the area of reverse engineering.

There has been a mandatory need for 3D reconstruction of scenes and objects by the manufacturing industry, the medical industry, military branches, and research facilities. The manufacturing industry utilizes reverse engineering for its fast prototyping abilities and the accuracy associated with the production of new parts. This fast prototyping is done through the use of CAD model designs for inspection purposes. Military branches also utilize reverse engineering to perform inspection tasks that are associated with safety.
One of our laboratory's current focuses is reverse engineering, the 3D reconstruction of objects and scenes from real-world data. The goal of reverse engineering an object is to successfully generate a 3D CAD model of the object that can be used for future modeling of parts where no CAD model exists. We want to generate clean, smooth 3D models, which are free of noise and holes. This requires a strong, robust image acquisition system that can acquire data with a high level of accuracy in a sufficient time frame.

There are several building blocks, or steps, listed in Figure 1.1, which determine the process of building a complete 3D model from range and intensity data: 1. Data Capture; 2. Data Segmentation; 3. 3D CAD Model. Figure 1.1 shows the format of how range image data is acquired, transformed, and generated, and the flowchart can be characterized as a generic basic principle for reverse engineering. The steps shown often overlap during the process of each stage. Our system uses range and intensity images of objects as input; the output is transformed data that is represented as 3D reconstructions of geometric primitives.

[Figure 1.1: Flowchart for basic transformation phases of reverse engineering]

There are many different approaches to acquiring 3D data from objects of various structural shapes. Acquisition can be based on collecting the Z-axis data using linear or area sensors, point detectors, laser radar, laser scanning techniques, or other approaches. All 3D-based machine vision systems ultimately acquire and operate on image data. In addition to these tasks, these systems incorporate the computer power to manage, process, and analyze the acquired data and to make decisions relating the data to the application without operator intervention. This characterizes what is meant by the term "3D-based machine vision".
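The three phases above can be illustrated with a minimal pipeline skeleton. This is only a sketch of the data flow in Figure 1.1; the function names and the toy point data are illustrative, not part of either commercial system discussed later.

```python
import numpy as np

def capture(views):
    """Step 1 - Data capture: collect raw range data, one array per viewpoint."""
    return [np.asarray(v, dtype=float) for v in views]

def segment(scans, z_min=0.0):
    """Step 2 - Data segmentation: drop background/outlier points below z_min."""
    return [s[s[:, 2] > z_min] for s in scans]

def integrate(scans):
    """Step 3 - Data integration: merge registered views into one point cloud."""
    return np.vstack(scans)

# Toy run: two overlapping "views" of the same surface, each an N x 3 array
# of (x, y, z) points.
views = [[[0, 0, 1], [1, 0, 2], [0, 1, -1]],   # z = -1 is background noise
         [[1, 1, 2], [2, 1, 3]]]
model = integrate(segment(capture(views), z_min=0.0))
print(model.shape)  # (4, 3): one merged cloud, ready for meshing into a CAD model
```

In a real pipeline each step is far more involved (the segmentation and integration stages are detailed in Section 3), but the data flow between the stages has exactly this shape.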
Traditional processes for reverse engineering objects and structures from 3D datasets have been initial-data driven (e.g., triangulated models) or parametric-surface driven (e.g., quadratic surfaces). These approaches have been successful for simple parts, but have resulted in reconstructions that contain errors when dealing with more complex structures. Typical errors arise from noisy data or missing data on the surface of the part; other errors can consist of incorrect relative positions of the object. Traditional practices use CMMs, which are coordinate measuring machines that have a touch probe to model the surface for inspection. Industries are looking for a method to improve upon these errors and have migrated toward fast, efficient ways of modeling parts for inspection purposes; today's industries are moving toward better accuracy and faster inspection times. This can be improved through the integration of laser range scanning. When implementing a non-contact measurement solution, the end user has a large array of commercial systems to select from.

To emphasize the purpose of this project, we have chosen two different commercial systems with two different approaches to modeling 3D objects using vision-based technology. The first system is the IVP Range Laser Scanning System and the second is the Genex 3D FaceCam System; the two techniques consist of laser lighting and structured lighting, respectively. Although the structured lighting system is not designed for reverse engineering use, we will compare the modeling aspects of this system for reverse engineering of automotive parts to the laser range system. Since both systems' primary focus is the reconstruction of real-world objects and scenes, we will also investigate the limitations of both systems.

Figure 1.2 is a more detailed description of Figure 1.1 and describes the data flow of our approach applied to both systems. While the approaches are similar, the steps taken may vary for each system. The blue area describes the data capturing section, while the yellow and orange areas highlight the data pre- and post-processing steps; the final outcome is a 3D CAD model. In the data segmentation stage, several steps are taken to generate noise-free, smooth models of the part. In data reduction, data such as noise and outlier or erroneous background information is eliminated; outliers are false data points that are captured during acquisition. Surface smoothing and multi-view registration are included in data integration. Surface smoothing is an additional feature to eliminate noisy data and make the surface of the object more uniform in texture; this can be performed both before and/or after several views of the part are merged. Data segmentation and integration are discussed in depth in Section 3. After all the steps are complete, a final 3D CAD model is generated.
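As a concrete illustration of the data-reduction step, one common way to flag outliers is to compare each point's mean distance to its nearest neighbors against the distribution over the whole cloud. This is a generic sketch with made-up thresholds, not the actual filter used by either commercial package:

```python
import numpy as np

def remove_outliers(points, k=3, std_ratio=2.0):
    """Drop points whose mean k-NN distance exceeds mean + std_ratio * std."""
    pts = np.asarray(points, dtype=float)
    # Full pairwise distance matrix (fine for small clouds; a k-d tree
    # would be used for real scans with millions of points).
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    d.sort(axis=1)
    knn_mean = d[:, 1:k + 1].mean(axis=1)   # skip column 0 (self-distance)
    keep = knn_mean <= knn_mean.mean() + std_ratio * knn_mean.std()
    return pts[keep]

# A flat 5 x 5 patch of surface points plus one far-away false return.
cloud = np.array([[x, y, 0.0] for x in range(5) for y in range(5)]
                 + [[2.0, 2.0, 50.0]])
clean = remove_outliers(cloud)
print(len(cloud), len(clean))  # 26 25: the spurious point is removed
```

The same idea, with the threshold applied per scan line or per view, covers the "erroneous background information" case described above.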
Section 2 of this paper will discuss the related merits and methods of reverse engineering and its techniques. Section 3 will discuss surface reconstruction for 3D models. Following is Section 4, which discusses the applications and techniques of our implemented systems. Finally, we will conclude this paper with the implementation of our procedure and the experimental results; a comparison and summary of the limitations will be expressed in the conclusion section of the paper.

[Figure 1.2: The sequence of steps required for the reconstruction of a model from multiple overlapping scans: 1. Data Capture (acquisition of range images); 2. Data Segmentation (pre-processing, data reduction); 3. Data Integration (post-processing, multi-view registration, surface smoothing, noise filtering, hole filling, next best view planning); 4. 3D CAD Model]
2 ACQUISITION CLASSIFICATIONS AND RELATED WORKS

An important part of reverse engineering is data acquisition. Data acquisition systems are constrained by physical considerations to acquire data from a limited region of an object's surface; therefore, multiple scans of the surface must be taken to completely measure a part. Figure 2.1 classifies the types of techniques used for acquiring 3D data into contact and non-contact methods. After reviewing the most important measuring techniques, the related merits and difficulties associated with these methods are discussed.

[Figure 2.1: Classification of data acquisition techniques used in contact and non-contact approaches for reverse engineering systems. Contact methods: tactile devices (CMMs, robotic arms). Non-contact methods: magnetic, acoustic, and optical techniques (laser triangulation, time-of-flight, stereo analysis, structured lighting, interferometry).]
2.1 Contact Data Acquisition Techniques

There are many different methods for acquiring shape data. Tactile methods represent a popular approach to shape capture. The two most commonly known forms are Coordinate Measuring Machines (CMMs) and mechanical or robotic arms with a touch-probe sensing device, as shown in Figure 2.1. CMMs are often used when high precision is required. These machines can be programmed to follow paths along a surface and collect very accurate, nearly noise-free data. A 3-axis milling machine is an example of a mechanical or robotic arm; fitted with a touch probe, it can be used as a tactile measuring system. It is a contact-type method that is NC-driven and can be programmed to sample points of predefined features efficiently, but like the CMM it is not very effective for concave surfaces. Many other robotic devices are also used because of their low noise and desirable accuracy. Butler provides a comparison of tactile methods and their performance, Sahoo and Menq use tactile systems for sensing complex sculptured surfaces, and Xiong gives an in-depth discussion of measurement and profile error in tactile measurement.

There are disadvantages when using a CMM or robotic arm to model the surfaces of parts. Contact with the surface of an object can damage the object: if the surface texture is soft, holes can be inflicted on the surface. The flexibility of parts makes it very difficult to contact the surface with a touch probe without creating an indentation that detracts from the accuracy of the measurements. CMMs also show difficulties in measuring parts with free-form surfaces, and a part might have indentations that are too small for the probe to reach. Geometric complexity increases the number of points required for accurate measurements, and a part can only be inspected on a sampling basis using a CMM. The time needed to capture points one by one can range from days to sometimes weeks for complicated parts, making CMMs the slowest method of data acquisition. There are also external factors that affect the accuracy of a CMM, the main ones being temperature, vibration, and humidity. The number-one quality of an inspection device should be the ability to quickly obtain large amounts of point data from the part's surface for complete inspection.
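The point-by-point speed penalty is easy to quantify. The figures below are illustrative, not measured values for any specific CMM, but they show why dense tactile sampling of a complex part runs into days:

```python
# Illustrative rates, not measured values for any specific machine.
points_needed = 500_000          # dense sampling of a geometrically complex part
touches_per_second = 1.0         # optimistic rate for a touch-trigger probe
hours = points_needed / touches_per_second / 3600
print(f"{hours:.0f} hours (~{hours / 24:.1f} days)")  # 139 hours (~5.8 days)
```

By contrast, the optical scanners discussed in Section 2.3 capture thousands of points per frame, which is the motivation for the non-contact approaches used in this project.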
2.2 Reverse Engineering Applying Contact Techniques

Reverse engineering is a growing industrial market for manufacturing and development. Various individuals and groups have developed new techniques, which have been improvements to the currently existing ones.

The first technique and method we visit is that of Thompson et al. Their research describes a prototype of a reverse engineering system which uses manufacturing features as geometric primitives for mechanical parts, rather than using triangulated meshes or parametric surface patches. Their method is geared toward reverse engineering of mechanical parts, and their main innovation was to use features to fit the scanned data. The system designs a model composed of mechanical features from a set of 3D surface points that can be defined by users. They have identified a feature-based approach that they state produces highly accurate models, even when the original 3D sensor data has substantial errors. In the feature-based approach, parts were used only if they had the original CAD model. A test part was machined out of aluminum using a 3-axis NC mill, a non-contact digitizer measured the surface points, and new CAD models were generated using the REFAB (Reverse Engineering FeAture-Based) system. The geometric differences between the original and the newly generated model were then computed, so the results of Thompson et al. quantitatively evaluate the accuracy of models produced with the feature-based modeling approach. They state that the resulting models can be directly imported into feature-based CAD systems without loss of the semantics and topological information inherent in feature-based representations. Further details and results can be found in their published work.

The second technique we visit is that of Yu Zhang. This research focuses on the engineering application of reverse engineering: the basic principles of reverse engineering were applied to the design and manufacture of the die of a diesel engine. The process described in the paper runs from object digitization and CAD model reconstruction through to NC machining, using a system built with a coordinate measuring machine and CAD/CAM software. This research claims advantages over the current practice with ordinary CMMs, in which the registration of two different point clouds is performed by matching three points to three points, three spheres to three spheres, or three planes to three planes, and which assumes that the models have already been generated in CAD model form. The die's geometric shape is measured and the data acquired using a CMM in conjunction with KUM measurement software that has a linear scan mode. By scanning the physical object, the measurement data is acquired; the number of points measured is determined automatically by the CMM according to the curvature change of the surface, measured at the tactile point, and the CMM used can measure about 1600 points for each scanned curve. A machine tracing process usually results in structured point sequences with a large number of points and a line structure. First, the format of the measured data is transformed into an acceptable format for the software used; the result of Zhang's work is a self-developed program that realizes this transformation of the measured data from the CMM and KUM software. The data is then filtered and processed in a visualized way, and the processed data is used directly for the creation of the die CAD model. After the CAD model of the die is complete, the NC machining process planning can generate the locations for cutting in the manufacturing application, and the die is finally machined by the NC machine tool using the created CAD model.
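The three-point registration used in the ordinary CMM practice mentioned above reduces to computing one rigid transform from three matched point pairs. A standard way to do this is the SVD-based alignment (often attributed to Kabsch and to Horn); the sketch below is a generic version of that computation, not the KUM software's actual routine:

```python
import numpy as np

def rigid_from_correspondences(A, B):
    """Return rotation R and translation t with R @ A[i] + t ~= B[i]."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)            # centroids
    H = (A - ca).T @ (B - cb)                          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

# Three matched points: B is A rotated 90 degrees about z and shifted in x.
A = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
B = A @ Rz.T + np.array([5.0, 0.0, 0.0])
R, t = rigid_from_correspondences(A, B)
print(np.allclose(A @ R.T + t, B))  # True: the two clouds are registered
```

Three non-collinear correspondences are the minimum that determines the transform, which is exactly why the "three points to three points" convention exists; with noisy measurements, more correspondences are averaged through the same formula.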
2.3 Non-Contact Data Acquisition Techniques

Non-contact methods use light, sound, or magnetic fields to acquire shape from objects. In each case, an appropriate analysis must be performed to determine the positions of the points on the object's surface. Each method has strengths and weaknesses that require the data acquisition system to be carefully selected for the shape capture functionality desired. Optical methods of shape capture are probably the broadest category and are growing in popularity over contact methods, because they have relatively fast acquisition rates. There are five important categories of optical methods: laser triangulation, time-of-flight, interferometry, structured lighting, and stereo analysis. This section will discuss the principles of each method.

Laser triangulation is a method which uses the locations of, and angles between, light sources and photosensitive devices to deduce position. A high-energy light source is focused and projected at a pre-specified angle at the surface of interest. A photosensitive device, usually a video camera, senses the reflection off the surface, and then, by using geometric triangulation from the known angle and distances, the position of a surface point relative to a reference plane can be calculated. Various high-energy light sources are used, but lasers are the most common. The light source and the camera can be mounted on a traveling platform, which then produces multiple scans of the surface; these scans are therefore relative measurements of the surface of interest. The accuracy is determined by the resolution of the photosensitive device and the distance between the surface and the scanner, and triangulation can acquire data at very fast rates. Moss et al. present a detailed discussion of a classic laser triangulation system used to capture shape data from facial surfaces, the use of laser triangulation on a coordinate measuring machine is presented by Modjarred, and Motavalli et al. present a reverse engineering strategy using laser triangulation. These references give a broad survey of the methods, approaches, and limitations of triangulation.

Measuring distance by sensing the time-of-flight of emitted light beams is the way a ranging system works. Practical methods are usually based on lasers and pulsating beams: in laser range finders, the time-of-flight is used to determine the distance traveled. Jarvis presents an in-depth article on time-of-flight range finders, giving detailed results and analysis, and Moring et al. describe a range finder based on time-of-flight calculations, with some information on accuracy and performance.

Interferometric methods measure distance in terms of wavelengths using interference patterns. A high-energy light source is used to provide both a beam of monochromatic light to probe the object and a reference beam for comparison with the reflected light. In principle, other parts of the electromagnetic spectrum could also be used. This can be a very accurate method of measurement, since visible light has a wavelength of the order of hundreds of nanometers, while in most reverse engineering applications distances are in the centimeter to meter range.

Structured lighting involves projecting patterns of light upon a surface of interest and capturing an image of the resulting pattern as reflected by the surface. The image must then be analyzed to determine the coordinates of the data points on the surface. A popular method of structured lighting is shadow Moire, where an interference pattern is projected onto a surface, producing lighted contour lines. These contour lines are captured in an image and analyzed to determine the distances between the lines; this distance is proportional to the height of the surface at the point of interest, and so the coordinates of surface points can be deduced. Will and Pennington use grids projected onto the surface of objects to determine point locations, while Wang and Aggarwal use a similar approach with stripes of light and multiple images. Structured lighting can acquire large amounts of data with a single image frame, but the analysis to determine the positions of the data points can be rather complex.

The final optical shape capture method of interest is stereo image analysis. This method is often referred to as a passive method, since no structured lighting is used; active methods are distinguished from passive methods in that artificial light is used in the acquisition of data. Typically, stereo pairs are used to provide enough information to determine height and coordinate position, and the relative locations of landmarks in the multiple images are related to position. Correlation of the image pairs and of the landmarks within the images are the big difficulties with this method, and this is why active methods are preferred. Another stereo image analysis approach deals with lighting models, where an image is compared to a 3D model: the model is modified until the shaded images match the real images of the object of interest. This is similar to structured lighting methods in that frames are analyzed to determine coordinate data; however, the analysis does not rely on projected patterns. Instead, intensity patterns within the images can be used to determine coordinate information.
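The triangulation geometry described above reduces to one formula: with a known baseline b between the laser and the camera, and measured angles between the baseline and the outgoing and returning rays, the law of sines gives the range to the lit point. A minimal sketch, with made-up baseline and angle values:

```python
import math

def triangulate_range(baseline, alpha, beta):
    """Perpendicular distance from the baseline to the lit surface point.

    baseline: laser-to-camera separation; alpha, beta: angles (radians)
    between the baseline and the laser ray / camera ray, respectively.
    """
    # Triangle formed by the baseline and the two rays; the apex angle at
    # the surface point is pi - alpha - beta. Law of sines gives the length
    # of the camera ray, which is then projected perpendicular to the baseline.
    cam_ray = baseline * math.sin(alpha) / math.sin(alpha + beta)
    return cam_ray * math.sin(beta)

# 100 mm baseline with both rays at 45 degrees: the point sits 50 mm away.
print(triangulate_range(100.0, math.radians(45), math.radians(45)))  # 50.0 (mm)
```

The formula also makes the accuracy trade-off above concrete: for fixed angular resolution of the camera, the range error grows with the distance between the surface and the scanner.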
The final types of data acquisition methods we will examine are acoustic, where sound is reflected from a surface; magnetic, where a magnetic field touches the surface; and hybrids of both contact and non-contact methods. Acoustic methods have been used for decades for distance measuring: a sound source is reflected off a surface, and the distance between the source and the surface is determined knowing the speed of sound. The method is essentially the same as time-of-flight. Sonar is used extensively for this purpose, and automatic focus cameras often use acoustic methods to determine range. Acoustic interference or noise is often a problem, as is determining focused point locations. Dynamic imaging is used extensively in ultrasound devices, where a transducer can sweep a cross-section through an object to capture material data internal to the object.

Magnetic field measurement involves sensing the strength of a magnetic field source. Magnetic touch probes are used which usually sense the location and orientation of a stylus within the field. A trigger allows the user to record specific point data only once the stylus is positioned at a point of interest. MRI (magnetic resonance imaging) activates atoms in the material to be measured and then measures the response; magnetic resonance is used in applications similar to ultrasound when internal material properties are to be measured.

To conclude this section, all measuring methods must interact with the surface or internal material through some phenomenon, either light, sound, magnetism, or physical surface contact. The speed with which the phenomenon operates, as well as the speed of the sensor device, determines the speed of the data acquisition. The sensor type selected also determines the amount of analysis needed to compute the measured data and the achievable accuracy.

Hybrid modeling systems are a combination of contact and non-contact systems. The first type usually consists of a coordinate measuring machine with integrated laser-based technology. The second may consist of some other form of non-contact technique, such as software and laser-based technology integrated as one system. They can also be a combination of NC coding and laser scanning techniques. Hybrid-based applications will be discussed in the next section.

2.4 Reverse Engineering Applying Non-Contact and Hybrid Techniques

The first non-contact techniques that we explore are those of Fan and Tsai. They present a measurement system that combines two CCD cameras, a line laser, and a three-axis motion stage, forming an optical non-contact scanning setup that works with the mathematical method of direct shape error analysis for engineering purposes. With it, the profile measurement of free-form objects can be analyzed; matching the images of the free-form surfaces with sufficient efficiency and accuracy is the final result.
Fan and Tsai's research adopted the bicubic uniform B-spline interpolation approach for the shape error analysis method. This method was used to describe the first set of measurement points and to generate reconstructed multiple patches of the surface. They developed a computer program to analyze the shape error with respect to the referenced surface. The main function of the shape error analysis is to sum the squared nearest distances between the measured points and the reference surface. Based on this principle they developed an algorithm called the direct method, or DSEAM (Direct Shape Error Analysis Method), an adapted variation of the shape error algorithm. The B-spline surface construction and the DFPM (Davidon-Fletcher-Powell Method) algorithm are the foundation of their approach. They report the rigid body transformation from the optimal shape error results and the optimal parameters obtained using DSEAM, and they have reported a reduction in the shape error from their technique compared to the initial shape error of the objects. The results of their approach are demonstrated on a free-form surface and on a car rear-view mirror case. Refer to Fan and Tsai for detailed information on the DFPM algorithm.

The first hybrid-based technique reviewed is that explored by Jim Clark. A hybrid triangulation-based handheld system integrated with a coordinate measuring machine is used for this approach. He demonstrates that non-contact techniques in conjunction with advanced surfacing and inspection software yield sufficient results for the mechanical design process. The technique focuses on modeling complex and free-form shapes of mechanical objects by comparing contact and non-contact methods for digitizing the surface. The results from Clark were modeled using a water pump, which was scanned using both a contact and a non-contact system, and the results were compared based on the surface quality and the point cloud data obtained.

The effects of ambient lighting are also discussed for non-contact systems. Whether or not a system can reject the ambient lighting depends on the projected color of light on the object. Clark summarizes by writing that if a system projects laser light, then the unwanted frequencies can be filtered out. If the system projects white light, then no particular frequencies can be blocked out, because the ambient light might be carrying the information required to measure the object. Therefore, white-light area-based systems will be limited in their ability to handle ambient lighting versus laser-based systems. Further reading of his work can be viewed in the references.

Chow et al. developed an integrated laser-based reverse engineering and CAM machining system called RECSI (Reverse Engineering and CAM System Integration). They evaluate the feasibility of using concurrent engineering and reverse engineering methods with the data from laser scanning to remanufacture complex geometrical parts. Further reading on their results can be viewed in the references.
Chow et al. developed and implemented a process planning system that interfaces with a tightly coupled CAD modeling system and CAM tooling path. The system utilizes NC coding generated from the software. The goal of the system is to show that an integrated reverse engineering and CAM machining system can make the remanufacturing process more automatic and efficient. The first phase of their research demonstrates that laser scanning and CAD model reconstruction can duplicate aircraft structural components accurately and efficiently within a given tolerance. The second phase is the actual development of the system. Chow et al.'s results come from the comparison between the original parts and the duplicated parts; the samples were used to evaluate the accuracy and efficiency of their concurrent reverse engineering system. They reported that the errors of the overall integrated system were close to the calculated errors from the reverse engineering feasibility study, demonstrating the accuracy and efficiency of their laser-based reverse engineering system. The comparison table of the results and the time required to complete each step can be viewed in their paper.

To summarize this section: Fan and Tsai implemented a non-contact system that utilizes CCD cameras and laser triangulation for reverse engineering; Clark implemented a non-contact system that works in conjunction with surfacing and inspection software, and he also discussed some of the issues regarding the implementation of such systems for manufacturing purposes; and Chow et al. developed an integrated laser-based reverse engineering and CAM machining system.

2.5 General Constraints of Data Acquisition Techniques

There are many practical problems with acquiring useable data, the major ones being:

a. Calibration
b. Accuracy
c. Accessibility
d. Occlusion
e. Fixture (placement)
f. Multiple views
g. Noise and incomplete data
h. Statistical distributions of parts
i. Surface finish

Calibration is an essential part of setting up and operating a position-measuring device. Any sensing must be calibrated so as to, first, accurately determine parameters such as camera positions and orientations and, second, model and allow for as accurately as possible systematic sources of error. Systematic sensing errors can occur through lens distortions, non-linear electronics in cameras, and similar sources.
Most of the papers cited present some discussion of accuracy ranges for the various types of scanners, but all methods of data acquisition require accurate calibration. Optical scanners' accuracies typically depend largely on the resolution of the video system used. The distance from the measured surface and the accuracy of the moving parts of the scanning system also contribute to the overall measurement error.

Accessibility is the issue of scanning data that is not easily acquired due to the configuration or topology of the part. Through holes are typical examples of inaccessible surfaces. The central geometry of the part can also make some surfaces of the object difficult to scan. This usually requires multiple scans, but can also make some data impossible to acquire with certain methods.

Occlusion is the blocking of the scanning medium due to shadowing or obstruction. This is primarily a problem with optical scanners, though acoustic and magnetic scanners may also have this problem. Multiple scanning devices are one approach to obviate this problem. As well as self-occlusion, occlusion may also arise due to fixtures: typically, parts must be clamped before scanning. The geometry of the fixture used becomes a part of the scan data. Elimination of fixture data is difficult and often requires multiple views.

Multiple views introduce errors in the acquired data because of registration problems (see more details later).

Noise can be introduced in a multitude of ways, from extraneous vibrations, specular reflections, etc. Noise elimination in data samples is a difficult issue, and there are many different filtering approaches that can be used. An important question is whether to eliminate the noise before, after, or during the model building stage. Noise filtering is often an unavoidable step in reverse engineering, but note that it also destroys the "sharpness" of the data, i.e., typically sharp edges disappear and are replaced by smooth blends, which in some cases may be desirable but in other cases may lead to serious problems in identifying features. There are times when the noise should not be eliminated at all. Moreover, because of the nature of optical and even tactile scanning, the data close to sharp edges is also fairly unreliable.

A similar problem is the restoration of missing data: there are missing parts, or parts obscured by other elements, but we need to reconstruct the whole surface from just the visible parts. This is partly necessary due to the above-mentioned inaccessibility and occlusion problems. Further ideas on surface extensions, intersections, and patching holes are given in the last part of the paper.

Statistical distribution of parts deals with the fact that any given part which is scanned only represents one sample in a distributed population. When reverse engineering methods attempt to reproduce a given shape, the tolerance distribution of the scanned part must be considered. This gives rise to multiple part scans and the averaging of the resulting data. However, it may be somewhat impractical to attempt to sample many parts from a population, and indeed, often only one is available.

The final issue we bring up is the surface finish of the part being measured. Smoothness and material coatings can dramatically affect the data acquisition process. Tactile or optical methods will produce more noise with a rough surface than with a smooth one. Reflective coatings can also affect optical methods.

Imagine an ideal scanner: the object is "floating" in 3D space, so it is accessible from all directions. The data is captured in one coordinate system with high accuracy, with no need for noise filtering and registration, and the measurement is adaptive, i.e., more points are collected at highly curved surface portions. Unfortunately, such a device does not exist at present. However, despite the practical problems discussed, it is possible to obtain large amounts of surface data in reasonably short periods of time even today using the methods described. Once the measured data is acquired, the process of recognition and model building can begin. The imperfect nature of the data, particularly inaccuracy and incompleteness, makes these steps fairly difficult, as will be seen in the following sections.
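The edge-blending trade-off of noise filtering discussed above can be seen with even the simplest filter. The sketch below is our own illustration, not a filter from any cited system: a moving average applied to a step-shaped height profile suppresses noise but also turns the sharp edge into a ramp.

```python
# Moving-average smoothing of a 1D height profile: each sample is
# replaced by the mean of its neighborhood. Noise is reduced, but the
# step edge is spread over several samples (the "smooth blend" effect).

def smooth(profile, window=3):
    """Return the profile smoothed with a centered moving average."""
    half = window // 2
    out = []
    for i in range(len(profile)):
        lo, hi = max(0, i - half), min(len(profile), i + half + 1)
        out.append(sum(profile[lo:hi]) / (hi - lo))
    return out

# A perfectly sharp step edge: flat at height 0, then flat at height 1.
step = [0.0] * 5 + [1.0] * 5
print(smooth(step))  # the edge is now a ramp rather than a sharp jump
```

This is why, as noted above, the choice of when (and whether) to filter matters: once the edge has been blended, feature identification near it becomes harder.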
3 SURFACE RECONSTRUCTION

Obtaining a surface representation of objects and scenes has always been one of the most challenging and fundamental problems of 3D computer vision. In this section we will discuss the procedures for generating a successful 3D model of an object from single and multiple views. We will also discuss various surface reconstruction algorithms that have been developed and employed.

3.1 Single Views and Data Segmentation

An iterative process for 3D reconstruction of surfaces in static environments is defined by the following steps (see also Figure 1.2, which depicts this process):

1. Acquiring range images of the part
2. Pre-processing the acquired data (data segmentation)
3. Post-processing the data (data integration)
4. Final 3D CAD model

Range image acquisition is the first step of the process. In image acquisition, the CCD camera captures the scene, which shows the intersection between the laser plane and the object, which is a line. The result is a grey-scale image. The image acquisition process yields a number of selected range images; these are depth range images. Figure 3.1 shows a sequence of range images obtained from the IVP Range Scanner. The range images that are generated correspond to the view angles at which the user has positioned the part for capture. They are single views because one view alone cannot complete the reconstruction of the object. The black areas in between the blades of the water pump show occluded areas; one purpose of taking multiple views is to eliminate this missing data from the water pump.

The structured lighting system that we use for our project generates a different set of images when performing the image acquisition process. The images obtained from this system are color range images, and they also show the texture of the part or object that is being scanned. Figure 3.2 below is an example of the type of images that are generated from the structured lighting system used in this project. Although the two systems produce different types of images, the first scans from both are 2D representations of the real object, and although both systems generate 2D views of the part, they are still considered single views. In either case, they are all part of the acquisition step of the process.
Figure 3.1: (a) Original image of the water pump; (b)-(f) sequenced single-view range images of the bottom surface of the water pump, generated using our laser range scanner.
Pre-processing the range images is the step following data collection. Pre-processing is more commonly referred to as cleaning of the collected data. As mentioned before, pre-processing includes reducing erroneous data, filtering noise, and filling holes that may have occurred as a result of occlusions. Pre-processing is applied to the single views individually, before they are integrated and registered together. The additional steps of the process are described in the next few sections.

Figure 3.2: Textured single-view range images generated from the structured lighting system. (a) Left view of the ramp; (b) front view of the ramp; (c) back/left view of the ramp; (d) front/right view of the ramp.

3.2 Multiple View Integration and Registration

For all objects and parts to be scanned, we require a geometric model of the whole object's surface. Ideally, as stated above, we would have the part "floating" in 3D space (in a fixed position with a fixed orientation), so that the scanner could move around the object and capture all sides in a single coordinate system. In practice, however, the object will have to rest on some surface, so part of it is inaccessible to the scanner. Furthermore, if the scanner is fixed in position, it will be able to capture data from an even more limited region of the object's surface at any one time. Thus, generally, it will be necessary to combine multiple views taken with the object placed in different orientations in front of the scanner. Each range image is a dense sampling of the 3D geometry of the surface from a particular viewpoint. The individual range images must be aligned, or registered, into a common coordinate system so that they can be integrated into a single 3D model. We want to take these sets of registered range images of the entire surface of the object and from them produce a corresponding set of parametric surface patches.

For this reason, it is important to decide how many scans will be taken of the part or object. This can be determined based on the size and material makeup of the part. The more scans that are taken of the part, the longer the whole scanning process will take. Also, some views may have more detail while others may have lower resolution.

In high-end systems, the scanner may be attached to a coordinate measurement machine that tracks its position and orientation with a high degree of accuracy, so that registration may be performed by accurate tracking. Passive mechanical arms as well as robots have been used. In most situations, the approximate position and orientation of the scanner can be tracked with fairly inexpensive hardware and used as a starting point to avoid searching a large parameter space. The most general formulation of the problem, which makes no assumptions on the type of features (in the range and/or associated intensity images) or on an initial approximate registration, is extremely hard to solve. Automatic feature matching for computing the initial alignments is an active area of research (recent work includes [3, 6, 9-12, 24, 27, 35]).

The goal of collecting multiple views of range images is to take these sets of range images and register the images. This is the second step of the post-processing stage for 3D reconstruction. When reconstructing objects we want to have overlapping views of the object; the overlapping views should consist of the same area of the object being scanned. The main purpose for overlapping the different views is to overcome occlusion in the object by matching various similar features on the object. This is commonly referred to as feature matching or extraction. In our system, special software is used to match similar features and points on the different scanned surfaces. Three or more points are matched based on similar corresponding features and feature locations. After the points have been selected for matching, they are matched based on the distance computed using the ICP (Iterative Closest Point) algorithm in the software. Figure 3.3 is an example of how our software registers two sets of range image views: Figure 3.3a contains one single view of the water pump, Figure 3.3b is another single view, and Figure 3.3c shows the registration of both views overlapping. The overlapping red region shows how they are aligned in the same shell.
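The ICP step can be sketched in a few lines. This is our own minimal point-to-point illustration of the algorithm's structure, not the implementation inside the commercial software: each iteration matches every source point to its nearest target point, then solves for the rigid rotation and translation that best aligns the matched pairs via SVD (the Kabsch solution).

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares R, t such that dst is approximated by src @ R.T + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=10):
    """Alternate nearest-neighbor matching and rigid re-fitting."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[d2.argmin(axis=1)]   # brute-force closest point
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# Toy check: a 3x3x3 grid, slightly rotated and shifted, snaps back onto
# the original, because every point's nearest neighbor is its true match.
g = np.arange(3, dtype=float)
dst = np.array([[x, y, z] for x in g for y in g for z in g])
a = 0.05                                   # small rotation about the z axis
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.03, -0.02, 0.01])
aligned = icp(src, dst)
print(np.abs(aligned - dst).max() < 1e-8)
```

As the text notes, ICP needs a reasonable starting alignment: the nearest-neighbor matching only finds correct correspondences when the initial misalignment is small, which is why the three-point feature selection precedes it.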
Figure 3.3: Point cloud information of the side views of the water pump. (a) and (b) side-view range images; (c) registered range image of the two side views.

3.3 Post-processing Registered Images

After all the views of the object have been obtained, they are ready for post-processing. The registration of the views may take several tries to achieve the optimal alignment of the images that is desired. After the two desired views are registered to each other at the ideal feature locations, they can be merged together using various post-processing options offered in commercial software or by generated programs. As mentioned above in Figure 1.2, post-processing of range images includes surface smoothing and multiple view registration. Post-processing operations are often necessary to adapt the model resulting from scan integration to the application at hand.

There are two techniques that are commonly used for this process: the first is surface-based methods and the second is volumetric methods. Surface-based methods create the surface by locally parameterizing the surface and connecting each point to its neighbors by local operations. Turk and Levoy's zippering approach works by triangulating all the range scans individually. Redundant overlapping triangles are then eroded for removal from the partial meshes. The intersecting regions are locally re-triangulated and then trimmed to create one seamless surface. The vertex positions are then readjusted to reduce error, based on a weighted average of distances from sample points on the individual range image scans. Other methods make use of the partial connectivity that is implicit in the range images.

In volumetric methods, the range images are used to carve out a spatial volume; in other words, they carve away the solid that lies between the scanner and each sampled data point. Solid model evolution from a series of range images is demonstrated by Reed and Allen. For example, line-of-sight error compensation is done by computing a scalar field that approximates the signed distance to the true surface. Volumetric-based methods are useful for very large datasets and are also well suited to producing watertight models: with the data obtained from each range image, an object definition can be obtained without holes in the surface.

In our system, we use special software to perform the post-processing of the individual views. Rapidform2004 is the 3D modeling software that we use to generate our complete 3D CAD models of our parts. One of the built-in functions in the software is for merging the different range image views into one united shell. Once merged, the registration process is made permanent. This is the step before completing the 3D reconstruction of an object or part.

We are allotted three options when performing this post-processing step: surface merging, volumetric merging, and point cloud merging. The surface merging option merges shells of range views that have been aligned by the registration process into one united shell. Overlapping shell regions between the two separate shells are removed, and neighboring boundaries are connected together with newly added polygons. The volumetric merging option merges multiple shells into a single shell by allocating their geometry information to a reference voxel model with a volumetric method. A voxel is a word created from two words (volume and pixel) to describe the 3D space of a pixel-based image; it is a volume element of rectangular shape of the subject being imaged. Volume-based merging is useful when surface-based meshing creates poor merge results; this polygon-merging tool helps to merge scanned data with many holes, messy boundaries, or badly aligned data.
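The volumetric idea can be illustrated with a toy signed-distance field. The sketch below is our own one-dimensional illustration of the principle described above, not Rapidform's implementation: each scan contributes a signed distance to the surface it observed at every voxel, the contributions are averaged, and the zero crossing of the averaged field is taken as the merged surface.

```python
# Signed-distance merging in 1D: several noisy scans of the same flat
# surface are fused by averaging their per-voxel signed distances; the
# zero crossing of the merged field is the reconstructed surface position.
import numpy as np

def merge_scans(voxel_centers, scan_surfaces, weights=None):
    """Weighted average of per-scan signed distances at each voxel center."""
    weights = weights or [1.0] * len(scan_surfaces)
    sdf = np.zeros_like(voxel_centers, dtype=float)
    for w, s in zip(weights, scan_surfaces):
        sdf += w * (voxel_centers - s)   # signed distance to this scan's surface
    return sdf / sum(weights)

# Three noisy scans of a wall at x = 5.0; the merged zero crossing lands
# at their (equally weighted) mean position.
x = np.linspace(0.0, 10.0, 101)
sdf = merge_scans(x, [4.9, 5.0, 5.1])
zero = x[np.argmin(np.abs(sdf))]
print(zero)
```

Because every voxel ends up with a value, extracting the zero level set yields a closed surface, which is the intuition behind the watertight-model property mentioned above.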
4 SYSTEM DESCRIPTIONS AND SETUP

There are many profiling systems that can be used to capture data of objects for reconstruction. As mentioned in Section 2, the different options fall under the categories of contact and non-contact. To emphasize the objective of this project and the flexibility of non-contact options, we have chosen two different profiling systems for our project. Both profiling systems will be discussed in the next two sections.

4.1 IVP Range Scanning Profiling System

There are a number of 3D laser scanners commercially available. The first profiling system that we employ to reconstruct a 3D model of an object is the IVP Range Scanning System. The IVP Ranger SC386 is a laser triangulation scanner for range profiling using the MAPP family of Smart Vision Cameras.

The vast majority of 3D non-contact systems employ triangulation. Triangulation systems are often classified as being either active or passive. Passive methods, such as stereo or photogrammetric systems, use only cameras. Industrial settings, however, use systems that are active, in that they project some form of illumination onto the object and measure the position of the illumination on the object. This is done using some form of camera or light-sensing electronics. A typical triangulation scheme projects a point or line (sheet) of laser light onto an object. A typical triangulation sensor diagram is shown below in Figure 4.1.

Laser beams are normally categorized as laser stripe or point type, and laser-scanning devices can also be classified on those bases. A point-type laser scanner obtains only one point at a time. In contrast, a stripe-type scanner projects a line of laser light, called a stripe, onto the surface so that several points can be acquired at once. Depending on the specific design needs, a particular type is chosen. A laser-type scanner radiates a line of laser beams and observes the intersection of the object and the laser through electronic cameras. This describes a static system that only measures points where the laser line and the object meet. To fully measure an object in 3D space, the sensor is moved in the x, y, and z directions, based on the configuration of the machine, so as to fully cover the area of the object. The system consists of a beam projector radiating the laser beam, while the laser scanner and a CCD camera sense the beam reflected from the surface. By sending out laser beams that are reflected from the surface and received by the CCD cameras, the 3D laser-scanning device can acquire the surface information of the part. This system uses a laser stripe for acquisition.
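The geometry behind active triangulation can be worked through with similar triangles. The sketch below uses a textbook pinhole model with hypothetical numbers; it is an illustration of the principle, not the IVP Ranger's internal calibration. The camera sits at the origin looking along +z with focal length f, the laser sits at baseline b and fires at angle theta toward the optical axis, and the depth z of the illuminated point is recovered from its image coordinate u.

```python
# Laser triangulation with a pinhole camera: the laser spot at depth z
# lands laterally at x = b - z*tan(theta), and images at u = f*x/z.
# Solving for z gives z = f*b / (u + f*tan(theta)).
import math

def depth_from_pixel(u, f, b, theta):
    """Recover depth z from the sensor coordinate u of the laser spot."""
    return f * b / (u + f * math.tan(theta))

# Round trip: place a point at z = 500 mm, project it, recover z.
f, b, theta = 8.0, 100.0, math.radians(30)   # focal (mm), baseline (mm), angle
z_true = 500.0
x = b - z_true * math.tan(theta)             # lateral position hit by the beam
u = f * x / z_true                           # perspective projection onto sensor
print(round(depth_from_pixel(u, f, b, theta), 6))  # 500.0
```

The formula also shows why the baseline and field of view in Figure 4.1 matter: a longer baseline b spreads depth changes over more sensor pixels, improving range resolution at the cost of more occlusion.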
The IVP Laser Scanner consists of a thin laser light and a camera that are used to obtain the profile of objects. Figure 4.1 shows the angular placement of the camera relative to the laser light for the IVP Ranger System.

Figure 4.1: Principle of a laser triangulation system, showing the baseline distance, range distance, field of view, sensor (CCD camera), and object.

A thin laser light is projected onto the object, and the CCD sensor of the camera detects the scan line (the peak of the reflected laser light). The profiles are displayed as a set of range images. The camera and laser are fixed on a stable structure that moves in a horizontal direction; because the laser and camera are fixed on the same belt, they move at the same speed. The speed of the laser is controlled through the software associated with the system, and it can be adjusted depending on the quality of scanning that is to be achieved. Figure 4.2 shows the IVP system's equipment and the configuration used for data acquisition; Figure 4.2b shows the Smart Camera and motor in more detail. The black box houses the motor for the system.

Figure 4.2: Equipment setup for the IVP Range Scanning System. (a) Front view of the IVP Range Scanner; (b) side angled view of the top of the IVP Range Scanner.

For data to be acquired using the IVP Range Scanner, the system must be correctly calibrated before every successful set of scanned data. Calibration of the IVP Range Scanner involves identifying the correct world coordinate data for the system so that the measurements of the scanned object match in both real-world data and transformed data. To calibrate the IVP, a calibration grid is used to number all the coordinate data points. There are forty total points that must be identified, ranging from 0 to 39. Figure 4.3 is an example of the calibration grid used to calibrate the IVP Range Scanner. In the calibration grid, all the black dots must turn blue to be recognized by the sensor.

Figure 4.3: Calibration grid for the IVP Range Scanner.

Figure 4.4 shows the user interface for the IVP Range Scanner. In this user interface window, the camera image of the object is displayed along with the object profile and the range image that is acquired.

Figure 4.4: IVP Range Scanner User Interface.

During calibration of the IVP Range Scanner, the lights are turned off so that the calibration grid can be viewed by the camera source; the lights are also turned out to obtain the correct object profile of the calibration grid.
Figure 4. This can be achieved with the sequence of projections using a grid of . (refer to Figure 4.2 Genex 3D FaceCam Profiling System As previously mentioned in section 2. 4.2). structured lighting is the projection of a light pattern (plane. the camera and object. grid. green and blue repeating. The field-of-view refers to the measured distance between the lasers’ light. The light grid has a rainbow color effect with the colors red. they are coded (Coded Light Approach) with different brightness or different colors.Chapter 4: System Descriptions and Setup 25 the white line on the calibration grid in Figure 4. the stripe pattern is projected by multiple stripes at once onto the scene. This method requires only a small number of images to obtain a full depth-image. This is the basic principle behind depth perception for machines. The distortion along the detected profile is used to compute the depth information. In most cases.3.5 shows an example of a structured lighting grid projected onto our metallic ramp object. In order to distinguish between the stripes. structured lighting can be described as active triangulation. In our acquisition system.5: Structured light grid pattern projected on the ramp with a neutral tan background The Coded Light Approach (CLA) is an absolute measurement method of direct codification . Figure 4. Active triangulation is a simple technique to achieve depth information with the help of structured lighting to scan a scene with a laser plane and to detect the location of the reflected stripes. The goal of the object profile during calibration is to make sure that the entire object will be in the field-of-view of the camera and laser during data acquisition. or more complex shape) at a known angle onto an object . Scanning the object with the light pattern constructs 3D information of the shape of the object.
Direct codification can be achieved with a sequence of projections using a grid of vertical lines (light or dark). All the lines are numbered from left to right. In order to achieve a pattern where each pixel coordinate can be directly obtained, it is ideal to use a large range of color values, or to reduce the range and introduce periodicity into the pattern. Direct codification is usually constrained to neutral-colored objects or colors that are not highly saturated. There are two common forms of coded light approaches: coding based on grey levels and coding based on color.

In our second acquisition system for this project, we use the Genex 3D FaceCam System for reverse engineering purposes (see Figure 4.6 below). Our purpose for selecting this system is to explore the accuracy and limitations of the machine for the reverse engineering of automotive parts. Figure 4.6a is the Genex 3D FaceCam 500 System used in this project.

Figure 4.6: Genex 3D FaceCam System

In our system, three regular cameras, a digital camera, and a single projector are used, versus the single camera and single projector mentioned earlier. The digital camera is located on top of the center lens, and the projector is located under the center lens. The use of three cameras yields three separate images in the results: right, left, and centered images are obtained from the different camera lenses because of their viewing positions in the system setup.

In this system configuration, there is a specified distance for how far the object can be from the Genex 3D FaceCam System when acquiring data. The total distance is 85 cm, with an allotted tolerance of ±15 cm. Figure 4.7 shows the allotted distance when acquiring data from the Genex 3D FaceCam System. Also, as mentioned previously in this section, the background should be a neutral color distinct from the object, so that when eliminating the background information, the two can be easily distinguished.

Figure 4.7: Distance specification for data collection

In Figure 4.8, the coded light pattern projected onto the ramp can be seen. This snapshot was taken directly from the user interface screen of the Genex 3D FaceCam system after the left side of the ramp data was collected. Three different view angles are captured with the three cameras to generate a complete model of the left side of the object.

Figure 4.8: Genex 3D FaceCam User Interface screen displaying the left, center, and right camera photos of the object
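The stripe-numbering idea behind the coded light approach can be made concrete with a small sketch. The grey-level variant projects a short sequence of black-and-white stripe patterns so that every column receives a unique binary code; Gray coding is a common choice because adjacent columns differ in only one bit, which limits decoding errors at stripe boundaries. This is an illustrative sketch, not the color-based coding used by the Genex system.

```python
def gray_code(n):
    """Reflected binary Gray code of n; adjacent codes differ in one bit."""
    return n ^ (n >> 1)

def stripe_patterns(n_cols, n_bits):
    """One black/white stripe image per bit: column c is lit in
    pattern k iff bit k of gray_code(c) is set.  Capturing all
    n_bits images gives every column a unique code."""
    return [[(gray_code(c) >> k) & 1 for c in range(n_cols)]
            for k in range(n_bits)]

def decode(bits):
    """Invert the Gray code observed at one pixel to its column index."""
    g = sum(b << k for k, b in enumerate(bits))
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

patterns = stripe_patterns(n_cols=8, n_bits=3)
observed = [p[5] for p in patterns]  # bits seen at column 5 over time
print(decode(observed))  # -> 5
```

Eight columns need only three projected patterns; in general, 2^k columns are distinguished with k images, which is why this method needs so few exposures for a full depth image.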
5 RESULTS AND DISCUSSIONS

In this section, we present the results of our complete modeling process: data acquisition, data segmentation (pre-processing and post-processing), and 3D CAD model generation. We will show a comparison of the acquired data to the original data, as well as a comparison of the data results obtained from both systems.

5.1 Part Identification

Our research is geared toward the reconstruction of automotive parts. For our part selection, there was no predetermined criterion; the main purpose was to select parts that were ideal for our system setup and that could be easily rotated. All the selected parts are composed mostly of metal. For this project, three parts (objects) have been identified: a water pump, a ramp, and an arm pulley.

The first part is shown below in Figure 5.1. The ramp object in Figure 5.1 is not classified as an automotive part; this specific ramp was specially designed for this project. It was designed on the basis of obtaining ground-truth information for the structured lighting system used in our experiments. The main reason for using a ramp shape is that it should be easy to measure and compare the real-world dimensions to the CAD coordinate measurements after the complete 3D model has been generated. The measurements of the ramp are 4 inches on the base for both length and height. The sloped side measures 3.4 inches, while the two shorter sides measure 1.5 inches. The corner angles on the ramp are 90 degrees for the back and bottom surfaces. With the current position of the ramp according to the photo below (Figure 5.1), the side with the holes is a perfect 45-degree angle. If the part is flipped to have the holes on top, the angle is slightly higher at 47 degrees. This difference in angles does not pose any major difficulty in obtaining a 3D model of the part. The weight of the ramp is about 48 ounces (1,587 grams).
Figure 5.1: Photos of the ramp part: (a) front and side views, (b) back and top views, (c) ramp measurements

The water pump in Figure 5.2 was selected because of its complexity in shape and size. The water pump is 10 inches in length, 3.5 inches wide at the largest area, and 2.5 inches wide at the lower base. The total height of the water pump is about 4 inches on the larger half and 1.5 inches on the flat end. The water pump's top base screw has a black tinted color. Due to the water pump's non-symmetrical shape, there is no defined center of gravity; that is, if the water pump were placed on its opposite side, it would not be balanced.
Figure 5.2: Photos of the water pump part: (a) top and side views, (b) bottom view

Figure 5.3 shows the pulley arm, selected as the third object to reconstruct. The pulley arm measures about 11.5 inches in length, and its height is 2.8 inches from the ground or working surface. The width has three different measurements: the pulley's black circular ring measures 1.5 inches, the center measures 2.5 inches, and the lower base measures 1.5 inches. The pulley has a black circular ring located on the top end of the part. The part also contains circular holes that may pose a challenge in modeling due to occlusion. All measurements were taken by hand.

Figure 5.3: Photos of the pulley arm: (a) top view, (b) bottom view
In the next section, we display our results from both the IVP Range Scanner and the Genex 3D FaceCam System. The results are separated into three preliminary sections. Each section is a different set of scans of the part, taken at different times. Each set of scans was taken under varying conditions, including the ambient lighting from the room and outside lighting from the windows and doors.

5.2.1 IVP Laser Range Data Results

The first set of data results we discuss are the results from the ramp using the IVP Range Scanner. The ramp posed a challenge while scanning because of its surface finish: the ramp has a highly reflective surface that caused areas of the ramp to be occluded while collecting data at different orientations. Figure 5.4, a through d, shows some CAD model examples of the ramp; the various views represent different angles of the ramp. Figure 5.4a is the right side of the ramp, and this figure also shows part of the front view. Figure 5.4b is the left side of the ramp, showing the other half of the front view from a different angle. Figure 5.4c is the bottom view of the ramp, and Figure 5.4d is the back view of the ramp showing the holes. Some of the details of the holes have been lost due to smoothing of the ramp's surface.

Figure 5.4: CAD model images of the ramp: (a) left/front view, (b) right side view, (c) bottom view, (d) back view

More complete CAD models can be seen in Figure 5.5. In these CAD images, the holes have been filled and the ramp's surface has been smoothed a second time. We then obtain a complete 3D CAD model of the ramp.
Figure 5.5: 3D ramp CAD models: (a) left/front side view, (b) back/left side view, (c) bottom view, (d) right/top/front side view

The second set of data collected using the IVP Range Scanner was of the water pump. The water pump image can be seen in Section 2.1, Figure 2.1. The lighting factor for this system does not affect the data as much as it affects the Genex System data, because a filter can be placed on the camera lens to filter out any unnecessary light. For our experiments, we did not use the filter because the light sources in our room environment were not a major issue.

Figure 5.6 shows the first attempt at reconstructing the bottom of the water pump. Figure 5.6a is a merge of the different views used to obtain this CAD model. After merging the views using the volumetric merge technique, overlapping areas created holes in the model. The holes were filled to produce a more complete, watertight model. Part a of Figure 5.6 is the solid mesh model, and part b shows the point cloud information after the holes in part a were filled. The point cloud information shows the distances (spaces) between the points, while the solid modeling technique shows a smooth, continuous surface. The range images used to reconstruct the bottom surface can be seen in Figure 3.2 above. Figure 5.6c is the smooth mesh model after hole filling was applied.
Figure 5.6: Reconstructed bottom surface of the water pump: (a) point cloud information, (b) solid mesh model, (c) smooth reconstructed mesh model

Figure 5.7a is a reconstruction of the top view of the water pump. It is displayed in varying colors to show the depth information of the water pump: blue represents the highest part of the water pump, orange represents the surface closest to the ground, and green represents the medium height level. This image can be compared to the original photo in Figure 5.2. Figure 5.7b is the depth information, or rotated side view, of the top angle. This view gives a more vivid description of the depth of the water pump. This top view does not have the side views of the water pump merged to it.
Figure 5.7: Reconstructed views of the water pump: (a) reconstructed top view, (b) side view reconstruction

The images in Figure 5.8 depict the attempts at modeling the right and left sides of the water pump. The areas with occluded data must be captured from a different angle to complete the side views. Figure 5.8a and b are the CAD model point cloud data; both show the height information in varying colors. Parts c and d of Figure 5.8 are the second attempts at modeling the side views, and the missing data can be seen clearly in these views. These views will be merged together to complete the side profile of the water pump.

Figure 5.9 shows the complete 3D CAD model of the water pump obtained using the IVP Range Scanner. In this figure, the complete front view of the water pump is shown. Again, the varying colors show the height changes in the water pump.

The pulley arm results are displayed in Figure 5.10 a and b below. In this figure, the original photo image of the pulley arm is shown as well as the point cloud information. Figure 5.10a is the bottom side of the pulley arm and Figure 5.10b is the top side. The color variations in the CAD model images represent height relative to the laser light. To complete this model, additional views must be merged with the current views to obtain a successful 3D model of the pulley arm. This will eliminate the holes and occluded areas of the pulley arm.
Figure 5.8: Cleaned point cloud data of the side views of the water pump taken with the IVP Range Scanner: (a) right view, (b) left view, (c) right view and (d) left view

Figure 5.9: Point cloud model of the water pump: (a) CAD model showing height variations, (b) top view, (c) back view
Figure 5.10: Pulley arm: (a) photo of the side view profile of the pulley arm, (b) point cloud CAD model of the bottom side, (c) point cloud CAD model of the top side

5.2.2 Genex Structured Light Data Results

To collect data using the Genex 3D FaceCam System, there must be a neutral background to contrast against the object being scanned. For the first set of data collection, we used a box as the neutral background. Figure 5.14 shows an example of the water pump placed in the box to control the light while still maintaining a neutral background. The box (Figure 5.14) also served the purpose of controlling the ambient light present from the window and room lighting. Controlling the ambient light was important in the case of the ramp because, as mentioned before, the part is highly reflective. We wanted to eliminate as much as possible of the light being reflected off the part back into the camera lens; this minimizes the occlusion in the data sets.

Figure 5.11 shows a few textured range images collected from the Genex System. The different images show different views and orientations of the ramp. In Figure 5.11, parts a and b, the ramp is positioned on a small black object to ensure that the detailed edges of the ramp are captured when collecting the data. In parts c and d, the black regions on the ramp are areas that were not captured due to the positioning of the object and lighting factors.
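The benefit of a known, neutral background can be sketched as a simple segmentation step: any pixel close in color to the background is discarded before reconstruction. This is only an illustration of the idea; the Genex software's actual background-elimination method is not documented here, and the colors and tolerance below are made up.

```python
import numpy as np

def remove_background(rgb, bg_color, tol=40.0):
    """Mask out pixels whose Euclidean RGB distance to the known
    neutral background color is within tol; True marks the object."""
    diff = rgb.astype(float) - np.asarray(bg_color, dtype=float)
    dist = np.linalg.norm(diff, axis=-1)
    return dist > tol

# Tiny synthetic image: tan background with one dark metallic pixel.
tan = (210, 180, 140)
img = np.full((2, 2, 3), tan, dtype=np.uint8)
img[0, 1] = (90, 90, 95)       # object pixel
mask = remove_background(img, tan)
print(int(mask.sum()))         # -> 1 (only the object pixel survives)
```

The same logic explains why a highly reflective part is troublesome: saturated highlights can drift toward the background color and be discarded by mistake.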
To solve this problem, more scans are taken of the ramp and merged with the already acquired range images.

Figure 5.11: Textured ramp images using the Genex System: (a) back view, (b) right/back side view, (c) left view and (d) front view

In the images that were captured of the ramp, there is data missing in sections of the ramp, because these parts of the ramp reflected the light that was projected onto the object during data capture. Examples of the missing data can be seen clearly in the CAD models of the ramp displayed in Figure 5.12, which shows the first attempts at reconstructing the ramp using the Genex System. Figure 5.12a is the right side of the ramp, and Figure 5.12b is the back view. Parts c and d of Figure 5.12 are the reconstructed front and left views. In this reconstruction, some of the edge detail is missing (occluded) due to the edges being reflective. More views of the ramp must be taken at different angles to fill in the missing data. All the views can be compared to the original photo image in Figure 5.1 of Section 5.1.
Figure 5.12: Genex System reconstructed views of the ramp: (a) right view, (b) back view, (c) front view and (d) left view

Figure 5.13 shows the complete 3D CAD models of the ramp obtained from the Genex System; this is the second attempt at modeling the ramp. Compared to Figure 5.12, these CAD model images show the edge details that were missing and also have a smoother finish. Figure 5.13a is the front CAD model view, and Figure 5.13b is the back CAD model view. In Figure 5.13b, some misalignment of the views can be seen, causing the unleveled edge detail. With proper smoothing, this unevenness can be fixed.
Figure 5.13: Complete final 3D model of the ramp: (a) front view, (b) back view, (c) side view, (d) bottom view

Figure 5.14: Water pump placed in box for neutral background
Figure 5.15 shows a few of the textured water pump range images that were captured using the Genex 3D System. The image contrast has been enhanced for better viewing of the captured details.

Figure 5.15: Textured water pump images using the Genex System: (a) top view, (b) view of the bottom surface, (c) side view and (d) top view

Figure 5.16 shows the CAD models of the water pump; these are the first attempts at reconstructing the bottom surface of the water pump. Figure 5.16a is the point cloud CAD model of the bottom surface. These images also show some of the background from the box that was captured; the excess background appears in the raw data files from the Genex 3D System and is easily cleaned away using data reduction techniques in the software. Figures 5.16 b through d are more detailed CAD models of the bottom surface. The blades of the bottom surface can be seen clearly, and the edges become more distinct as the images are viewed from b through d. Figure 5.16d also has a smoother surface finish compared to b and c, although the edges of the water pump are not yet refined or smooth. The reconstruction efforts yielded successful results, capturing most of the detail of the water pump. To obtain the detail of the holes along the outer brim of the water pump, more views can be merged into the current model.
Figure 5.16: Genex System reconstructed bottom surface of the water pump: (a) point cloud data, (b) solid mesh model, (c) unsmooth textured surface and (d) smooth textured surface

Figure 5.17 displays the first and second attempts at reconstructing the top surface of the water pump. In the first attempt, there are occlusions in the center of the pump; the occluded data is recovered with more range scans. Some hole filling was also performed on the CAD model, creating a more complete and smooth surface in the final model. However, with any smoothing technique that is used, some of the detail of the CAD model will be lost. The loss of detail can be seen on the ridges along the outer brim of the holes of the model in Figure 5.17b. Nevertheless, a good percentage of the detail is still maintained in the second attempt of Figure 5.17b.

Figure 5.17: Genex System reconstructed top surface of the water pump: (a) surface reconstruction with missing data and (b) complete surface reconstruction with filled holes
Figure 5.18 below shows some CAD models of the side views of the water pump using the Genex System. Part a of Figure 5.18 is the right side view and part b is the left side view; these images show more detail of the sides of the water pump. The views of the right and left sides are used to complete the top and bottom surfaces of the water pump. Figure 5.18c is another right side view, angled to show how much of the detail is captured when the part is repositioned at a new orientation. Figure 5.18d is a merging of two different views to complete the left side of the water pump; merged together, they show a more complete top and side view. Once the left side is complete, the right side CAD model can be merged onto the existing model. As the CAD images are viewed from Figure 5.18a through f, it becomes more distinct how the views are merged together to create a complete model.

Compared to the original water pump image, the side profile of the CAD model still contains some unsmooth surfaces. By applying the smoothing technique to the model again, the unsmooth surfaces will even out. There is, however, a drawback to smoothing the surface several times: the extra smoothing results in the loss of important details, as the smaller details of the edges and raised surfaces may also be smoothed away.

The final merging can be seen below. Figure 5.19 shows four different views of the water pump. The first two images in Figure 5.19 are the original photos of the water pump, placed there for comparison with the 3D model in c through f. The views show the front, right side, left side, and back of the water pump; a better angle of the views can be seen in parts c and d of the image. This 3D CAD model shows all the details of the original water pump.
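The smoothing trade-off noted above can be illustrated with the simplest mesh filter, Laplacian smoothing, which moves every vertex toward the average of its neighbors. One pass reduces noise, but repeated passes also flatten genuine features, which is why detail on edges and raised surfaces erodes. This generic sketch is not the specific smoothing filter of our software.

```python
import numpy as np

def laplacian_smooth(points, neighbors, lam=0.5, iters=1):
    """Move each vertex a fraction lam toward the average of its
    neighbors; repeated passes flatten noise and genuine detail alike."""
    pts = points.astype(float).copy()
    for _ in range(iters):
        avg = np.array([pts[n].mean(axis=0) for n in neighbors])
        pts += lam * (avg - pts)
    return pts

# A 1D height profile along a scanline with one sharp 1 mm ridge.
heights = np.array([[0.0], [0.0], [1.0], [0.0], [0.0]])
nbrs = [[1], [0, 2], [1, 3], [2, 4], [3]]
once = laplacian_smooth(heights, nbrs, iters=1)
many = laplacian_smooth(heights, nbrs, iters=10)
print(round(float(once[2, 0]), 2))    # -> 0.5  (one pass halves the ridge)
print(bool(many[2, 0] < once[2, 0]))  # -> True (more passes erode it further)
```

The ridge never returns once smoothed away, which is why we smooth sparingly and prefer acquiring additional views over aggressive filtering.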
Figure 5.18: Genex System reconstructed views of the water pump: (a) right view, (b) left view, (c) right/angled view, (d) left/angled view, (e) front/angled view and (f) top/left angled view
Figure 5.19: (a) and (b) Water pump photo images; (c)–(f) final 3D CAD model views of the water pump
5.3 Comparison: IVP Laser Range Scanner and Genex 3D FaceCam Data

In this section, a comparison of the IVP Range Scanner and the Genex 3D FaceCam is presented. We first compare the ramp models generated from both systems. Figure 5.20 shows rotated examples of the standard deviation calculated from both ramp models. In this figure, the IVP ramp and the Genex ramp are overlapped on top of each other to compare the surface differences. The blue regions represent the surfaces that are touching.

Figure 5.20: Standard deviation calculations of the IVP ramp overlapped with the Genex ramp, (a)–(e) rotated views
That is, in the overlapping surfaces, the blue region represents a distance of 0 mm between the two surfaces; the position 0.0 means zero coordinate distance. The red region represents a maximum deviation of 3.9 mm, as shown in Figure 5.21. The standard deviation is calculated by registering the two models into one shell and then using the standard deviation option in the software to perform the calculations. The average deviation between the surfaces is calculated to be about 0.91693 mm. The distance values shown here may differ slightly from the ones shown in real time when the deviation is calculated, because the points used to determine the measurements are chosen manually, which leaves room for imprecision.

Figure 5.21: Standard deviation of the IVP ramp overlapped with the Genex ramp model

Table 1 below shows the derived measurements of the ramp compared to the actual measurements. The derived measurements come from the 3D CAD models generated from the data collected using both systems. The measurements are off from the actual measurements because the picking of the points is manual and not always precise. Figure 5.22 shows the sides of the ramp whose measurements are used in the table.
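The overlap comparison described above can be sketched as a point-to-nearest-point deviation computation between the two models. The snippet below builds two toy samplings of the same patch offset by 0.9 mm, standing in for the two ramp shells; the commercial software's algorithm may differ (for example, point-to-surface distances and spatial indexing rather than brute force).

```python
import numpy as np

def deviation_map(model_a, model_b):
    """Distance from each point of model A to the nearest point of
    model B (brute force; real tools use spatial indexing)."""
    d = np.linalg.norm(model_a[:, None, :] - model_b[None, :, :], axis=2)
    return d.min(axis=1)

# Toy stand-in for the two ramp models: the same surface patch
# sampled by both "systems", with a 0.9 mm offset between them.
xs, ys = np.meshgrid(np.linspace(0, 4, 10), np.linspace(0, 4, 10))
ivp = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
genex = ivp + np.array([0.0, 0.0, 0.9])
dev = deviation_map(ivp, genex)
print(round(float(dev.mean()), 2), round(float(dev.max()), 2))  # -> 0.9 0.9
```

A color map like Figures 5.20 and 5.21 is then just this per-point deviation mapped onto a blue-to-red scale.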
Figure 5.22: Simulated ramp showing the measured sides (base 4 in, height 4 in, width 3 in, short side 1.5 in)

Table 1: Ramp measurements derived from both systems

  Dimension    Actual             IVP Laser Range Scanner   Genex 3D FaceCam
  Width        3 in (76.2 mm)     2.95 in (74.93 mm)        2.93 in (74.422 mm)
  Height       4 in (101.6 mm)    3.86 in (98.044 mm)       3.62 in (91.948 mm)
  Length       4 in (101.6 mm)    3.78 in (96.012 mm)       3.6 in (91.44 mm)
  Short side   1.5 in (38.1 mm)   1.41 in (35.814 mm)       1.3 in (33.02 mm)

Table 2 is a comparison of the IVP Range Scanner and the Genex 3D FaceCam system. This comparison table was generated based on observations of the systems' performance. Both systems are user friendly as far as setup and data collection are concerned. The flexibility of the two systems is comparable, in that they can both be easily moved and offer overall flexibility in acquiring data. The contrast is that the IVP Range Scanner requires some assembly; this is primarily because the current setup was built in house according to our specifications.

The Genex system has the advantage of needing fewer views when the object is highly complex. Fewer views means there is less occlusion in the data, which makes it easier to overlap the views to recreate the object. Take the water pump, for example (Figure 5.2): because the side of the water pump is small in area, the Genex system is able to capture the side and some area of the top or bottom of the water pump in a single view. The IVP Range Scanner requires fixing the object at particular angles to capture the same profile. This could require placing the object on a smaller object that is able to absorb the laser light or be hidden from the laser. A further contrast appears when setting up the IVP Range Scanner to acquire data.
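The unit conversions in Table 1 can be checked, and the absolute errors of each system derived, with a few lines; the inch values are taken from the table, and 1 inch is exactly 25.4 mm.

```python
def mm(inches):
    """Exact inch-to-millimeter conversion (1 in = 25.4 mm)."""
    return inches * 25.4

# Inch values as listed in Table 1: (actual, IVP, Genex).
rows = {
    "width":  (3.00, 2.95, 2.93),
    "height": (4.00, 3.86, 3.62),
    "length": (4.00, 3.78, 3.60),
    "side":   (1.50, 1.41, 1.30),
}
for name, (actual, ivp, genex) in rows.items():
    print(f"{name:6s} actual {mm(actual):7.2f} mm | "
          f"IVP off by {mm(actual) - mm(ivp):5.2f} mm | "
          f"Genex off by {mm(actual) - mm(genex):5.2f} mm")
```

Both systems under-measure every dimension, with the largest discrepancies on the 4-inch height and length, consistent with the manual point picking noted above.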
Before any data is collected with the IVP Range Scanner, the system must be calibrated. In the process of calibrating the IVP, the world coordinate points have to be input manually to make sure that the data is transformed properly into the 3D software. The data must also have some y-scaling applied to it to make sure that the acquired data measurements are similar to the real measurements of the part. The Genex 3D FaceCam does not have this problem. As mentioned previously, what does affect the Genex system is the ambient lighting in the room. The ambient lighting (if too much is present) can saturate the object, causing distortion in the acquired data.

The noise in the images from both systems is not a big factor when collecting data. The noise for the IVP can be reduced as long as the speed of the scan is sufficient to produce a smooth surface; to achieve a smooth surface, the speed should be slowed so as to avoid rigid jerks of the laser as it scans the profile of the object. The Genex system has the option of reducing noise when post-processing is performed on the collected data.

When scanning, the IVP creates a shadow effect around the object because of the laser light reflecting off the object. The Genex system does not have this problem, because it captures data from the front of the object as opposed to at an angle like the IVP Range Scanner.

After the scans of the object have been collected, the different views must be registered; both systems pose the same problem when registration of different views comes into play. The way to overcome this problem is to plan the next best view, capturing the object from angles where there is overlap between the surfaces.

Table 2: Observed comparison of the IVP Range Scanner and the Genex System

  IVP Range Scanner:
  1. Creates a shadow effect when scanning
  2. Difficulty positioning objects (complexity in shape)
  3. Needs more overlapping views
  4. Calibration of the system and y-scaling after scans
  5. Manual input of the world coordinate information
  6. If calibration is not correct, it must be redone
  7. Portability of the system is not an option
  8. Difficult registration when the object is not symmetrical
  9. Less noise in scans with the new setup

  Genex System:
  1. Creates distortion from the ambient lighting
  2. Difficulty positioning objects with the current system setup
  3. Needs fewer views to complete a 3D object
  4. No calibration or y-scaling
  5. No manual conversion of the world coordinate data
  6. Overall system is user friendly
  7. Portability is manageable
  8. Difficult registration when the object is not symmetrical
  9. Overall system has better resolution
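The registration step that both systems depend on can be sketched by its core computation: the least-squares rigid transform between two overlapping views. Real pipelines such as ICP alternate this step with nearest-neighbor matching when correspondences are unknown; the sketch below assumes known correspondences and synthetic data, and is not the algorithm of our 3D software.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    for known correspondences (Kabsch algorithm via SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

# Recover a known rotation + translation between two copies of a view.
rng = np.random.default_rng(1)
src = rng.normal(size=(50, 3))
theta = np.radians(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([2.0, -1.0, 0.5])
R, t = best_rigid_transform(src, dst)
print(bool(np.allclose(src @ R.T + t, dst)))  # -> True
```

With noisy, partially overlapping scans the matching step can lock onto the wrong correspondences, which is why both systems struggle to register views of non-symmetrical objects without sufficient overlap.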
6 CONCLUSION

Reverse engineering of geometric models and parts for CAD use is a rapidly evolving discipline in which interest is currently high. This is due in part to the recent commercial availability of active non-contact systems that can produce a level of accuracy sufficient for many applications. In this project, we have made several achievements in the reconstruction efforts of modeling our selected parts using two different systems and two different techniques. Our achievements are:

• Literature survey on reverse engineering using CMM, laser, and structured lighting applications.
• Successfully modeling the top and bottom surfaces of the water pump using the IVP Range Scanner.
• Successfully generating a 3D model of the ramp using the Genex 3D FaceCam System.
• Successfully modeling the water pump using the Genex System.

As we revisit the data acquisition systems that were chosen for this project, we discuss some of the benefits of using both systems and highlight some of our achievements in generating the 3D CAD models. Using a laser-based triangulation system and a structured lighting system offers many different benefits that make both systems ideal for 3D reconstruction, and the IVP Range Scanner and the Genex System have different advantages that make them optimal choices for data acquisition tasks. The Genex System is more user friendly than the IVP Range Scanner because there is no calibration involved in the setup process. There are, however, necessary steps that must be followed to ensure that the Genex System's components are not affected when operating the system; following the correct steps when turning the system on and off guarantees the longevity of the equipment.

A laser triangulation-based system does not have the problem of ambient lighting affecting the acquisition of data. The IVP Range Scanner has the advantage that ambient lighting is filtered better through the use of filters that can be attached to the lens; for the parts used in our experiments, the filter was not required. The structured lighting system, in contrast, is affected by the amount of ambient lighting and natural lighting sources. The ambient lighting, however, was controlled through the use of additional backgrounds placed around the part. With the use of smaller objects, the ambient lighting can also be better controlled. This plays an important role when scanning objects that have highly reflective surfaces.

The Genex System can be relocated at the discretion of the user; because the system is flexible in location, different angles can be captured and the ambient lighting can be better controlled. However, because of the structural setup of the IVP Range Scanner, relocating and adjusting the IVP System's components is far too tedious. For the water pump, positioning was a challenge, and more scans were required to produce a complete 3D model of the part. After multiple scans were acquired, the water pump and the ramp had CAD models generated from the data.

Using the laser scanning technique and the structured light technique to model parts both proved to have successful and promising results. Even though the two techniques modeled the parts using different methods, the post-processing of the data was similar, and both systems produced results that are less noisy and have smoother surface textures. The importance of obtaining multiple single-view profiles was also discussed. If multiple views of all the selected parts are acquired, the 3D CAD models can continue to be improved upon; the models will show less occlusion or missing data, and will be free of noise and abnormal surfaces caused by surface reflectance or object positioning.
Vol. 1999. The American Heritage Dictionary of English Language. 2000. Xiong. Zhang. “Harmonic Maps and Their Applications in Surface Matching”. Fourth Edition. D. Hebert. “Iterative Point Matching for Registration of Free-Form Curves and Surfaces”. 524–530. 1994. 139.     . Zhang and M. Vol. “Computer Aided Measurement of Profile Error of Complex Surfaces and Curves: Theory and Algorithm”. Z.References 54  Y. pp. 472-475. International Journal of Machine Tools and Manufacturing. 2003. No. Journal of Materials Processing Technology. pp. 3. “Research into the engineering application of reverse engineering technology”. 13(2):119–152. 30. L. Zhang. pp 339-357. International Journal of Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR ’99). 1990. Y. Houghton Mifflin Company.