A Project in Lieu of Thesis Presented for the Master of Science Degree
The University of Tennessee, Knoxville

Ngozi Sherry Ali
May 2005
ACKNOWLEDGMENTS

I would like to first give thanks to Dr. M. Abidi for allowing me the opportunity to join the IRIS Lab as a graduate research assistant. His support and patience have allowed me to complete one of my many life goals. Thank you for giving me the chance to earn a degree of higher education and to reach heights that I knew I could reach. Saying "thank you" will never be enough to repay you for all you have done. Again, I thank you for everything and I am very appreciative.

I would like to thank Dr. Koschan for advising me on all my work and projects. I value the guidance that was given to me. Thank you for being the best advisor a student could ask for. I would also like to thank Dr. Page for always being there when I needed him; there was never a time when I was in need that you did not offer your assistance. The IRIS Lab has been my family since I first arrived in August of 2003, and I thank everyone who is a part of the Lab.

In closing, I would like to thank Vicki Courtney-Smith and Kim Cate for always making sure I had all the little things that I couldn't get myself. Special thanks go to Rangan and Chris Kammerud for helping me with my research and class work. Thanks to my close friends Sharon Sparks and Syreeta Dickerson for helping me through my first year of school with the support and encouragement that they gave me. Lastly, thanks to my family for always supporting me and my endeavors. You all are the driving force in my life and career. Without your love, none of this would matter.
ABSTRACT

The automotive industry has an increasing need for the remanufacturing of spare parts through reverse engineering. In this project we review laser scanning and structured lighting techniques for the reverse engineering of small automotive parts. In laser range scanning, a CCD camera captures the profile of a laser line as it passes over an object. In structured lighting, a light pattern is projected onto an object and several cameras capture the illuminated surface to obtain a profile of the object. The objective of the project is to generate part-to-CAD and CAD-to-part reconstructions of the original part for future use. These newly created 3D models will be added to the IRIS 3D Part Database.
TABLE OF CONTENTS

1 INTRODUCTION
2 ACQUISITION CLASSIFICATIONS AND RELATED WORKS
  2.1 Contact Data Acquisition Techniques
  2.2 Reverse Engineering Applying Contact Techniques
  2.3 Non-Contact Data Acquisition Techniques
  2.4 Reverse Engineering Applying Non-Contact and Hybrid Techniques
  2.5 General Constraints of Data Acquisition Techniques
3 SURFACE RECONSTRUCTION
  3.1 Single Views and Data Segmentation
  3.2 Multiple View Integration and Registration
  3.3 Post-processing Registered Images
4 SYSTEM DESCRIPTIONS AND SETUP
  4.1 IVP Range Scanning Profiling System
  4.2 Genex 3D FaceCam Profiling System
  4.3 Part Identification
5 RESULTS AND DISCUSSIONS
  5.1 IVP Laser Range Data Results
  5.2 Genex Structured Light Data Results
  5.3 Comparison: IVP Laser Range Scanner and Genex 3D FaceCam Data
6 CONCLUSION
REFERENCES
LIST OF FIGURES

Figure 1.1: Flowchart for basic transformation phases of reverse engineering
Figure 1.2: The sequence of steps required for the reconstruction of a model from multiple overlapping scans
Figure 2.1: Classification of data acquisition techniques used in contact and non-contact approaches for reverse engineering systems
Figure 3.1: Principle of a laser triangulation system
Figure 3.2: (a) surface reconstruction with missing data and (b) complete surface reconstruction with filled holes
Figure 4.1: IVP Range Scanner: (a) front view of the IVP Range Scanner, (b) side angled view of the top of the IVP Range Scanner
Figure 4.2: Equipment setup for the IVP Range Scanning System; arrangement for the IVP Smart Vision Camera and Laser
Figure 4.3: Calibration grid for the IVP Range Scanner
Figure 4.4: IVP Range Scanner User Interface
Figure 4.5: Structured light grid pattern projected on the ramp with a neutral tan background
Figure 4.6: Genex 3D FaceCam System
Figure 4.7: Distance specification for data collection
Figure 4.8: Genex 3D FaceCam User Interface screen that displays the left, center and right camera photos of the object
Figure 4.9: Photos of the ramp part: (a) front and side views, (b) back and top views
Figure 4.10: Photos of the water pump part: (a) top and side views, (b) bottom view
Figure 4.11: Photos of the pulley arm
Figure 5.1: (a) Original image of the water pump; (b) sequenced single view range images of the bottom surface of the water pump generated using our laser range scanner
Figure 5.2: Textured single view range images generated from the structured lighting system: (a) and (b) side view range images, (c) registered range image of the two side views
Figure 5.3: Point cloud information of the side view of the water pump: (a) point cloud information, (b) solid mesh model
Figure 5.4: CAD model images of the ramp: (a) left/front view, (b) right side view, (c) ramp measurements. This picture shows the measurements of the ramp
Figure 5.5: 3D ramp CAD models: (a) left/front side view, (b) back/left side view, (c) bottom view, (d) right/top/front side view
Figure 5.6: Reconstructed bottom surface of the water pump: (a) reconstructed top view, (b) side view reconstruction
Figure 5.7: Reconstructed views of the water pump: (a) right view, (b) left view, (c) bottom view, (d) back view
Figure 5.8: Cleaned point cloud data of the side views of the water pump taken with the IVP Range Scanner
Figure 5.9: Point cloud model of the water pump: (a) CAD model showing height variations, (b) top view, (c) bottom view
Figure 5.10: Pulley arm: (a) photo of the side view profile of the pulley arm, (b) point cloud CAD model of the top side, (c) point cloud CAD model of the bottom side
Figure 5.11: Textured ramp images using the Genex System
Figure 5.12: Genex System reconstructed views of the ramp: (a) left view of the ramp, (b) front view of the ramp, (c) back/left view of the ramp, (d) front/right view of the ramp
Figure 5.13: Complete final 3D model of the ramp: (a) front view, (b) back view, (c) side view, (d) bottom view
Figure 5.14: Water pump placed in box for neutral background
Figure 5.15: Textured water pump images using the Genex System
Figure 5.16: Genex System reconstructed bottom surface of the water pump: (a) point cloud data, (b) solid mesh model, (c) unsmoothed textured surface, (d) smoothed textured surface
Figure 5.17: Genex System reconstructed top surface of the water pump: (a) top view, (b) view of the bottom surface
Figure 5.18: Genex System reconstructed views of the water pump: (a) back view, (b) right/back side view, (c) front view, (d) left view
Figure 5.19: (a) and (b) Water pump photo images; (c–f) final 3D CAD model views of the water pump: (c) right/angled view, (d) left/angled view, (e) front/angled view, (f) top/left angled view
Figure 5.20: Standard deviation calculations of the IVP ramp overlapped with the Genex ramp
Figure 5.21: Standard deviation of the IVP ramp overlapped with the Genex ramp model
Figure 5.22: Simulated ramp: (a) point cloud data, (b) solid mesh model, (c) smooth reconstructed mesh model
LIST OF TABLES

Table 1
Table 2
1 INTRODUCTION

Engineering is a growing field that continues to evolve to suit the rapid changes of the 21st century. Engineering is described as "the application of scientific and mathematical principles to practical ends such as the design, manufacture, and operation of efficient and economical structures, machines, processes, and systems". When we think of engineering, we usually think of its general meaning: designing a product from a blueprint or plan. This type of engineering is more commonly known as forward engineering.

An emerging engineering concept is utilizing forward engineering in a reverse way. This method is more commonly referred to as reverse engineering. Reverse engineering is the opposite of forward engineering: it takes an existing product and creates a CAD model of it for modification or reproduction of the product's design. It can also be defined as the process of duplicating an existing component by capturing the component's physical dimensions. Reverse engineering is usually undertaken in order to redesign a system for better maintainability or to produce a copy of a system without access to the design from which it was originally produced.

Engineering fields are constantly improving upon current designs and methods to make life simple and easy. "Simple" means that you do not use up valuable time in assembly or in doing a specific task; "easy" refers to how many times you have to repeat the process or task. In this sense, simple and easy can be directly related to fast and accurate.

Computer vision is a computer process concerned with artificial intelligence and image processing of real-world images. Typically, computer vision requires a combination of low-level image processing to enhance image quality (e.g., remove noise, increase contrast) and higher-level pattern recognition and image understanding to recognize features present in the image. Three-dimensional (3D) computer vision uses two-dimensional (2D) images to generate a 3D model of a scene or object. With this capability, computer vision applications have been tailored to compete in the area of reverse engineering.

There has been a mandatory need for 3D reconstruction of scenes and objects in the manufacturing industry, the medical industry, military branches, and research facilities. The manufacturing industry utilizes reverse engineering for its fast rapid-prototyping abilities and the accuracy associated with the production of new parts. This fast prototyping is done through the use of CAD model designs for inspection purposes. Military branches also utilize reverse engineering to perform inspection tasks that are associated with safety.

One of our laboratory's current focuses is reverse engineering, or 3D reconstruction of objects and scenes from real-world data. The goal of reverse engineering an object is to successfully generate a 3D CAD model of the object that can be used for future modeling of parts where no CAD model exists. We want to generate clean, smooth 3D models which are free of noise and holes. This requires a strong, robust image acquisition system that can acquire data with a high level of accuracy in a sufficient time frame.

All 3D-based machine vision systems ultimately acquire and operate on image data. These systems incorporate the computer power to manage, process, and analyze the acquired data and to make decisions relating the data to the application without operator intervention. This characterizes what is meant by the term "3D-based machine vision". There are many different approaches to acquiring 3D data of objects of various structural shapes. Acquisition can be based on collecting the Z-axis data using linear arrays, point detectors, laser radar, laser scanning techniques, or other approaches.

Our system uses range and intensity images of objects as input. The output is transformed data that is represented as 3D reconstructions of geometric primitives. There are several building blocks, or steps, which determine the process of building a complete 3D model from range and intensity data. These steps, listed in Figure 1.1, show how range image data is acquired, transformed, and generated; the steps often overlap during the process of each stage. This flowchart can be characterized as a generic basic principle for reverse engineering.

1. Data Capture
2. Data Segmentation
3. 3D CAD Model

Figure 1.1: Flowchart for basic transformation phases of reverse engineering

Traditional practices use CMMs, which are coordinate measuring machines that have a touch probe to model the surface for inspection. Traditional processes for reverse engineering of objects and structures from 3D datasets have been initial data (e.g., triangulated models) and parametric surface (e.g., quadratic surface) driven. These approaches have been successful for simple parts, but have resulted in reconstructions that contain errors when dealing with more complex structures. Typical errors arise from noisy data or missing data from the surface of the part; other errors can consist of incorrect relative positions of the object. Today's industries are moving toward better accuracy and faster inspection times, and are looking for a method to improve upon these errors; they have migrated toward a fast, efficient way of modeling parts for inspection purposes. This can be improved through the integration of laser range scanning. When implementing a non-contact measurement solution, the end user has a large array of commercial systems to select from.

To emphasize the purpose of this project, we have chosen two different commercial systems with two different approaches to modeling 3D objects using vision-based technology. The two techniques consist of laser lighting and structured lighting. The first system is the IVP Laser Range Scanning System and the second is the Genex 3D FaceCam System. Although both systems' primary focus is the reconstruction of real-world objects and scenes, the steps taken may vary for each system. Although the structured lighting system is not designed for reverse engineering use, we will compare the modeling aspects of this system for the reverse engineering of automotive parts against the laser range system. While the approaches are similar, we will investigate the limitations of both systems. Our approaches use data acquisition systems that are fairly robust to noise and yield high-accuracy measurements.

Figure 1.2 describes the data flow of our approach applied to both systems; it is a more detailed description of Figure 1.1. The blue area describes the data capturing section, the yellow and orange highlight the data pre- and post-processing steps, and the final outcome is a 3D CAD model. In the data segmentation stage, several steps are taken to generate noise-free, smooth models of the part. In data reduction, data such as noise, outliers, or erroneous background information is eliminated. Outliers are false data points that are captured during acquisition. Surface smoothing and multi-view registration are included in data integration. Surface smoothing is an additional feature to eliminate noisy data and make the surface of the object more uniform in texture; this can be performed before and/or after several views of the part are merged. After all the steps are complete, a final 3D CAD model is generated. Data segmentation and integration are discussed in depth in Section 3.
1. Data Capture: Acquisition of Range Images
2. Data Segmentation: Pre-processing, Data Reduction
3. Data Integration: Post-processing, Multi-view Registration, Surface Smoothing, Noise Filtering, Hole Filling, Next Best View Plan
4. 3D CAD Model

Figure 1.2: The sequence of steps required for the reconstruction of a model from multiple overlapping scans

The remainder of this paper is organized as follows. Section 2 discusses the related merits and methods of reverse engineering techniques. Section 3 discusses surface reconstruction for 3D models. Following is Section 4, which discusses the applications and techniques of our implemented systems. Finally, we conclude the paper with the implementation of our procedure for the experimental results; a comparison and summary of the limitations are presented in the conclusion section of the paper.
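The data reduction and surface smoothing stages of Figure 1.2 can be sketched in a few lines of code. The following is a minimal illustration only; the function names, the toy scan, and the distance threshold are hypothetical and are not part of the IVP or Genex software. Data reduction drops erroneous background points far from the part, and surface smoothing averages each point with its nearest neighbors.

```python
import math

def reduce_data(points, center, max_dist):
    """Data reduction: drop outliers farther than max_dist from the part center."""
    return [p for p in points if math.dist(p, center) <= max_dist]

def smooth(points, k=3):
    """Surface smoothing: replace each point by the mean of its k nearest
    neighbors (including itself)."""
    k = min(k, len(points))
    out = []
    for p in points:
        nearest = sorted(points, key=lambda q: math.dist(p, q))[:k]
        out.append(tuple(sum(c) / k for c in zip(*nearest)))
    return out

# A toy scan: points near the origin plus one erroneous background point.
scan = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.1), (0.0, 1.0, -0.1), (50.0, 50.0, 50.0)]
cleaned = reduce_data(scan, center=(0.0, 0.0, 0.0), max_dist=5.0)
model = smooth(cleaned)
print(len(scan), len(cleaned))  # the background outlier is removed
```

In a real pipeline the neighbor search would use a spatial index rather than a full sort, but the order of the stages is the point being illustrated here.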
2 ACQUISITION CLASSIFICATIONS AND RELATED WORKS

An important part of reverse engineering is data acquisition. Data acquisition systems are constrained by physical considerations to acquire data from a limited region of an object's surface. Therefore, multiple scans of the surface must be taken to completely measure a part. Figure 2.1 classifies the types of techniques used for acquiring 3D data into contact and non-contact methods. After reviewing the most important measuring techniques, the related merits and difficulties associated with these methods are discussed.

Figure 2.1: Classification of data acquisition techniques used in contact and non-contact approaches for reverse engineering systems. Contact methods comprise tactile devices (CMMs, robotic arms) and magnetic methods; non-contact methods comprise acoustic and optical techniques (laser triangulation, time-of-flight, stereo analysis, structured lighting, interferometers).
2.1 Contact Data Acquisition Techniques

There are many different methods for acquiring shape data. Tactile methods represent a popular approach to shape capture, as shown in Figure 2.1. The two most commonly known forms are Coordinate Measuring Machines (CMMs) and mechanical or robotic arms with a touch-probe sensing device. CMMs are often used when high precision is required. These machines can be fitted with a touch probe and used as a tactile measuring system. A CMM is a contact-type method that is NC-driven and can efficiently program the sampling of points for predefined features. A 3-axis milling machine is an example of a mechanical or robotic arm. Such machines can be programmed to follow paths along a surface and collect very accurate, nearly noise-free data, but, like the CMM, they are not very effective for concave surfaces. Many other robotic devices are also used because of their low noise and desirable accuracy.

There are disadvantages when using a CMM or robotic arm to model the surfaces of parts. The contact a CMM makes with the surface of an object can damage the object: if the surface texture is soft, holes can be inflicted on the surface. The flexibility of parts makes it very difficult to contact the surface with a touch probe without creating an indentation that detracts from the accuracy of the measurements, and the part might have indentations that are too small for the probe to reach. CMMs also show difficulties in measuring parts with free-form surfaces. A part can be inspected using a CMM only on a sampling basis, and geometric complexity increases the number of points required for accurate measurements; this is compounded when the part has free-form surfaces. As a result, CMMs are the slowest method for data acquisition: the time needed to capture points one by one can range from days or sometimes weeks for complicated parts. The number one quality of an inspection device should be the ability to obtain large amounts of point data from the part's surface quickly for complete inspection. There are also external factors that affect the accuracy of a CMM; the main ones are temperature, vibration, and humidity.

Sahoo and Menq use tactile systems for sensing complex sculptured surfaces. Xiong gives an in-depth discussion of measurement and profile error in tactile measurement. Butler provides a comparison of tactile methods and their performance.
2.2 Reverse Engineering Applying Contact Techniques

Reverse engineering is a growing industrial market for manufacturing and development. Various individuals and groups have developed new techniques, which have been improvements upon the existing techniques available.

The first technique and method we visit is that of Thompson et al. Their research describes a prototype of a reverse engineering system which uses manufacturing features as geometric primitives for mechanical parts, rather than triangulated meshes or parametric surface patches. They have identified a feature-based approach that they state produces highly accurate models, even when the original 3D sensor data has substantial errors. Their main innovation was to use the features to fit scanned data. New CAD models were generated using the REFAB (Reverse Engineering - FeAture-Based) system. The system designs a model composed of mechanical features, which can be defined by users, from a set of 3D surface points; a non-contact digitizer measured the surface points. It is stated that the resulting models can be directly imported into feature-based CAD systems without loss of the semantics and topological information inherent in feature-based representations. To quantitatively evaluate the accuracy of models produced with the feature-based modeling approach, parts were used only if the original CAD model existed. The part was machined out of aluminum using a 3-axis NC mill, and the geometric differences between the original and the newly generated model were computed. Further details and results from their work can be read in the reference.

The second technique we visit is that of Yu Zhang. This research focuses on the engineering application of reverse engineering and claims to have advantages over the current practice of ordinary CMMs. The basic principles of reverse engineering were applied to the design and manufacturing of the die of a diesel engine, following a process from object digitization and CAD model reconstruction through to NC machining. The system employed is built with a coordinate measuring machine and CAD/CAM software. The die's geometric shape is measured and data is acquired using a CMM in conjunction with KUM measurement software that has a linear scan mode. By scanning the physical object, the measurement data is acquired. The number of points measured is determined automatically by the CMM according to the curvature change of the surface, measured at each tactile point; the CMM used can measure about 1600 points for each scanned curve. Usually a machining tracing process results in a structured sequence with a large number of points and a line structure. First, the format of the measured data is transformed into an acceptable format for the software used; the result of Zhang's system is a self-developed program to realize this transformation of the measured data from the CMM and KUM. Then the data is filtered and processed in a visualized way. In the feature-based approach, the registration of two different point clouds is performed by matching three points to three points, three spheres to three spheres, or three planes to three planes. The processed data is directly used for the creation of the die CAD model; it is assumed at this stage that the models have already been generated into CAD model form. After the CAD model of the die is complete, the NC machining process planning can generate the locations for cutting in the manufacturing application. The die is finally machined by the NC machine tool using the created CAD model.

2.3 Non-Contact Data Acquisition Techniques

Non-contact methods use light, sound, or magnetic fields to acquire shape from objects. Each method has strengths and weaknesses that require the data acquisition system to be carefully selected for the shape capture functionality desired. This section discusses the various principles of each method.

Optical methods of shape capture are probably the broadest category and are growing in popularity over contact methods because they have relatively fast acquisition rates. There are five important categories of optical methods: laser triangulation, time-of-flight, interferometry, structured lighting, and stereo analysis.

Laser triangulation is a method which uses the locations of, and angles between, light sources and photosensing devices to deduce position. A high-energy light source is focused and projected at a pre-specified angle at the surface of interest. Various high-energy light sources are used, but lasers are the most common. A photosensitive device, usually a video camera, senses the reflection off the surface, and then, by using geometric triangulation from the known angle and distances, the position of a surface point relative to a reference plane can be calculated. The light source and the camera can be mounted on a traveling platform which then produces multiple scans of the surface; these scans are therefore relative measurements of the surface of interest. The accuracy is determined by the resolution of the photosensitive device and the distance between the surface and the scanner. Triangulation can acquire data at very fast rates. Moss et al. present a detailed discussion of a classic laser triangulation system used to capture shape data from facial surfaces. Motavalli et al. present a reverse engineering strategy using laser triangulation. The use of laser triangulation on a coordinate measuring machine is presented by Modjarred. These references give a broad survey of methods, approaches, and limitations of triangulation.
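The triangulation geometry described above can be written down directly. The following is a generic sketch of the principle, not the IVP scanner's actual calibration; the baseline length and angles are made-up values. With the laser source and the camera separated by a known baseline, and both viewing angles measured from that baseline, the law of sines fixes the position of the laser spot.

```python
import math

def triangulate(baseline, laser_angle, camera_angle):
    """Locate the laser spot from the triangle formed by the laser source
    (at x = 0), the camera (at x = baseline), and the spot.
    Both angles are measured from the baseline, in radians."""
    # Law of sines: the side opposite the camera angle is the
    # laser-to-spot distance.
    d = baseline * math.sin(camera_angle) / math.sin(laser_angle + camera_angle)
    # Resolve that distance along the laser beam direction.
    return d * math.cos(laser_angle), d * math.sin(laser_angle)

# Symmetric 45-degree geometry: the spot lies midway between laser and
# camera, at a height of half the baseline.
x, z = triangulate(100.0, math.radians(45.0), math.radians(45.0))
```

Note how the accuracy argument in the text falls out of this formula: a small error in the sensed camera angle shifts the computed intersection, and the shift grows with the surface-to-scanner distance.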
Measuring distance by sensing the time-of-flight of emitted light beams is the way a ranging system works: in laser range finders, the time-of-flight is used to determine the distance traveled. Practical methods are usually based on lasers and pulsating beams. Jarvis presents an in-depth article on time-of-flight range finders giving detailed results and analysis. Moring et al. describe a range finder based on time-of-flight calculations; the article presents some information on accuracy and performance.

Interferometer methods measure distance in terms of wavelengths using interference patterns. In principle, a high-energy light source is used to provide both a beam of monochromatic light to probe the object and a reference beam for comparison with the reflected light. This can be a very accurate method of measurement, since visible light has a wavelength of the order of hundreds of nanometers, while in most reverse engineering applications distances are in the centimeter to meter range. Other parts of the electromagnetic spectrum could also be used.

Structured lighting involves projecting patterns of light upon a surface of interest and capturing an image of the resulting pattern as reflected by the surface. The image must then be analyzed to determine the coordinates of data points on the surface. Structured lighting can acquire large amounts of data with a single image frame, but the analysis to determine positions of data can be rather complex. Will and Pennington use grids projected onto the surface of objects to determine point locations. Wang and Aggarwal use a similar approach but use stripes of light and multiple images. A popular method of structured lighting is shadow Moire, where an interference pattern is projected onto a surface producing lighted contour lines. These contour lines are captured in an image and analyzed to determine the distances between the lines. This distance is proportional to the height of the surface at the point of interest, and so the coordinates of surface points can be deduced.

The final optical shape capture method of interest is stereo image analysis. This is similar to structured lighting methods in that frames are analyzed to determine coordinate data; in stereo analysis, however, the relative locations of landmarks in multiple images are related to position. This method is often referred to as a passive method since no structured lighting is used; active methods are distinguished from passive methods in that artificial light is used in the acquisition of data. In practice, stereo pairs are used to provide enough information to determine height and coordinate position. Correlation of image pairs and of landmarks within the images are big difficulties with this method, and this is why active methods are preferred. Instead of relying on projected patterns, intensity patterns within images can be used to determine coordinate information. Another stereo image analysis approach deals with lighting models, where an image is compared to a 3D model: the model is modified until the shaded images match the real images of the object of interest.
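The time-of-flight relation discussed above is simple to state in code. This is a minimal generic sketch, not taken from any of the systems cited; the pulse timing values are invented for illustration. The measured round-trip time is halved and multiplied by the speed of light, and the inverse relation shows why short ranges demand very fine timing: one centimeter of range corresponds to only about 67 picoseconds of round trip.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_s):
    """One-way distance: the pulse travels out and back, so halve the path."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

def round_trip_time(distance_m):
    """Round-trip time a pulse needs for a surface at the given distance."""
    return 2.0 * distance_m / SPEED_OF_LIGHT

d = tof_distance(6.67e-9)    # a ~6.67 ns round trip is roughly one meter
dt = round_trip_time(0.01)   # 1 cm of range is ~6.7e-11 s of round trip
```

The same arithmetic applies to the acoustic methods described later in this chapter, with the speed of sound substituted for the speed of light, which is why sonar timing requirements are far less severe.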
The final types of data acquisition methods we will examine are acoustic, magnetic, and hybrid.

Acoustic methods have been used for decades for distance measuring: a sound source is reflected off a surface, and the distance between the source and the surface is determined knowing the speed of sound. Sonar is used extensively for this purpose, and automatic focus cameras often use acoustic methods to determine range. The method is essentially the same as time-of-flight with light. Acoustic interference or noise is often a problem, as is determining focused point locations.

Magnetic field measurement involves sensing the strength of a magnetic field source. Magnetic touch probes are used which sense the location and orientation of a stylus within the field, where the stylus touches the surface. A trigger allows the user to record specific point data only once the stylus is positioned at a point of interest. Magnetic resonance is used in applications similar to ultrasound, when internal material properties are to be measured: MRI (magnetic resonance imaging) activates atoms in the material to be measured and then measures the response. Dynamic imaging is used extensively in ultrasound devices, where a transducer can sweep a cross-section through an object to capture material data internal to the object.

Hybrid modeling systems are a combination of contact and non-contact systems. The first type usually consists of a coordinate measuring machine with integrated laser-based technology. The second may consist of some other form of non-contact technique, such as software and laser-based technology integrated as one system. They can also be a combination of NC coding and laser scanning techniques. Hybrid-based applications will be discussed in the next section.

To conclude this section: the speed with which the sensing phenomenon operates, as well as the speed of the sensor device, determines the speed of the data acquisition. All measuring methods must interact with the surface or internal material using some phenomenon, whether light, sound, magnetism, or physical surface contact. The sensor type selected also determines the amount of analysis needed to compute the measured data, as well as the accuracy.

2.4 Reverse Engineering Applying Non-Contact and Hybrid Techniques

The first non-contact technique that we explore is that of Fan and Tsai. They present a measurement system that combines two CCD cameras, a line laser, and a three-axis motion stage. They formed an optical non-contact scanning setup that works with the mathematical method of direct shape error analysis for engineering purposes, so that the profile measurement of free-form objects can be analyzed. Matching the images of the free-form surfaces with sufficient efficiency and accuracy is the final result.
Fan and Tsai's research adopted the bicubic uniform B-spline interpolation approach for the shape error analysis method. This method was used to describe the first set of measurement points and to generate reconstructed multiple patches of the surface. Based on this principle, they developed an algorithm called the direct method, or DSEAM (Direct Shape Error Analysis Method); this algorithm is an adopted variation of the shape error algorithm. The main function of the shape error analysis is to sum the squared nearest distances to the referenced surface, and they developed a computer program to analyze the shape error with respect to that reference surface. The B-spline surface construction and the DFPM (Davidon-Fletcher-Powell Method) algorithm are the foundation of their approach; refer to Fan and Tsai for detailed information on the DFPM algorithm. They report the rigid body transformation from the optimal shape error results and the optimal parameters obtained using DSEAM, and they have reported a reduction in the shape error from their technique compared to the initial shape error of the objects. The results of their approach are demonstrated on a free-form surface and a car rear-view mirror case. Further reading on their results can be found in their paper.

The first hybrid-based technique reviewed is that explored by Jim Clark. A hybrid triangulation-based hand-held system integrated with a coordinate measuring machine is used for this approach. He demonstrates that non-contact techniques, in conjunction with advanced surfacing and inspection software, yield sufficient results for the mechanical design process. The technique focuses on modeling complex and free-form shapes of mechanical objects by comparing contact and non-contact methods for digitizing the surface. The results from Clark are those modeled using a water pump: the water pump was scanned using both a contact and a non-contact system, and the results were compared based on the surface quality and the point cloud data obtained. The effects of ambient lighting on non-contact systems are also discussed. Whether or not a system can measure under ambient lighting depends on the color of light projected onto the object. Clark summarizes by writing that if a system projects laser light, then the unwanted frequencies can be filtered out; if the system projects white light, then no particular frequencies can be blocked out, because they might be carrying the information required to measure the object. Therefore, white light area-based systems will be more limited by ambient lighting than laser-based systems. He also discusses some of the issues regarding the implementation of such systems for manufacturing purposes. Further reading of his work can be found in his paper.

Chow et al. developed an integrated laser-based reverse engineering and CAM machining system called RECSI (Reverse Engineering and CAM System Integration). They evaluate the feasibility of using concurrent engineering and reverse engineering methods with the data from laser scanning to remanufacture complex geometrical parts.
The first phase of their research demonstrates that laser scanning and CAD model reconstruction can duplicate aircraft structural components accurately and efficiently within a given tolerance. The samples were used to evaluate the accuracy and efficiency of their concurrent reverse engineering system; their results compare the original parts with the duplicated parts, and the comparison table of the results and the time required to complete each step can be viewed in their paper. The second phase is the actual development of the system. The goal of the system is to show that an integrated reverse engineering and CAM machining system can make the remanufacturing process more automatic and efficient. Chow et al. developed and implemented a process planning system that interfaces with a tightly coupled CAD modeling system and CAM tooling path; the system utilizes NC code generated from the software. They reported that the errors of the overall integrated system were close to the errors calculated in the reverse engineering feasibility study, demonstrating the accuracy and efficiency of their laser-based reverse engineering system.

To summarize this section: Fan and Tsai implemented a non-contact system that utilizes CCD cameras and laser triangulation for reverse engineering; Clark implemented a non-contact system that works in conjunction with surfacing and inspection software; and Chow et al. implemented an integrated laser-based reverse engineering and machining system.

2.5 General Constraints of Data Acquisition Techniques

There are many practical problems with acquiring useable data, the major ones being:

a. Calibration
b. Accuracy
c. Accessibility
d. Occlusion
e. Fixture (placement)
f. Multiple views
g. Noise and incomplete data
h. Statistical distributions of parts
i. Surface finish

Calibration is an essential part of setting up and operating a position-measuring device. Any sensing must be calibrated so as to, first, accurately determine parameters such as camera positions and orientations, and second, to model and allow for, as accurately as possible, systematic sources of error. Systematic sensing errors can occur through lens distortions, non-linear electronics in cameras, and similar sources. All methods of data acquisition require accurate calibration.

Most of the papers cited present some discussion of accuracy ranges for the various types of scanners. Optical scanners' accuracies typically depend largely on the resolution of the video system used; the distance from the measured surface and the accuracy of the moving parts of the scanning system all contribute to the overall measurement error. Moreover, the data close to sharp edges is fairly unreliable. When reverse engineering methods attempt to reproduce a given shape, the tolerance distribution of the scanned part must also be considered.

Accessibility is the issue of scanning data that is not easily acquired due to the configuration or topology of the part. Through holes are typical examples of inaccessible surfaces, and the geometry of the part can leave many surfaces difficult to scan. Multiple scanning devices are one approach to obviate this problem. This usually requires multiple scans, but can also make some data impossible to acquire with certain methods.

Occlusion is the blocking of the scanning medium due to shadowing or obstruction. This is primarily a problem with optical scanners, though acoustic and magnetic scanners may also have this problem. As well as self-occlusion, occlusion may arise due to fixtures: typically, parts must be clamped before scanning, and the geometry of the fixture used then becomes a part of the scan data. Elimination of fixture data is difficult and often requires multiple views. Multiple views, in turn, introduce errors into the acquired data because of registration problems (see more details later).

Noise can be introduced in a multitude of ways: from extraneous vibrations, specular reflections, etc. Noise elimination in data samples is a difficult issue, but noise filtering is often an unavoidable step in reverse engineering, and there are many different filtering approaches that can be used. An important question is whether to eliminate the noise before, after, or during the model building stage. There are times when the noise should not be eliminated at all, because it might be carrying information required to measure the object. Note, though, that filtering also destroys the "sharpness" of the data, which in some cases may be desirable but in other cases may lead to serious problems in identifying features: typically, sharp edges disappear and are replaced by smooth blends.

Finally, there are situations where only parts of a certain surface can be measured, but we need to reconstruct the whole surface from just the visible parts. A similar problem is the restoration of missing data, where there are missing parts or parts obscured by other elements. This is partly necessary due to the above-mentioned inaccessibility and occlusion problems. Further ideas on surface extensions, intersections, and patching holes are given in the last part of the paper.

Statistical distribution of parts deals with the fact that any given part, once scanned, only represents one sample in a distributed population. This gives rise to multiple part scans and the averaging of the resulting data. However, it may be somewhat impractical to attempt to sample many parts from a population, and indeed, often only one is available.

The final issue we bring up is the surface finish of the part being measured. Smoothness and material coatings can dramatically affect the data acquisition process, because of the nature of optical and even tactile scanning. Tactile or optical methods will produce more noise with a rough surface than a smooth one, and reflective coatings can also affect optical methods.

Imagine an ideal scanner: the object is "floating" in 3D space, so it is accessible from all directions. The data is captured in one coordinate system with high accuracy, with no need for noise filtering and registration. Possibly, the measurement is adaptive, i.e., more points are collected at highly curved surface portions. Unfortunately, such a device does not exist at present. But, despite the practical problems discussed, it is possible to obtain large amounts of surface data in reasonably short periods of time even today using the methods described. Once the measured data is acquired, the process of recognition and model building can begin. The imperfect nature of the data, however, particularly its inaccuracy and incompleteness, makes these steps fairly difficult, as will be seen in the following sections.
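The trade-off discussed above, where filtering removes noise but also smears sharp edges into smooth blends, can be seen with even the simplest filter. The following is an illustrative sketch (not from any cited system) applying a moving average to a 1D height profile containing an ideal step edge.

```python
# Illustrative sketch: a moving-average filter applied to a 1D height
# profile with a sharp step. Noise would be reduced, but the step edge
# is smeared into a ramp -- the loss of "sharpness" described above.

def moving_average(profile, window=3):
    """Average each sample with its neighbours (clamped at the ends)."""
    half = window // 2
    out = []
    for i in range(len(profile)):
        lo, hi = max(0, i - half), min(len(profile), i + half + 1)
        out.append(sum(profile[lo:hi]) / (hi - lo))
    return out

step = [0.0] * 5 + [1.0] * 5       # an ideal sharp edge
smoothed = moving_average(step)
print(smoothed[4], smoothed[5])    # the edge values are now blended
```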
Chapter 3: Surface Reconstruction

3 SURFACE RECONSTRUCTION

Obtaining a surface representation of objects and scenes has always been one of the most challenging and fundamental problems of 3D computer vision. In this section we will discuss the procedures for generating a successful 3D model of an object from single and multiple views. We will also discuss various surface reconstruction algorithms that have been developed and employed.

3.1 Single Views and Data Segmentation

An iterative process for 3D reconstruction of surfaces in static environments is defined by the following steps (see also Figure 1.2, which depicts this process):

1. Acquiring range images of the part
2. Pre-processing acquired data (data segmentation)
3. Data post-processing (data integration)
4. Final 3D CAD model

Range image acquisition is the first step of the process. In image acquisition, the CCD camera captures the scene, which shows the intersection between the laser plane and the object, which is a line. The result is a grey-scale image. The image acquisition process yields a number of selected range images; these are depth range images. Figure 3.1 shows a sequence of range images obtained from the IVP Range Scanner. The black areas in between the blades of the water pump show occluded areas.

The structured lighting system that we use for our project generates a different set of images when performing the image acquisition process. The images obtained from this system are color range images, which also show the texture of the part or object that is being scanned. Figure 3.2 below is an example of the type of images that are generated from the structured lighting system used in this project.

Although the two systems produce different types of images, they are all part of the acquisition step of the process. The first scans from both systems are 2D representations of the real object, and although both systems generate 2D views of the part, these are still considered single views: one view cannot complete the reconstruction of the object. The range images that are generated are of the view angles at which the user has positioned the part for capture. The purpose of taking multiple views is to eliminate the missing data from the part, such as the water pump.

Figure 3.1: (a) Original image of the water pump; (b)-(f) sequenced single view range images of the bottom surface of the water pump generated using our laser range scanner.
Pre-processing the range images is the step following data collection. Pre-processing is more commonly referred to as cleaning of the collected data; it includes reducing erroneous data, filtering noise, and filling holes that may have occurred as a result of occlusions. Pre-processing is applied to the single views individually before they are integrated and registered together. The additional steps of the process are described in the next few sections.

Figure 3.2: Textured single view range images generated from the structured lighting system: (a) left view of the ramp; (b) front view of the ramp; (c) back/left view of the ramp; (d) front/right view of the ramp.

3.2 Multiple View Integration and Registration

For all objects and parts to be scanned, we require a geometric model of the whole object's surface. Ideally, as stated above, we would have the part "floating" in 3D space, so that the scanner could move around the object and capture all of its sides in a single coordinate system.
In practice, the object will have to rest on some surface, so part of it is inaccessible to the scanner. Furthermore, if the scanner is fixed in position (in a fixed position with a fixed orientation), it will be able to capture data from an even more limited region of the object's surface at any one time. For this reason, it will generally be necessary to combine multiple views taken with the object placed in different orientations in front of the scanner.

Before scanning, it is important to decide how many scans will be taken of the part or object. This can be determined based on the size and the material makeup of the part; the more scans that are taken of the part, the longer the whole scanning process will take. When reconstructing objects we want to have overlapping views of the object, and the overlapping views should consist of the same area of the object being scanned. The main purpose of overlapping the different views is to overcome occlusion in the object by matching various similar features on the object. Also, some views may have more detail while others may have lower resolution.

Each range image is a dense sampling of the 3D geometry of the surface from a particular viewpoint. The goal of collecting multiple views is to take these sets of range images and register them into a common coordinate system so that they can be integrated into a single 3D model. The individual range images must be aligned, or registered. We then want to take these sets of registered range images of the entire surface of the object and from them produce a corresponding set of parametric surface patches.

Registration may be performed by accurate tracking. In high-end systems, the scanner may be attached to a coordinate measurement machine that tracks its position and orientation with a high degree of accuracy; passive mechanical arms as well as robots have been used. Approximate position and orientation of the scanner can be tracked with fairly inexpensive hardware in most situations, and can be used as a starting point to avoid searching a large parameter space. The most general formulation of the problem, which makes no assumptions on the type of features (in the range and/or associated intensity images) or on an initial approximate registration, is extremely hard to solve. Automatic feature matching for computing the initial alignments is an active area of research (recent work includes [3, 6, 9-12, 24, 27, 35]).

In our system, the registration is a fairly easy process. Special software is used to match similar features and points on the different surfaces scanned; this is commonly referred to as feature matching or extraction. Three or more points are matched based on similar corresponding features and feature locations. After the points have been selected for matching, they are matched based on the distance computed using the ICP (Iterative Closest Point) algorithm in the software. This is the second step of the post-processing stage for 3D reconstructions. Figure 3.3 is an example of how our software registers two sets of range image views: Figure 3.3a contains one single view of the water pump, Figure 3.3b is another single view, and Figure 3.3c is the registration of both views overlapping. The overlapping red region shows how they are aligned in the same shell.
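The core of an ICP step can be sketched compactly. The following is an illustrative 2D point-to-point version, not the actual implementation inside our registration software: pair each source point with its nearest target point, then recover the best rigid rotation and translation with the SVD-based Procrustes solution, and repeat until the alignment converges.

```python
# Illustrative sketch of one point-to-point ICP iteration (2D):
# 1) nearest-neighbour correspondences, 2) SVD-based rigid alignment.
import numpy as np

def icp_step(src, dst):
    # Brute-force nearest-neighbour correspondences.
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # Best rigid transform aligning src to its matched points.
    sc, mc = src.mean(0), matched.mean(0)
    H = (src - sc).T @ (matched - mc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mc - R @ sc
    return src @ R.T + t

target = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
source = target + np.array([0.3, -0.2])   # a translated copy of the target
aligned = icp_step(source, target)
print(np.abs(aligned - target).max())     # near zero after one step
```

With a small offset and correct correspondences, a single step recovers the alignment exactly; in general, several iterations are needed, and a reasonable initial alignment (such as the user-selected feature matches described above) keeps the nearest-neighbour pairing from converging to a wrong local minimum.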
Figure 3.3: Point cloud information of the side views of the water pump: (a) and (b) side view range images; (c) registered range image of the two side views.

3.3 Post-processing Registered Images

After all the views of the object have been obtained, they can be merged together using the various post-processing options offered in commercial software or by generated programs. The registration of the views may take several tries to achieve the optimal alignment of the images that is desired. After the two desired views are registered to each other at the ideal feature locations, they are ready for post-processing. As mentioned above in Figure 1.2, post-processing of range images includes surface smoothing and multiple view registration.
In our system, we use special software to perform the post-processing of the individual views. Rapidform2004 is the 3D modeling software that we use to generate the complete 3D CAD models of our parts. When performing the post-processing step, the registration process is made permanent. One of the built-in functions in the software merges the different range image views into one united shell; this is the step before completing the 3D reconstruction of an object or part. We are allotted three options: surface merging, volumetric merging, and point cloud merging.

The surface merging option merges shells of range views that have been aligned by the registration process into one united shell. Overlapping shell regions between the two separate shells are removed, and neighboring boundaries are connected together with newly added polygons. This polygon-merging tool helps to merge scanned data with many holes, messy boundaries, or badly aligned data.

The volumetric merging option merges multiple shells into a single shell by allocating their geometry information to a reference voxel model with a volumetric method. A voxel is a word created from two words (volume and pixel) to describe the 3D space of a pixel-based image; it is a volume element of rectangular shape of the subject being imaged. Volume-based merging is useful when surface-based meshing creates poor merge results.

More generally, there are two techniques commonly used for this merging process: the first is surface-based methods and the second is volumetric methods. Surface-based methods create the surface by locally parameterizing the surface and connecting each point to its neighbors by local operations. For example, Turk and Levoy's zippering approach works by triangulating all the range scans individually. Redundant overlapping triangles are then eroded from the partial meshes, and the intersecting regions are locally re-triangulated and trimmed to create one seamless surface. The vertex positions are then readjusted to reduce error, based on a weighted average of distances from sample points on the individual range image scans. Other methods make use of the partial connectivity that is implicit in the range images.

In volumetric methods, line-of-sight error compensation is done by computing a scalar field that approximates the signed distance to the true surface. Volumetric methods are well suited to producing watertight models and are also useful for very large datasets. Solid modeling evolution from a series of range images is demonstrated by Reed and Allen: with the data obtained from each range image, the range images are used to carve out a spatial volume, carving away the solid that lies between the scanner and each sampled data point. In other words, an object definition can be obtained without holes in the surface.
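The volumetric-merging idea above can be sketched in one dimension. This is an illustrative stand-in, not Rapidform's actual algorithm: each scan contributes a signed distance to the surface at every voxel, the contributions are combined by a weighted average, and the merged surface lies where the averaged field crosses zero.

```python
# Illustrative 1D sketch of volumetric merging: average the per-voxel
# signed distances from several scans; the merged surface sits at the
# zero crossing of the averaged field. Real systems do this on 3D grids.

def merge_signed_distances(fields, weights):
    """Weighted per-voxel average of several signed-distance samplings."""
    w_total = sum(weights)
    merged = []
    for samples in zip(*fields):
        merged.append(sum(w * s for w, s in zip(weights, samples)) / w_total)
    return merged

# Two hypothetical scans of the same surface on a 1D voxel grid:
scan_a = [-2.0, -1.0, 0.0, 1.0, 2.0]    # surface exactly at voxel 2
scan_b = [-2.2, -1.2, -0.2, 0.8, 1.8]   # slightly offset estimate
merged = merge_signed_distances([scan_a, scan_b], weights=[1.0, 1.0])
print(merged)
```

Weighting lets more reliable measurements (e.g. those taken closer to head-on along the line of sight) dominate the average, which is how line-of-sight error compensation enters the scheme.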
Chapter 4: System Descriptions and Setup

4 SYSTEM DESCRIPTIONS AND SETUP

There are many profiling systems that can be used to capture data of objects for reconstruction. As mentioned in section 2, the different options fall under the categories of contact and non-contact. Depending on the specific needs of a design, a particular type is chosen. To emphasize the objective of this project and the flexibility of non-contact options, we have chosen two different profiling systems for our project. Both profiling systems will be discussed in the next two sections.

4.1 IVP Range Scanning Profiling System

There are a number of 3D laser scanners commercially available. The vast majority of 3D non-contact systems employ triangulation. A typical triangulation scheme projects a point or line (sheet) of laser light on an object and observes the intersection of the object and laser through electronic cameras, using some form of camera or light-sensing electronics. Triangulation systems are often classified as being either active or passive. Passive methods, such as stereo or photogrammetric systems, use only cameras. Industrial settings, however, use systems that are active, in that they project some form of illumination onto the object and measure the position of the illumination on the object.

Laser beams are normally categorized as stripe or point type, and laser-scanning devices can also be classified on those bases. A point type laser scanner obtains only one point at a time. In contrast, a stripe-type scanner radiates a line of laser beams, called a stripe, onto the surface so that several points can be acquired at once. A beam projector radiates the laser beam onto the surface, and a CCD camera senses the beam reflected from the surface; by sending out laser beams and receiving their reflections from the surface with CCD cameras, the 3D laser-scanning device acquires the surface information of the part. This describes a static system that only measures points where the laser line and the object meet; to fully measure an object in 3D space, the sensor is moved in the x, y, and z directions so as to fully cover the area of the object.

The first profiling system that we employ to reconstruct a 3D model of an object is the IVP Range Scanning System. The IVP Ranger SC386 is a laser triangulation scanner for range profiling using the MAPP family of Smart Vision Cameras. This system uses a laser stripe for acquisition. A typical triangulation sensor diagram is shown below in Figure 4.1.
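The range computation behind a triangulation sensor like the one in Figure 4.1 can be sketched with hypothetical numbers. In this simplified illustrative geometry (not the IVP Ranger's actual calibration model), the laser fires perpendicular to the baseline, the camera sits a known baseline distance away, and the range follows from the angle at which the camera sees the reflected spot.

```python
# Illustrative sketch of triangulation geometry (hypothetical numbers):
# laser at the origin firing along z, camera offset by the baseline.
# The camera's viewing angle to the spot gives the range directly.
import math

def triangulated_range(baseline_mm: float, camera_angle_deg: float) -> float:
    """Range to the laser spot: z = baseline * tan(camera angle)."""
    return baseline_mm * math.tan(math.radians(camera_angle_deg))

# With a 100 mm baseline, a spot seen at 45 degrees lies 100 mm away.
print(triangulated_range(100.0, 45.0))
```

The same relation explains the accuracy trade-off: as the angle approaches 90 degrees, the tangent grows steeply, so a fixed angular resolution in the camera translates into ever larger range uncertainty at longer distances.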
Arrangement for the IVP Smart Vision Camera and Laser

Figure 4.1 shows the angle placement of the camera relative to the laser light for the IVP Ranger System.

Figure 4.1: Principle of a laser triangulation system (diagram labels: baseline distance, range distance, field of view, sensor (CCD camera), objects).

The IVP Laser Scanner consists of a thin laser light and a camera that are used to obtain the profile of objects. The thin laser light is projected onto the object, and the CCD sensor in the camera detects the scan line (the peak of the reflected laser light). The profiles are displayed as a set of range images. The camera and laser are fixed on a stable structure that moves in a horizontal direction; because the laser and camera are fixed on the same belt, they move at the same speed. The speed is controlled through the software associated with the system and can be adjusted depending on the quality of scanning that is to be achieved. Figure 4.2 shows the IVP system's equipment and the configuration used for data acquisition. The black box houses the motor for the system; Figure 4.2b shows the Smart Camera and motor in more detail.

For data to be acquired using the IVP Range Scanner, the system must be correctly calibrated before every successful set of scanned data. Calibration of the IVP Range Scanner involves identifying the correct world coordinate data for the system, so that the measurements of the scanned object match both the real world data and the transformed data. To calibrate the IVP, a calibration grid is used to number all the coordinate data points. There are forty total points that must be identified, numbered from 0 to 39. Figure 4.3 is an example of the calibration grid used to calibrate the IVP Range Scanner.
In the calibration grid, all the black dots must turn blue to be recognized by the sensor.

Figure 4.2: Equipment setup for the IVP Range Scanning System: (a) front view of the IVP Range Scanner; (b) side angled view of the top of the IVP Range Scanner, showing the camera and motor.

Figure 4.4 shows the user interface for the IVP Range Scanner. In this user interface window, the camera image of the object is displayed along with the object profile and the range image that is acquired.
Figure 4.3: Calibration grid for the IVP Range Scanner

Figure 4.4: IVP Range Scanner user interface

During calibration of the IVP Range Scanner, the lights are turned off so that the calibration grid can be viewed by the camera source; the lights are also turned off to obtain the correct object profile of the calibration grid. The object profile can be seen as the white line on the calibration grid.
The goal of the object profile during calibration is to make sure that the entire object will be in the field-of-view of the camera and laser during data acquisition. The field-of-view refers to the measured distance between the laser's light, the camera, and the object (refer to Figure 4.2).

4.2 Genex 3D FaceCam Profiling System

As previously mentioned in section 2, structured lighting is the projection of a light pattern (plane, grid, or more complex shape) at a known angle onto an object. Structured lighting can be described as active triangulation. Active triangulation is a simple technique for achieving depth information with the help of structured lighting: the scene is scanned with a laser plane, and the locations of the reflected stripes are detected. The distortion along the detected profile is used to compute the depth information. This is the basic principle behind depth perception for machines, and scanning the object with the light pattern constructs 3D information about the shape of the object.

In most cases, the stripe pattern is projected as multiple stripes at once onto the scene. In order to distinguish between the stripes, they are coded (the Coded Light Approach) with different brightnesses or different colors. This method requires only a small number of images to obtain a full depth image; this can be achieved with a sequence of projections using a grid of vertical lines (light or dark), all numbered from left to right. The Coded Light Approach (CLA) is an absolute measurement method of direct codification. There are two common forms of coded light approaches: coding based on grey levels and coding based on color. In order to achieve a pattern where each pixel coordinate can be directly obtained, it is ideal to use a large range of color values, or to reduce the range and introduce periodicity into the pattern. Direct codification is usually constrained to neutral-color objects or colors that are not highly saturated. When scanning, the background should be a neutral color distinct from the object, so that when eliminating the background information, the two can be easily distinguished.

Figure 4.5 shows an example of a structured lighting grid projected onto our metallic ramp object. The light grid has a rainbow color effect, with the colors red, green, and blue repeating.

Figure 4.5: Structured light grid pattern projected on the ramp with a neutral tan background

In our second acquisition system for this project, we use the Genex 3D FaceCam System for reverse engineering purposes. Our purpose for selecting this system is to explore the accuracy and limitations of the machine for the reverse engineering of automotive parts. Figure 4.6a shows the Genex 3D FaceCam 500 System used in this project. In this system, three regular cameras, a digital camera, and a single projector are used, versus the one single camera and single projector mentioned earlier. In this configuration, the projector is located under the center lens and the digital camera is located on top of the center lens (see Figure 4.6 below). The use of three cameras yields three separate images in the results: a right, left, and centered image are obtained from the different camera lenses because of their view positions in the system setup.

Figure 4.6: Genex 3D FaceCam System

When acquiring data, there is a specified distance for how far the object can be from the Genex 3D FaceCam System. The total distance is 85 cm, with an allotted tolerance of ±15 cm. Figure 4.7 shows the allotted distance when acquiring data from the Genex 3D FaceCam System. In Figure 4.8, the coded light pattern projected onto the ramp can be seen, and the tan background is also clearly visible. This snapshot was taken directly from the user interface screen of the Genex 3D FaceCam system after the data for the left side of the ramp was collected.
center and right camera photos of the object . Three different view angles are captured with the three cameras to generate a complete model of the left side of the image.8: Genex 3D FaceCam User Interface screen that displays the left.7: Distance specification for data collection Figure 4.Chapter 4: System Descriptions and Setup 27 interface screen of the Genex 3D FaceCam system after the left side of the ramp data was collected. as mentioned previously in this section. Camera 1 Camera 2 Object placement Camera 3 85 cm Back ground Figure 4.
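The grey-level coding idea above — numbering stripe columns with a sequence of light/dark projections — is commonly implemented with a binary-reflected Gray code, so that adjacent columns differ in only one pattern. A minimal sketch of that standard technique follows (this is not the Genex system's proprietary coding, just an illustration of the principle):

```python
def gray(n):
    """Binary-reflected Gray code of n: adjacent codes differ by one bit."""
    return n ^ (n >> 1)

def stripe_patterns(n_bits, width):
    """n_bits black/white vertical-stripe patterns; pattern i holds bit i
    (most significant first) of each column's Gray code."""
    return [[(gray(c) >> (n_bits - 1 - i)) & 1 for c in range(width)]
            for i in range(n_bits)]

def decode_column(bits):
    """Recover a column index from the bit a pixel saw in each projection."""
    g = 0
    for b in bits:           # rebuild the Gray code, MSB first
        g = (g << 1) | b
    n = 0
    while g:                 # inverse Gray code: n = g ^ (g>>1) ^ (g>>2) ...
        n ^= g
        g >>= 1
    return n

pats = stripe_patterns(3, 8)   # 3 projections suffice to number 8 columns
```

With log2(width) projections, every pixel's column is identified absolutely, which is what makes the coded light approach an absolute measurement method.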
5 RESULTS AND DISCUSSIONS

In this section we present the results of our complete modeling process: data acquisition, data segmentation (pre-processing and post-processing), and the 3D CAD models. We will show a comparison of the derived data to the original measurements, and a comparison of the results obtained from both systems.

5.1 Part Identification

Our research is geared toward the reconstruction of automotive parts. For this project, three parts (objects) were identified: a ramp, a water pump, and an arm pulley. There was no predetermined criterion for selecting the parts; the main purpose was to select parts that were ideal for our system setup and that could be easily rotated. All the selected parts are composed mostly of metal.

The first part is shown below in Figure 5.1. The ramp object in Figure 5.1 is not classified as an automotive part; this specific ramp was specially designed for this project, on the basis of obtaining ground-truth information for the structured lighting system used in our experiments. The main reason for using a ramp shape is that it should be easy to measure the real-world dimensions and compare them to the CAD coordinate measurements after the complete 3D model has been generated. The ramp measures 4 inches on the base for both the length and the height, the sloped side measures 3.5 inches, and the two shorter sides measure 1.4 inches. The corner angles of the ramp are 90 degrees for the back and bottom surfaces. With the ramp positioned as in the photo below (Figure 5.1), the side with the holes sits at a perfect 45-degree angle; if the part is flipped so the holes are on top, the angle is slightly higher, at 47 degrees. This difference in angle does not pose any major difficulty in obtaining a 3D model of the part. The weight of the ramp is about 48 ounces (1,587 grams).

Figure 5.1: Photos of the ramp part: (a) front and side views, (b) back and top views, (c) ramp measurements

The water pump in Figure 5.2 was selected because of its complexity in shape and size. The water pump is 10 inches in length, 3.5 inches wide at the largest area, and 2.5 inches at the flat end. The total height of the water pump is about 4 inches on the larger half and 1.5 inches on the lower base. Due to the water pump's non-symmetrical shape, there is no defined center of gravity: positioned on the opposite side, it would not be balanced. The water pump's top base screw has a black tinted color.

Figure 5.2: Photos of the water pump part: (a) top and side views, (b) bottom view

Figure 5.3 is the pulley arm that was selected as the third object to reconstruct. The pulley arm measures about 11.5 inches in length. The width has three different measurements: the pulley's black circular ring, located on the top end of the part, measures 1.5 inches, the center measures 2.5 inches, and the lower base measures 1.8 inches from the ground or working surface. The height measurement is 2.4 inches. The part also contains circular holes that may pose a challenge in modeling due to occlusion. All measurements were taken by hand.

Figure 5.3: Photos of the pulley arm: (a) top view, (b) bottom view

In the next sections we display our results from both the IVP Range Scanner and the Genex 3D FaceCam System.

5.2 IVP Laser Range Data Results

The results are separated into three preliminary sections. Each section is a different set of scans, taken at different times and under varying conditions, including the ambient lighting from the room and outside lighting from the windows and doors.

The first set of results is from the ramp using the IVP Range Scanner. The ramp posed a challenge while scanning because of its surface finish: the highly reflective surface caused areas of the ramp to be occluded while collecting data at different orientations. Figure 5.4, parts (a) through (d), shows some CAD model examples of the ramp; the various views represent different angles of the ramp. Figure 5.4a is the right side of the ramp, and also shows part of the front view. Figure 5.4b is the left side of the ramp, showing the other half of the front view from a different angle. Figure 5.4c is the bottom view of the ramp, and Figure 5.4d is the back view of the ramp showing the holes. Some of the detail of the holes has been lost due to smoothing of the ramp's surface.

Figure 5.4: CAD model images of the ramp: (a) left/front view, (b) right side view, (c) bottom view, (d) back view

More complete CAD models can be seen in Figure 5.5. In these CAD images the holes have been filled and the ramp's surface has been smoothed a second time. We then obtain a complete 3D CAD model of the ramp.
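The trade-off noted above — smoothing evens the surface but erodes small features such as the hole edges — can be illustrated with simple neighbor-averaging (Laplacian-style) smoothing on a height profile. The profile and pass counts below are illustrative, not taken from the actual ramp data:

```python
def smooth(heights, passes=1):
    """Neighbor-averaging (Laplacian-style) smoothing of a height profile;
    the endpoints are held fixed."""
    h = list(heights)
    for _ in range(passes):
        h = [h[0]] + [(h[i - 1] + h[i] + h[i + 1]) / 3.0
                      for i in range(1, len(h) - 1)] + [h[-1]]
    return h

# A sharp 1-unit ridge (a small surface detail) flattens with each pass.
profile = [0, 0, 0, 1, 0, 0, 0]
once = smooth(profile, 1)
many = smooth(profile, 10)
```

A single pass already reduces the ridge, and repeated passes erase it almost entirely — the same reason detail was lost around the ramp's holes after the surface was smoothed twice.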
Figure 5.5: 3D ramp CAD models: (a) left/front side view, (b) back/left side view, (c) bottom view, (d) right/top/front side view

The second set of data collected using the IVP Range Scanner was of the water pump; the water pump image can be seen in Section 2.1, Figure 2.1. The lighting does not affect this system's data as much as it affects the Genex System's data, because a filter can be placed on the camera lens to filter out any unnecessary light. For our experiments we did not use the filter, because the light sources in our room environment were not a major issue.

Figure 5.6 shows the first attempt at reconstructing the bottom of the water pump, obtained by merging several different views. After merging the views using the volumetric merge technique, overlapping areas created holes in the model; the holes were filled to produce a more complete, watertight model. Part (a) of Figure 5.6 shows the point cloud information and part (b) the solid mesh model after the holes were filled. The point cloud display shows the distances (spaces) between the individual points, while the solid model shows a smooth, continuous surface. The range images used to reconstruct the bottom surface can be seen in Figure 3.2 above. Figure 5.6c is the solid mesh model after the hole filling was applied.

Figure 5.6: Reconstructed bottom surface of the water pump: (a) point cloud information, (b) solid mesh model, (c) smooth reconstructed mesh model

Figure 5.7a is a reconstruction of the top view of the water pump, displayed in varying colors to show the depth information: blue represents the highest part of the water pump, orange the surface closest to the ground, and green the middle height level. This image can be compared with the original photo in Figure 5.2. Figure 5.7b is the depth information in a rotated side view of the top angle, which gives a more vivid impression of the depth of the water pump. This top view does not yet have the side views of the water pump merged into it.
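The blue/green/orange depth coloring described above is a simple pseudo-color mapping of height. A sketch of one such three-band mapping (the band boundaries are an illustrative assumption, not the actual software's scheme):

```python
def height_to_color(z, z_min, z_max):
    """Map a height to a three-band pseudo-color scheme:
    orange = lowest third, green = middle third, blue = highest third."""
    t = (z - z_min) / (z_max - z_min)
    if t < 1 / 3:
        return "orange"
    if t < 2 / 3:
        return "green"
    return "blue"

# Sample heights (in inches) spanning the model's range:
colors = [height_to_color(z, 0.0, 3.0) for z in (0.2, 1.5, 2.9)]
```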
Figure 5.7: Reconstructed views of the water pump: (a) reconstructed top view, (b) side view reconstruction

The images in Figure 5.8 depict our attempts at modeling the right and left sides of the water pump. The areas with occluded data must be captured again at a different angle to complete the side views. Figures 5.8a and 5.8b are the cleaned point cloud data, with the height information shown in varying colors. Parts (c) and (d) of Figure 5.8 are the second attempts at modeling the side views; the missing data can be seen clearly in these views. These views will be merged together to complete the side profile of the water pump.

Figure 5.8: Cleaned point cloud data of the side views of the water pump taken with the IVP Range Scanner: (a) right view, (b) left view, (c) back view

Figure 5.9 shows the complete 3D CAD model of the water pump obtained using the IVP Range Scanner, here the complete front view; again, the varying colors show the height changes across the water pump.

Figure 5.9: Point cloud model of the water pump: (a) CAD model showing height variations, (b) top view, (c) right view, (d) left view

The pulley arm results are displayed in Figure 5.10, which shows the original photo of the pulley arm as well as the point cloud information. Figure 5.10a is the bottom side of the pulley arm and Figure 5.10b the top side. The color variations in the CAD model images indicate height relative to the laser light. To complete this model, additional views must be merged with the current views to obtain a successful 3D model of the pulley arm; this will eliminate the holes and occluded areas of the pulley arm.
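Merging additional views onto an existing model, as described above, requires rigidly registering each new range image to the model. A common building block is the least-squares alignment of corresponding points (the SVD/Kabsch step at the heart of ICP). A minimal 2D sketch with synthetic points — not the actual pipeline used in our software — looks like this:

```python
import numpy as np

def rigid_align(P, Q):
    """Best-fit rotation R and translation t mapping points P onto Q
    (rows are corresponding points), via the SVD/Kabsch method."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic "second view": the first view rotated 30 degrees and shifted.
rng = np.random.default_rng(0)
P = rng.random((10, 2))
theta = np.radians(30)
R0 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
Q = P @ R0.T + np.array([0.5, -0.2])
R, t = rigid_align(P, Q)
aligned = P @ R.T + t
```

In a full ICP loop the correspondences themselves are re-estimated (nearest neighbors) and this alignment step is repeated until convergence; overlapping surface area between views is what makes the correspondences reliable.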
Figure 5.10: Pulley arm: (a) photo of the side view profile of the pulley arm, (b) point cloud CAD model of the bottom side, (c) point cloud CAD model of the top side

5.3 Genex Structured Light Data Results

To collect data using the Genex 3D FaceCam System, there must be a neutral background to contrast with the object being scanned. For the first set of data collection we used a box as the neutral background. The box (Figure 5.14) also served to control the ambient light that was present from the window and room lighting. Controlling the ambient light was important in the case of the ramp because, as mentioned before, the part is highly reflective: we wanted to eliminate as much as possible of the light being reflected off the part back into the camera lens. Figure 5.14 shows an example of the water pump placed in the box to control the light while still maintaining a neutral background. The ramp was also positioned on a small black object to ensure that its detailed edges are captured when collecting the data; this minimizes the occlusion in the data sets.

Figure 5.11 shows a few textured range images collected with the Genex System; the different images show different views and orientations of the ramp. In parts (c) and (d), the black regions on the ramp are areas that were not captured, due to the positioning of the object and lighting factors.
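Separating the part from a neutral background of known color can be sketched as a simple color-distance threshold. The Genex software's own data-reduction tools are more involved; the colors and threshold here are illustrative assumptions:

```python
def remove_background(pixels, bg_color, threshold=60):
    """Keep only pixels whose RGB distance from the neutral background
    color exceeds the threshold (Euclidean distance, 0-255 channels)."""
    def dist(c1, c2):
        return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5
    return [p for p in pixels if dist(p, bg_color) > threshold]

tan = (210, 180, 140)                   # neutral tan background
pixels = [(212, 178, 143),              # background-like pixel
          (90, 90, 95),                 # dark metallic part pixel
          (205, 185, 135)]              # background-like pixel
kept = remove_background(pixels, tan)   # only the part pixel survives
```

This is why the background must be a neutral color distinct from the object: the larger the color distance between part and background, the cleaner the separation.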
Figure 5.11: Textured ramp images using the Genex System: (a) back view, (b) right/back side view, (c) left view, (d) front view

In the images captured of the ramp, data is missing in sections of the ramp, because these parts of the ramp reflected the light that was projected onto the object during data capture. Examples of the missing data can be seen clearly in the CAD models of the ramp displayed in Figure 5.12, which shows the first attempts at reconstructing the ramp using the Genex System. Figure 5.12a is the right side of the ramp and Figure 5.12b the back view; parts (c) and (d) of Figure 5.12 are the reconstructed front and left views. All the views can be compared with the original photo in Figure 5.1 of Section 5.1. In this reconstruction some of the edge detail is missing (occluded) due to the edges being reflective. To solve this problem, more scans of the ramp are taken at different angles and merged with the already acquired range images to fill in the missing data.

Figure 5.12: Genex System reconstructed views of the ramp: (a) right view, (b) back view, (c) front view, (d) left view

Figure 5.13 shows the complete 3D CAD models of the ramp obtained from the Genex System; this is the second attempt at modeling the ramp. Compared with Figure 5.12, these CAD model images show the edge details that were previously missing and also have a smoother finish. Figure 5.13a is the front CAD model view and Figure 5.13b the back CAD model view. In Figure 5.13b some misalignment of the views can be seen, causing the uneven edge detail; with proper smoothing, this unevenness can be fixed.

Figure 5.13: Complete final 3D model of the ramp: (a) front view, (b) back view, (c) side view, (d) bottom view

Figure 5.14: Water pump placed in box for neutral background
Figure 5.15 shows a few of the textured water pump range images that were captured using the Genex 3D System. The image contrast has been enhanced for better viewing of the captured detail. These images also show some of the background from the box; the excess background does appear in the raw data files from the Genex 3D System, but it is easily cleaned away using the data reduction techniques in the software.

Figure 5.15: Textured water pump images using the Genex System: (a) top view, (b) view of the bottom surface, (c) side view, (d) top view

Figure 5.16 shows the CAD models of the water pump; those displayed are the first attempts at reconstructing the bottom surface. Figure 5.16a is the point cloud CAD model of the bottom surface, and parts (b) through (d) are more detailed CAD models of the bottom surface. The blades of the bottom surface can be seen clearly, and the edges become more distinct viewing the images from (b) through (d); Figure 5.16d has a smoother surface finish compared with (b) and (c). The edges of the water pump are not yet refined or smooth. To obtain the detail of the holes along the outer brim of the water pump, more views can be merged into the current model. The reconstruction efforts yielded successful results as far as most of the detail of the water pump is concerned.

Figure 5.16: Genex System reconstructed bottom surface of the water pump: (a) point cloud data, (b) solid mesh model, (c) unsmoothed textured surface, (d) smoothed textured surface

Figure 5.17 displays the first and second attempts at reconstructing the top surface of the water pump. In the first attempt there are occlusions in the center of the pump; the occluded data is recovered with more range scans. Some hole filling was also performed on the CAD model, which creates a more complete and smooth surface in the final model. However, with any smoothing technique, some of the detail of the CAD model is lost: the loss of detail can be seen on the outer brim, at the holes of the model, in Figure 5.17b. Nevertheless, a good percentage of the detail on the ridges is still maintained in the second attempt of Figure 5.17b.

Figure 5.17: Genex System reconstructed top surface of the water pump: (a) surface reconstruction with missing data, (b) complete surface reconstruction with filled holes

Figure 5.18 shows some CAD models of the side views of the water pump using the Genex System. Figure 5.18a is the right side view and Figure 5.18b the left side view. Figure 5.18c is another right side view, angled to show how much detail is captured when the part is repositioned at a new orientation. Figure 5.18d is a merging of two different views to complete the left side of the water pump. Once the left side is complete, the right side CAD model can be merged onto the existing model; the views of the right and left sides are used to complete the top and bottom surfaces of the water pump. Viewing the CAD images from Figure 5.18a through 5.18f, it becomes more distinct how the views are merged together to create a complete model. Compared with the original water pump image, they show a more complete top and side view. The details of the side profile of the CAD model still contain some unsmooth surfaces; applying the smoothing technique again will even them out. There is, however, a drawback to smoothing the surface several times: the smaller details of the edges and raised surfaces may also be smoothed away, so the extra smoothing results in the loss of important details.

Figure 5.18: Genex System reconstructed views of the water pump: (a) right view, (b) left view, (c) right/angled view, (d) left/angled view, (e) front/angled view, (f) top/left angled view

The final merging can be seen below. Figure 5.19 shows four different views of the water pump: the front, right side, left side, and back. The first two images in Figure 5.19 are the original photos of the water pump, placed there for comparison with the 3D model in parts (c) through (f). This 3D CAD model shows all the details of the original water pump. The next section compares the data obtained from the two systems.

Figure 5.19: (a) and (b) water pump photos; (c)–(f) final 3D CAD model views of the water pump
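Hole filling like that applied in Figures 5.6 and 5.17 can, in its simplest form, be sketched as interpolating a missing height sample from its valid neighbors. Real packages fit surfaces over larger neighborhoods; the tiny grid below is purely illustrative:

```python
import math

def fill_holes(grid):
    """Replace NaN cells in a height grid with the mean of their valid
    4-neighbors; one pass suffices for isolated holes."""
    filled = [row[:] for row in grid]
    for r, row in enumerate(grid):
        for c, z in enumerate(row):
            if math.isnan(z):
                nbrs = [grid[rr][cc]
                        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                        if 0 <= rr < len(grid) and 0 <= cc < len(row)
                        and not math.isnan(grid[rr][cc])]
                if nbrs:
                    filled[r][c] = sum(nbrs) / len(nbrs)
    return filled

nan = float("nan")
grid = [[1.0, 1.0, 1.0],
        [1.0, nan, 2.0],
        [1.0, 2.0, 2.0]]
patched = fill_holes(grid)   # the hole becomes the neighbor average, 1.5
```

Because the filled value is an average, fine detail inside the hole (such as a ridge edge) cannot be recovered — consistent with the detail loss observed on the outer brim after hole filling and smoothing.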
5.4 Comparison: IVP Laser Range Scanner and Genex 3D FaceCam Data

In this section a comparison of the IVP Range Scanner and the Genex 3D FaceCam is made. We first compare the ramp models generated by the two systems. In Figure 5.20, the IVP ramp and the Genex ramp are overlapped on top of each other to compare the surface differences; parts (a) through (e) are rotated examples of the standard deviation calculated from both ramp models. The standard deviation is calculated by registering the two models into one shell and then using the standard deviation option in the software to perform the calculations. The blue regions represent the surfaces that are touching; that is, in the overlapping surfaces the blue region represents a distance of 0 mm between the two surfaces, the position 0.0 meaning zero coordinate distance. The red regions represent the maximum deviation, the surfaces being 3.9 mm apart, as shown in Figure 5.21. The average deviation between the surfaces is calculated to be about 0.91693 mm. The distance values shown here may differ slightly from the ones shown in real time when the deviation is calculated, because the picking of points is manual and is not always precise.

Figure 5.20: Standard deviation calculations of the IVP ramp overlapped with the Genex ramp, shown in five rotated views (a–e)

Figure 5.21: Standard deviation of the IVP ramp overlapped with the Genex ramp model

Table 1 below shows the measurements derived from the 3D CAD models generated from the data collected with both systems, compared with the actual ramp measurements. The derived measurements are off from the actual measurements because the points used to determine the measurements are chosen manually. Figure 5.22 shows the sides of the ramp whose measurements are used in the table.
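The deviation numbers above (average about 0.917 mm, maximum 3.9 mm) come from the vendor software; the underlying computation can be sketched as nearest-neighbor distances between two registered point clouds. The brute-force version below uses synthetic points, not our actual scan data:

```python
import numpy as np

def deviation_stats(A, B):
    """Mean and max of each point in A's distance to its nearest point in B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # all pairwise
    nearest = d.min(axis=1)
    return nearest.mean(), nearest.max()

# Two synthetic "scans" of the same plane, one offset 0.5 mm in z.
xy = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
A = np.c_[xy, np.zeros(len(xy))]
B = np.c_[xy, np.full(len(xy), 0.5)]
mean_dev, max_dev = deviation_stats(A, B)   # both 0.5 for this offset
```

For real scans a spatial index (k-d tree) replaces the all-pairs distance matrix, but the statistics reported are the same.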
3 in. Take the water pump for example (Figure 5. (38. This is primarily because the current setup is built in house according to our specifications. and overall flexibility to acquire data.814 mm) Genex 3D FaceCam Results 2. Table 1 Actual Ramp Measurements width: 3 in (76.62 in (91.6 mm) height: 4 in (101. The IVP Range Scanner requires fixing the object at particular angles to capture the same profile.1 mm) IVP Laser Range Scanner Results 2. This could require placing the object on a smaller object that is able to absorb the laser light or be hidden from the laser.86 in (98. This makes it easier when overlapping the views to recreate the object. there is less occlusion in the data.948 mm) 3.5 in.422 mm) 3. Both systems are user friendly as far as setup and data collection is required. Fewer views means.02 mm) Table 2 is a comparison of both the IVP Range Scanner and the Genex 3D FaceCam system.6 in (91. in that they can both be easily moved.2).012 mm) 1.44 mm) 3.41 in. The contrast is that the IVP Range Scanner requires some assembly of the system.2 mm) length: 4 in (101.95 in (74. This comparisons table was generated based on observations of the systems performance.044 mm) 1. the Genex system is able to capture the side and either some area of the top and bottom of the water pump.5 in 4 in 4 in 3 in Figure 5.78 in (96. The flexibility of both systems is the same in comparison. (35.Chapter 5: Results and Discussions 47 1.93 mm) 3.22: Simulated ramp. because the side of the water pump is small in area.6 mm) 1. The contrast in the systems is that when setting up the IVP Range Scanner to .93 in (74. This picture shows the measurements of the ramp. (33. The Genex system has the advantage of fewer views when the object is of high complexity.
The IVP Range Scanner must be calibrated before any data is collected, and in the process of calibrating it the world coordinate points have to be input manually to make sure that the data is transformed properly into the 3D software. The data must also have some y-scaling applied to it so that the acquired measurements match the real dimensions of the part, for instance the water pump. The Genex 3D FaceCam does not have these requirements. What does affect the Genex system is the ambient lighting in the room: too much ambient light can saturate the object, causing distortion in the acquired data. The IVP, for its part, creates a shadow effect of the object, because of the laser light reflecting off the object; the Genex system does not have this problem, because it captures data from the front of the object, as opposed to at an angle like the IVP Range Scanner.

Noise in the images from either system is not a big factor when collecting data. The noise from the IVP can be reduced as long as the scan speed is chosen to produce a smooth surface: the speed should be slowed to avoid rigid jerks of the laser as it scans the profile of the object. The Genex system has the option of reducing noise when post-processing is performed on the collected data. Both systems pose the same problem when registration of different views comes into play; the way to overcome it is next-best-view planning, capturing the object from angles where the surfaces overlap.

Table 2: Observed comparison of the two systems

IVP Range Scanner:
1. Creates a shadow effect when scanning
2. Difficulty positioning objects (complexity in shape)
3. Needs more overlapping views
4. Calibration of the system and y-scaling after scans
5. Manual input of the world coordinate information
6. If calibration is not correct, it must be redone
7. Portability of the system is not optional
8. Difficult registration when the object is not symmetrical
9. Less noise in scans with the new setup

Genex System:
1. Creates distortion from the ambient lighting
2. Difficulty positioning objects with the current system setup
3. Needs fewer views to complete a 3D object
4. No calibration or y-scaling
5. No manual conversion of the world coordinate data
6. Overall system is user friendly
7. Portability is manageable
8. Difficult registration when the object is not symmetrical
9. Overall system has better resolution
6 CONCLUSION

Reverse engineering of geometric models and parts for CAD use is a rapidly evolving discipline in which interest is currently high. This is due in part to the recent commercial availability of active non-contact systems that can produce a level of accuracy sufficient for many applications. In this project we have made several achievements in the reconstruction and modeling of our selected parts using two different systems and techniques. Our achievements are:

• A literature survey on reverse engineering using CMMs, lasers, and structured lighting applications.
• Successfully modeling the top and bottom surfaces of the water pump using the IVP Range Scanner.
• Successfully generating a 3D model of the ramp using the Genex 3D FaceCam System.
• Successfully modeling the water pump using the Genex System.

Revisiting the data acquisition systems chosen for this project, using a laser-based triangulation system and a structured lighting system offers many different benefits that make both systems suitable for 3D reconstruction, and each has advantages that make it an optimal choice for data acquisition tasks. The Genex System is more user friendly than the IVP Range Scanner because no calibration is involved in the setup process. There are, however, necessary steps that must be followed when turning the Genex System on and off to ensure that its components are not affected during operation; following the correct steps guarantees the longevity of the equipment. The Genex System can also be relocated at the discretion of the user, whereas relocating and adjusting the IVP System's components is far too tedious because of its structural setup. Because the Genex System is flexible in location, the ambient lighting can also be better controlled.

A laser-based triangulation system is less troubled by ambient lighting affecting the acquisition of data: the IVP Range Scanner has the advantage that ambient light can be filtered out through filters attached to the camera lens. For the parts used in our experiments, the use of the filter was not required. The structured lighting system, by contrast, is affected by the amount of ambient light and natural light sources; in our experiments the ambient lighting was controlled through additional backgrounds placed around the part. This plays an important role when scanning objects that have highly reflective surfaces. With the use of smaller support objects, different angles could also be captured. For the water pump, positioning was a challenge.

Both the laser scanning technique and the structured light technique proved to have successful and promising results in modeling the parts. Even though the two techniques model parts using different methods, once multiple scans were acquired the post-processing of the data was similar; both the water pump and the ramp had CAD models generated from the data. The importance of obtaining multiple single-view profiles was also discussed: more scans are required to produce a complete 3D model of a part. If multiple views of all the selected parts are acquired, the 3D CAD models can continue to be improved upon; the models will show less occlusion and missing data, and will be free of noise and abnormal surfaces caused by surface reflectance or object positioning. Both systems produced results that are less noisy and have smoother surface textures.
pp 339-357. Xiong. The American Heritage Dictionary of English Language. pp. 472-475. 13(2):119–152. Vol. D. Zhang. Houghton Mifflin Company. “Computer Aided Measurement of Profile Error of Complex Surfaces and Curves: Theory and Algorithm”. “Harmonic Maps and Their Applications in Surface Matching”. 2000. International Journal of Machine Tools and Manufacturing.     . 2003. “Iterative Point Matching for Registration of Free-Form Curves and Surfaces”. Vol. 1999. Fourth Edition. Journal of Materials Processing Technology. 139. Y. No.References 54  Y. Z. L. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR ’99). 3. Zhang and M. Hebert. 30. International Journal of Computer Vision. pp. Zhang. 1994. “Research into the engineering application of reverse engineering technology”. 1990. 524–530.