Appearance Compression and Synthesis based on 3D Model

for Mixed Reality

Ko Nishino Yoichi Sato Katsushi Ikeuchi
Institute of Industrial Science, The University of Tokyo
7-22-1 Roppongi, Minato-ku, Tokyo 106-8558, Japan
{kon,ysato,ki}@cvl.iis.u-tokyo.ac.jp

Abstract

Rendering photorealistic virtual objects from their real images is one of the main research issues in mixed reality systems. We previously proposed the Eigen-Texture method, a new rendering method for generating virtual images of objects from their real images, to deal with the problems posed by past work in image-based methods and model-based methods. The Eigen-Texture method samples appearances of a real object under various illumination and viewing conditions, and compresses them in the 2D coordinate system defined on the 3D model surface. However, we had a serious limitation in our system, due to the alignment problem of the 3D model and color images. In this paper we deal with this limitation by solving the alignment problem; we do this by using the method originally designed by Viola[14]. This paper describes the method, and reports on how we implement it.

1. Introduction

Recently, mixed reality systems have been one of the main topics in the research areas of computer vision and computer graphics. Main research issues in these mixed reality systems include how to render photorealistic virtual objects from their real images, and then how to seamlessly integrate those virtual objects with real images. Our interest is in the former topic: rendering virtual object images from real images. The CV and CG communities have proposed two representative rendering methods to obtain such virtual images from real objects: image-based and model-based methods.

The basic idea of representative image-based methods[1, 2, 3, 5] is to acquire a set of color images of a real object, store them on the disk of a computer efficiently (via compression), and then synthesize virtual object images either by selecting an appropriate image from the stored set or by interpolating multiple images. Since the main purpose of image-based rendering is to render virtual images simply from real images without analyzing any reflectance characteristics of objects, the method can be applied to a wide variety of real objects. And because it is also quite simple and handy, image-based rendering is ideal for displaying an object as a stand-alone without any background for virtual reality. On the other hand, image-based methods have disadvantages for application to mixed reality. To avoid making the system complicated, few image-based rendering methods employ accurate 3D models of real objects. Although Lumigraph[3] uses a volume model of the object for determining the basis function of lumigraphs, the volume model obtained from 2D images is not accurate enough to be used in mixed reality systems, i.e. for the purpose of making cast shadows under real illuminations corresponding to the real background image.

Unlike image-based methods, model-based methods[10, 11] analyze the reflectance characteristics on the surface of the real object to render photorealistic virtual objects by assuming reflectance models. Since the reflectance parameters are obtained at every surface point of the object, integration of synthesized images with the real background can be accomplished quite realistically; the method can generate a realistic appearance of an object as well as shadows cast by the object onto the background. However, model-based rendering has nontrivial intrinsic constraints; it cannot be applied to objects whose reflectance properties cannot be approximated by using simple reflectance models. Furthermore, since computation of reflectance parameters needs the surface normal at each point of the surface, it cannot be applied to objects whose surfaces are rough.

To overcome the problems posed by the previous methods, we previously proposed a new rendering method, which we refer to as the Eigen-Texture method[7]. The Eigen-Texture method creates a 3D model of an object from a sequence of range images. The method aligns and pastes color images of the object onto the 3D surface of the object model. Then, it compresses those appearances in the 2D

0-7695-0164-8/99 $10.00 (c) 1999 IEEE

coordinate system defined on the 3D model surface. This compression is accomplished using the eigenspace method, and the synthesis process is achieved using the inverse transformation of the eigenspace method. Because the method does not assume any reflectance model as model-based methods do, it can be applied to a wide variety of objects, and cast shadows can be generated using the 3D model. Also, thanks to the linearity of image brightness, virtual images under a complicated illumination condition can be generated by summation of component virtual images sampled under single illuminations.

However, we experienced a severe constraint on our system flexibility: the alignment problem of the 3D model and color images. Usually the intensity images and the range images are taken in different coordinates, unless a light stripe range finder is used to acquire both image sequences with the same CCD camera. Since we used a light stripe range finder[4, 9] to obtain the range image sequence, the intensity images and the color images were acquired in the same coordinate, which constrained our image acquiring system. In this paper, we solve the alignment problem of the 3D model and color images by using the method originally designed by Viola[14]: alignment by maximization of mutual information.

The remainder of the paper is organized as follows. In Section 2, we describe the Eigen-Texture method, and the alignment of the 3D model and color images by maximizing the mutual information between them (Section 2.1). In Section 3, we describe the implementation of the proposed method, including how we use a laser range finder to obtain the range image sequence, and discuss the results of the experiments we conducted to demonstrate the efficiency of the method. In Section 4, we describe our future work and conclude this paper.

2. Eigen-Texture Method

In this section we describe the theory of the Eigen-Texture method. Figure 1 displays an overview of the proposed method. The Eigen-Texture method takes both intensity images and range images as the input. A 3D model of the object is created from the sequence of range images, through mesh generation from each range image, registration and integration of the mesh models, and mesh decimation of the final 3D model [13, 15]. The first step of our method is to align each intensity image to the 3D model; we accomplish this alignment by using the method originally designed by Viola[14]. Once each intensity image is aligned, the method pastes the intensity images of the object onto the 3D surface of the object model. Then, it compresses those appearances in the 2D coordinate system defined on the 3D model surface; this compression is accomplished using the eigenspace method.

Figure 1. Outline of the Eigen-Texture method.

2.1. Alignment

The alignment method proposed by Viola[14] is based on a formulation of the mutual information between the model and the image. Given a 3D model of an object and a pose, a model for the imaging process could be used to predict the image that will result; for example, if the object has a Lambertian surface, the image can be predicted from the surface normals of the object surface. The predicted image can then be directly compared to the actual image. If the object model and pose are correct, the predicted and actual images should be identical, or close to it. This method therefore aligns the 3D model and the intensity image by maximizing the mutual information between them.

Let us define u(x) as the property of the model at a particular point x on the model surface, and v(T(x)) as the property of the image at the point corresponding to x under a transformation T. The mutual information between the model and the image can be described as Eq.1:

    I(u(x), v(T(x))) ≡ h(u(x)) + h(v(T(x))) − h(u(x), v(T(x)))    (1)

where h(·) denotes the entropy of a random variable. The first term in Eq.1 is the entropy in the model, and it does not depend on T. The second term is the entropy of the part of the image into which the model projects; it encourages transformations that project the model u into complex parts of the image v. The third term is the negative joint entropy of the model and the image; it encourages transformations where the model explains the image well.
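The behavior of Eq.1 can be illustrated numerically. The sketch below is not the estimator used in the paper (Viola's method uses the Parzen-window estimates described next); here a simple joint histogram stands in, and all data are synthetic. It checks that the mutual information between a model property and an image property is larger when the two are in correspondence than when they are not:

```python
import numpy as np

def entropy(p):
    # Entropy (in nats) of a discretized distribution; empty bins are skipped.
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def mutual_information(u, v, bins=16):
    # Eq. 1: I(u, v) = h(u) + h(v) - h(u, v), from a joint histogram estimate.
    joint, _, _ = np.histogram2d(u, v, bins=bins)
    joint /= joint.sum()
    return (entropy(joint.sum(axis=1)) + entropy(joint.sum(axis=0))
            - entropy(joint.ravel()))

rng = np.random.default_rng(0)
u = rng.random(10000)                                # model property samples
v_aligned = u + 0.05 * rng.standard_normal(10000)    # image property, correct T
v_misaligned = rng.random(10000)                     # image property, wrong T

mi_aligned = mutual_information(u, v_aligned)
mi_misaligned = mutual_information(u, v_misaligned)
assert mi_aligned > mi_misaligned   # MI peaks when model and image correspond
```

The same ordering is what makes mutual information usable as an alignment objective: a wrong pose makes the image property nearly independent of the model property.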

The alignment problem of the model and the image can now be replaced by finding an estimate of the transformation T̂ that aligns the model and the image by maximizing their mutual information over the transformations T:

    T̂ = arg max_T I(u(x), v(T(x))) = arg max_T [ h(v(T(x))) − h(u(x), v(T(x))) ]    (2)

To solve this problem, Viola's method first estimates the entropies from samples. It approximates the underlying probability density p(z) by a superposition of Gaussian densities centered on the elements of a sample A drawn from z (Eq.3), and approximates the statistical expectation with the sample average over another sample B drawn from z (Eq.4):

    p(z) ≈ (1 / N_A) Σ_{z_j ∈ A} G_ψ(z − z_j)    (3)

    E_z(f(z)) ≈ (1 / N_B) Σ_{z_i ∈ B} f(z_i)    (4)

where G_ψ(z) ≡ (2π)^{−n/2} |ψ|^{−1/2} exp(−(1/2) z^T ψ^{−1} z). With these approximations, the entropy of a random variable z can be approximated as follows:

    h(z) = −E_z(ln p(z)) ≈ −(1 / N_B) Σ_{z_i ∈ B} ln (1 / N_A) Σ_{z_j ∈ A} G_ψ(z_i − z_j)    (5)

This approximation of entropy may now be used to approximate the derivative of the mutual information with respect to the transformation:

    dI/dT = (1 / N_B) Σ_{x_i ∈ B} Σ_{x_j ∈ A} (v_i − v_j)^T [ W_v(v_i, v_j) ψ_v^{−1} − W_uv(w_i, w_j) ψ_vv^{−1} ] (d/dT)(v_i − v_j)    (6)

where u_i ≡ u(x_i), v_i ≡ v(T(x_i)), w_i ≡ [u_i, v_i]^T, and

    W_v(v_i, v_j) ≡ G_ψv(v_i − v_j) / Σ_{x_k ∈ A} G_ψv(v_i − v_k),
    W_uv(w_i, w_j) ≡ G_ψuv(w_i − w_j) / Σ_{x_k ∈ A} G_ψuv(w_i − w_k).

The covariance matrices of the component densities used in the approximation scheme for the joint density are block diagonal:

    ψ_uv^{−1} = DIAG(ψ_uu^{−1}, ψ_vv^{−1})    (7)

The local maximum of mutual information is sought by taking repeated steps that are proportional to the approximation of the derivative of the mutual information with respect to the transformation:

    Repeat:
        A ← {sample of size N_A drawn from x}
        B ← {sample of size N_B drawn from x}
        T ← T + λ dÎ/dT

The procedure is repeated a fixed number of times or until convergence is detected.

In the experiments described in Section 3, we used a laser range finder to acquire the range image sequence. Like most other laser range finders, the laser range finder we used (PULSTEC TDS-1500) returns the reflectance power value of the laser at each point the laser hits on the object surface, as shown in Figure 2 (Footnote 1). This value depends predominantly on the texture of the model surface, so that it is highly correlated with the intensity values of the object surface in its color images. Regarding this fact, we used these reflectance power values, as well as the surface normal at each point of the model surface, as the property of the model to be evaluated in the alignment procedure; the pixel values are evaluated as the property of the image.

Figure 2. A color image and a range image with the reflectance power attribute.

(Footnote 1: A clear image can be found in the color version of this paper put on the CD-ROM of the proceedings.)
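A minimal sketch of the sample-based estimates of Eqs. 3–5, applied to a hypothetical 1D alignment problem. The signals, the shift transformation T, and all parameter values below are invented for illustration; the analytic derivative of Eq. 6 is not implemented, the sketch only evaluates the mutual-information objective that the Repeat loop would climb:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1D stand-ins: a smooth random "model" property over positions xs,
# and an "image" that is the same property shifted by 5 units (true T = 5).
xs = np.linspace(0.0, 50.0, 2001)
u_sig = np.interp(xs, np.arange(0, 51, 2.5), rng.standard_normal(21))
def u(x): return np.interp(x, xs, u_sig)          # model property u(x)
def w(x): return np.interp(x - 5.0, xs, u_sig)    # image property at position x

def parzen_entropy(zb, za, psi):
    # Eq. 5: h(z) ~ -(1/NB) sum_B ln (1/NA) sum_A G_psi(z_i - z_j)
    d = zb[:, None] - za[None, :]
    dens = np.exp(-0.5 * d ** 2 / psi).mean(axis=1) / np.sqrt(2 * np.pi * psi)
    return float(-np.mean(np.log(dens + 1e-300)))

def parzen_joint_entropy(ub, vb, ua, va, psi):
    # Same estimator for the joint density, with a diagonal (product) kernel.
    du = ub[:, None] - ua[None, :]
    dv = vb[:, None] - va[None, :]
    dens = np.exp(-0.5 * (du ** 2 + dv ** 2) / psi).mean(axis=1) / (2 * np.pi * psi)
    return float(-np.mean(np.log(dens + 1e-300)))

def mutual_info(T, psi=0.05, NA=200, NB=200):
    # Eq. 1 evaluated with the estimates of Eqs. 3-5 on two position samples.
    xa, xb = rng.uniform(10.0, 40.0, NA), rng.uniform(10.0, 40.0, NB)
    ua_, ub_ = u(xa), u(xb)
    va_, vb_ = w(xa + T), w(xb + T)
    return (parzen_entropy(ub_, ua_, psi) + parzen_entropy(vb_, va_, psi)
            - parzen_joint_entropy(ub_, vb_, ua_, va_, psi))

mi_correct, mi_wrong = mutual_info(5.0), mutual_info(0.0)
assert mi_correct > mi_wrong   # the objective peaks at the correct T
```

The paper's procedure then climbs this objective: at each step it draws fresh samples A and B and updates T ← T + λ dÎ/dT using the analytic derivative of Eq. 6.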

2.2. Appearance Compression and Synthesis

After the alignment is accomplished, each color image is divided into small areas that correspond to the triangular patches on the 3D model. The color images on the small areas are warped onto a normalized triangular patch; each triangular patch is normalized to have the same shape and size as that of the others. This paper refers to this normalized triangular patch as a cell, and to its color image as a cell image. A sequence of cell images from the same cell is collected as shown in Figure 3. This sequence depicts appearance variations on the same physical patch of the object under various viewing conditions. The cell images corresponding to each cell are compressed using the eigenspace method. Note that the compression is done on a sequence of cell images whose appearance change is due only to the change of brightness, so a high compression ratio can be expected with the eigenspace method. Furthermore, it is possible to interpolate appearances in the eigenspace.

Figure 3. A sequence of cell images.

The color images are represented in RGB pixels with 24-bit depth, but the compression is accomplished in Y Cr Cb using 4:1:1 subsampling. First, each cell image is converted into a 1 × 3N vector X_m by arranging the color values for each color band Y Cr Cb in a raster scan manner (Eq.8); here, N is the number of pixels in each cell image, m is the pose number, and M is the total number of poses of the real object:

    X_m = [ x^Y_{m,1} ... x^Y_{m,N}  x^{Cr}_{m,1} ... x^{Cr}_{m,N}  x^{Cb}_{m,1} ... x^{Cb}_{m,N} ]    (8)

Then the sequence of cell images can be represented as an M × 3N matrix:

    X = [ X_1^T X_2^T ... X_M^T ]^T    (9)

The average E of all the color values in the cell image set is subtracted from each element of the matrix X:

    X_a = X − [E]_{M×3N},   E = (1 / 3MN) Σ_{c ∈ {Y,Cr,Cb}} Σ_{i=1}^{M} Σ_{j=1}^{N} x^c_{i,j}    (10)

This ensures that the eigenvector with the largest eigenvalue represents the dimension in eigenspace in which the variance of the images is maximum in the correlation sense. With this M × 3N matrix, we define a 3N × 3N matrix Q, and determine the eigenvectors e_i and the corresponding eigenvalues λ_i of Q by solving the eigenstructure decomposition problem:

    Q = X_a^T X_a    (11)

    λ_i e_i = Q e_i    (12)

At this point, the eigenspace of Q is a high dimensional space of 3N dimensions. Although 3N dimensions are necessary to represent each cell image in an exact manner, a small subset of them is sufficient to describe the principal characteristics and to reconstruct each cell image with adequate accuracy. Accordingly, we extract k (k ≪ 3N) eigenvectors which represent the original eigenspace adequately; by this process, we can substantially compress the image set. The k eigenvectors can be chosen by sorting the eigenvectors by the size of the corresponding eigenvalues, and then computing the eigenratio (Eq.13):

    (Σ_{i=1}^{k} λ_i) / (Σ_{i=1}^{3N} λ_i) ≥ T,  where T ≤ 1    (13)

Using the k eigenvectors {e_i | i = 1, ..., k} (where each e_i is a 3N × 1 vector) obtained by the process above, each cell image can be projected onto the eigenspace composed by the matrix Q by projecting each matrix X_a, and the projection of each cell image can be described as an M × k matrix G:

    G = X_a × V,  where V = [ e_1 e_2 ... e_k ]

To put it concisely, the input color image sequence is converted into a set of cell image sequences, and each sequence of cell images is stored as the matrix V, which is the subset of the eigenvectors of Q, and the matrix G, which is the projection onto the eigenspace.
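The compression and synthesis steps of Eqs. 8–15 can be sketched with a toy cell-image sequence. The data are synthetic: a single base texture under varying brightness, which is exactly the situation where a few eigenvectors suffice:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 30, 64                      # poses, and pixels per cell (3N = 192)
base = rng.random(3 * N)           # one base texture, flattened as in Eq. 8
brightness = 0.5 + 0.5 * np.cos(2 * np.pi * np.arange(M) / M)
X = np.outer(brightness, base) + 0.003 * rng.standard_normal((M, 3 * N))  # Eq. 9

E = X.mean()                       # Eq. 10: average of all color values
Xa = X - E
Q = Xa.T @ Xa                      # Eq. 11
lam, e = np.linalg.eigh(Q)         # Eq. 12 (eigenvalues in ascending order)
lam, e = lam[::-1], e[:, ::-1]     # sort by descending eigenvalue

ratios = np.cumsum(lam) / lam.sum()            # Eq. 13: eigenratio
k = int(np.searchsorted(ratios, 0.999) + 1)    # smallest k reaching the threshold

V = e[:, :k]                       # 3N x k matrix to store
G = Xa @ V                         # M x k projections to store

R = G @ V.T + E                    # Eq. 15: synthesized cell images
assert k < 3 * N and np.abs(R - X).max() < 0.05   # heavy compression, small error
```

Only V and G (plus the scalar mean) need to be stored per cell, which is where the compression ratio of Eq. 14 comes from.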

3. Implementation

We have implemented the system described in the previous section, and have applied the Eigen-Texture method to real objects.

3.1. System Setup

For the experiments, we attach the object to a rotary table, and use light sources fixed in the world coordinate to illuminate the object. A sequence of range images is taken through a laser range finder (PULSTEC TDS-1500), incrementing the rotation angle of the rotary table for each step; 30° by each step was used in this experiment. A sequence of color images is taken by a SONY 3 CCD color camera, incrementing the rotation angle as well, but with a rotation interval smaller than that of the range image sequence; a step of 3° was used for the first experiment. After the range images are converted into triangular mesh models [13, 15], they are merged and simplified to compose a triangular mesh model which represents the 3D shape of the object.

3.2. Cell-adaptive Dimensional Eigenspace

Determining the number of dimensions of the eigenspace in which the sequences of cell images are stored is a nontrivial issue, as it has significant influence on the quality of the synthesized images. According to the theory of photometric stereo[16] and Shashua's trilinear theory[12], three dimensions are enough for compressing and synthesizing the appearance of an object with a Lambertian surface. However, as the reflection of most general real objects cannot be approximated by a simple Lambertian reflection model, due to nonlinear factors such as specular reflection and self shadow, a three dimensional eigenspace is not appropriate to store all the appearance change of each cell. Also, the quality of the geometric model has a serious impact on the number of dimensions of eigenspace necessary to precisely synthesize the image: the simpler the construction of the geometric model, the farther the triangular patches stray from a close approximation of the object's real surface, the lower the correlation of the cell images becomes, and the higher the eigenspace dimensions that are needed. Furthermore, the number of dimensions each cell requires to compress its sequence of input images and to reconstruct the synthesized images differs according to whether its sequence contains highlights or self shadows, and according to the size of its triangular patch.

With regard to these points, we determined the number of dimensions of the eigenspace independently for each cell, so that each could be synthesized precisely. We computed the eigenratio with Eq.13, and used the first k eigenvectors whose corresponding eigenvalues satisfied a predetermined threshold of the eigenratio. This cell-adaptive dimension method can be implemented because our method deals with the sequence of input images as a set of small segmented cell image sequences. Figure 4 describes the relation between the eigenratio, the error per pixel, and the number of dimensions of the eigenspace; here the eigenratio is the ratio of the eigenvalues in the whole summation of them (Eq.13), and the error per pixel describes the average difference of the pixel values, in 256 graduation, between the synthesized and real images. The eigenratio is in inverse proportion to the error per pixel, so it is reasonable to threshold the eigenratio to determine the number of dimensions of the eigenspace. The number of dimensions required to compress the sequence of input images and to reconstruct the synthesized images can thus be optimized by using these cell-adaptive dimensions.

Figure 4. Graph of the eigenratio and the error per pixel versus the number of dimensions of the eigenspace.

As we described in Eq.9, each sequence of cell images corresponds to one M × 3N matrix X, and is stored as a 3N × k matrix V and an M × k matrix G, so that the compression ratio becomes that described in Eq.14:

    compression ratio = k (M + 3N) / (3MN)    (14)

Each synthesized cell image can be computed by Eq.15; a virtual object image of one particular pose (pose number m) can be synthesized by aligning each corresponding cell appearance R_m to the 3D model:

    R_m = Σ_{i=1}^{k} g_{m,i} e_i^T + [E E ... E]    (15)
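The cell-adaptive choice of k and the resulting compression ratio of Eq. 14 can be sketched as follows. The two cells are synthetic stand-ins, a brightness-only "Lambertian" cell and a cell with a few highlight frames, and the sizes and thresholds are illustrative:

```python
import numpy as np

def dims_for_cell(X, threshold=0.999):
    # Number of eigenspace dimensions one cell sequence needs (Eqs. 10-13).
    Xa = X - X.mean()
    lam = np.linalg.eigvalsh(Xa.T @ Xa)[::-1]   # descending eigenvalues
    ratios = np.cumsum(lam) / lam.sum()
    return int(np.searchsorted(ratios, threshold) + 1)

def compression_ratio(k, M, N3):
    # Eq. 14: the stored data is a 3N x k matrix V plus an M x k matrix G.
    return k * (M + N3) / (M * N3)

rng = np.random.default_rng(2)
M, N3 = 30, 192
base = rng.random(N3)
lambertian = np.outer(0.5 + 0.4 * np.sin(np.linspace(0, 6, M)), base)
specular = lambertian.copy()
specular[10:14] += rng.random((4, N3))          # a few highlight frames

noise = 1e-4 * rng.standard_normal((M, N3))
k_lam = dims_for_cell(lambertian + noise)
k_spec = dims_for_cell(specular + noise)
assert k_lam < k_spec                           # highlights demand more dimensions
assert compression_ratio(k_lam, M, N3) < compression_ratio(k_spec, M, N3) < 1.0
```

This mirrors the observation above: cells whose sequences contain highlights or self shadows need more dimensions, while smooth cells compress with very few.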

Furthermore. we took thirty images of a real ob- in Figure 5. face like the raccoon in Figure 5. cent ordinary laser range finders return the power of laser 0-7695-0164-8/99 $10. By synthe- 3.3:1 5. This is caused by the difference of data types: the cell image sequences derived from the input 4. The values of this essential tory realistic. the pro. the threshold of the eigenratio for each cell.999 as accomplished in these eigenspaces. so that synthetic from the number of dimensions. polated projection points for each position of the real object face properties. higher compression ratio. it causes loss in compression ratio.34:1 3. that are not cap- sion ratio of the cell image sequences. We are now investigating a method to Table 1. we are ages of an object from a sequence of range and color im- working on implemention of vector quantization of com. the synthesized results would be satisisfac- real data size on the computer. while the data to be stored in eigenspaces. i. Taking this into pression ratio described in Table 1 is also the compression account. The square points illustrated in Figure 6 in- The average number of dimensions of eigenspace used. for 3◦ degrees rotation by each step. highlights. It is impossible to interpolate The compression ratio defined in Eq. Compression Ratio sizing images using these interpolated projections.00 (c) 1999 IEEE . interpolation between each input image can be Figure 5 shows the images synthesized by using 0.54:1 the cell images are determined to a fixed number with regard Essential compression ratio 8. Al- (MB) (357:42) (319:96) though this enables us to synthesize virtual images in higher Average error per pixel 1. To accomplish the alignment proce- In this paper. The number of dimensions. ject as the input image sequence by rotating the object at tinguishable from the input images shown in the left side. it can be applied to objects with rough sur.56 18. Yet. 
This cell-adaptive dimen- sion method can be implemented because our method deals Once the input color images are decomposed into a set with the sequence of input images as with small segmented of sequences of cell images. determine the resolution of cell images adaptively to thier pression ratio and the average error per pixel corresponding triangular patch size. we have proposed the Eigen-Texture jections and eigenvectors. Conclusions and Future Work color image sequence are represented with unsigned char type.4.3. but its value is computed from the in eigenspaces.e. we are not computing the compression ratio dure of the 3D model and color images.14 is the compres- the appearance features. mesh generation of the 3D model with regards to the color attributes on its surface is under our consideration. along with improvement on its constraint on the im- pressed stored data. are represented with f loat type. age acquiring system. for each eigenvector and obtained the interpolated projec- tion points denoted by plus marks in Figure 6. the size 3. ages. eigenspaces. the size of each cell image images interpolated from very sparse input sequence tend and the number of input color images. on the whole. construct the synthesized images can be optimized by using these cell-adaptive dimensions. the dicate projections of each cell image in eigenspace corre- compression ratio and the average error per pixel are sum. which is difficult with we interpolated the projection of the input image sequence model-based methods. object images whose pose were not captured in the input color im- ages could be synthesized. As can be seen As an experiment. As a practical matter. typical objects with shiny surfaces like the cat in ratio between the cell image sequence and the data size to Figure 5 could be compressed about 80:1 with interpolation be stored in eigenspaces.14. the resolution of Compression ratio 15. 
alignment of 3D model and in eigenspaces because an accurate comparison cannot be color images by maximizing the mutual information. the com. the results described in the right side are indis.32:1 to the largest triangular patch on the 3D model surface. Because our method does not assume these projected points in the eigenspace. to derive of the results. By interpolating marized in Table 1. compression ratio is relatively lower than the compression ratio computed by Eq.2 quence into a set of cell image sequences. we use the method between the input image sequence and the data size stored originally designed by Viola.34 resolution compared to the input images. which is computed tured in the input color image sequence. As re- accomplished due to the resolution problem of cell images.46 1. The essential com- to lack those photorealistic characterisitcs. Cat Raccoon When Eigen-Texture method converts the input image se- Average number of dimensions 6. method as a rendering method for synthesizing virtual im- To avoid this loss in compression due to data types. we obtained inter- any reflectance model and does not have to analyze the sur. sponding to each pose of the real object. Interpolation in Eigenspace of the database can be reduced. 12◦ by step. In this paper. and projected onto thier own cell images.
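Interpolation in eigenspace can be sketched as follows. Everything here is a synthetic stand-in: a smooth, invented projection trajectory G, and an orthonormal matrix in place of the stored eigenvectors V (the mean term of Eq. 15 is omitted):

```python
import numpy as np

rng = np.random.default_rng(3)
M, N3, k = 30, 192, 3
angles = np.linspace(0.0, 2 * np.pi, M, endpoint=False)   # input pose angles

# Invented smooth projection trajectory of one cell (rows of G), and an
# orthonormal stand-in for the stored eigenvectors V.
G = np.stack([np.cos(angles), np.sin(angles), 0.3 * np.cos(2 * angles)], axis=1)
V = np.linalg.qr(rng.standard_normal((N3, k)))[0]

def interp_projection(theta):
    # Interpolate each eigenspace coordinate at an unseen pose angle theta.
    return np.array([np.interp(theta, angles, G[:, i]) for i in range(k)])

theta_new = 0.5 * (angles[4] + angles[5])     # pose between two captured poses
g_new = interp_projection(theta_new)          # interpolated projection point
cell_new = g_new @ V.T                        # synthesized cell appearance (3N,)

true_g = np.array([np.cos(theta_new), np.sin(theta_new), 0.3 * np.cos(2 * theta_new)])
assert np.linalg.norm(g_new - true_g) < 0.02  # stays close to the true trajectory
```

When the projection points trace a smooth curve in eigenspace, as in this sketch, even simple per-coordinate interpolation lands near the true trajectory; a sparse input sequence breaks this smoothness, which is why sharp effects such as highlights are lost.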

Figure 5. Left: Input color images. Right: Synthesized images (by using cell-adaptive dimensional eigenspaces).

Figure 6. G plotted in eigenspace (the first three eigenvectors are depicted).

4. Conclusions and Future Work

In this paper, we have proposed the Eigen-Texture method as a rendering method for synthesizing virtual images of an object from a sequence of range and color images. To accomplish the alignment of the 3D model and the color images, we use the method originally designed by Viola: alignment by maximizing the mutual information between them. With this improvement on the alignment problem, along with improvement on its constraint on the image acquiring system, our method has gained flexibility in application as well as the ability to be applied to a wide variety of objects and a high compression ratio. As recent ordinary laser range finders return the power of laser reflectance at each point scanned by the laser, we use this information to raise the robustness of the alignment.

As a practical application of our method, we are planning to scan heritage objects in Japan and Asia, such as the buddha in Kamakura city, Japan, and thereby archive their geometric and photometric properties for the preservation of historic resources. By using recent products in the field of laser range finders (Footnote 2), it is becoming relatively easy to obtain accurate range data of large statues and buildings. Figure 7 shows an intensity image and two range images with the reflectance power attribute of the 13m tall buddha sitting in the city of Kamakura.

(Footnote 2: We acquired the range images of the buddha by a time-of-flight laser range scanner Cyrax2400, a product of Cyra Technologies, Inc.)

Figure 7. A color image and range image with the reflectance attribute of the huge buddha in Kamakura city.

5. Acknowledgements

This work was partially supported by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Scientific Research (B) 09450164. Special thanks to CAD Center Corporation and Koutokuin for their help in scanning the huge buddha.

References

[1] S. Chen. Quicktime VR: An image-based approach to virtual environment navigation. In Computer Graphics Proceedings, ACM SIGGRAPH 95, pages 29-38, Aug. 1995.
[2] S. Chen and L. Williams. View interpolation for image synthesis. In Computer Graphics Proceedings, ACM SIGGRAPH 93, pages 279-288, Aug. 1993.
[3] S. Gortler, R. Grzeszczuk, R. Szeliski, and M. Cohen. The Lumigraph. In Computer Graphics Proceedings, ACM SIGGRAPH 96, pages 43-54, Aug. 1996.
[4] K. Ikeuchi and K. Sato. Determining Reflectance Properties of an Object Using Range and Brightness Images. IEEE Trans. Pattern Analysis and Machine Intelligence, 13(11):1139-1153, 1991.
[5] M. Levoy and P. Hanrahan. Light Field Rendering. In Computer Graphics Proceedings, ACM SIGGRAPH 96, pages 31-42, Aug. 1996.
[6] A. Matsui and K. Chihara. Composition of Real Light and Virtual Light based on Environment Observation and KL-expansion of Multiple Lightsource Image. Technical Report of IEICE, PRMU97-115, 97(324):29-36, 1997.
[7] K. Nishino, Y. Sato, and K. Ikeuchi. Eigen-Texture Method: Appearance Compression based on 3D Model. In Proc. of Computer Vision and Pattern Recognition '99, volume 1, pages 618-624, Jun. 1999.
[8] K. Pulli, M. Cohen, T. Duchamp, H. Hoppe, L. Shapiro, and W. Stuetzle. View-based Rendering: Visualizing Real Objects from Scanned Range and Color Data. In Proceedings of 8th Eurographics Workshop on Rendering, Jun. 1997.
[9] K. Sato and S. Inokuchi. Range-imaging system utilizing nematic liquid crystal mask. In First International Conference on Computer Vision, pages 657-661, 1987.
[10] Y. Sato, M. Wheeler, and K. Ikeuchi. Object shape and reflectance modeling from observation. In Computer Graphics Proceedings, ACM SIGGRAPH 97, pages 379-387, Aug. 1997.
[11] Y. Sato and K. Ikeuchi. Temporal-color space analysis of reflection. Journal of the Optical Society of America, 11(11):2990-3002, Nov. 1994.
[12] A. Shashua. Geometry and Photometry in 3D Visual Recognition. PhD thesis, Dept. of Brain and Cognitive Science, MIT, 1992.
[13] G. Turk and M. Levoy. Zippered polygon meshes from range images. In Computer Graphics Proceedings, ACM SIGGRAPH 94, pages 311-318, Jul. 1994.
[14] P. Viola. Alignment by Maximization of Mutual Information. PhD thesis, MIT AI Lab, 1995.
[15] M. Wheeler, Y. Sato, and K. Ikeuchi. Consensus Surfaces for Modeling 3D Objects from Multiple Range Images. In Sixth International Conference on Computer Vision, pages 917-924, 1998.
[16] R. Woodham. Photometric Stereo: A Reflectance Map Technique for Determining Surface Orientation from a Single View. In Image Understanding Systems and Industrial Applications, Proc. SPIE 22nd Annual Technical Symp., volume 155, pages 136-143, 1978.
[17] Z. Zhang. Modeling Geometric Structure and Illumination Variation of a Scene from Real Images. In Sixth International Conference on Computer Vision, pages 1041-1046, 1998.