Reference to this paper should be made as follows: Ballester, A., Parrilla, E., Piérola,
A., Uriel, J., Pérez, C., Piqueras, P., Nácher, B., Vivas, J. A., and Alemany, S. (2016)
‘Data-driven three-dimensional reconstruction of human bodies using a mobile phone
app’, Int. J. Digital Human, Vol. 1, No. 4, pp.361–388
Biographical notes:
Alfredo Ballester is a Research Engineer at IBV. His research interests include 3D
body scanning, digital human models, ergonomics and product design.
Dr. Eduardo Parrilla is a Research Engineer at IBV. His research interests include 3D
body scanning, image processing and computer graphics applied to anthropometry
and movement analysis.
Ana Piérola is a Data Scientist at IBV. Her research interests include statistical
analysis and data modelling of anthropometric data.
Jordi Uriel is a Research Engineer at IBV. His research interests include 3D computer
graphics.
Cristina Pérez is a Data Scientist at IBV. Her research interests include statistical
analysis of anthropometric data and image processing.
Paola Piqueras is a Research Engineer at IBV. Her research interests include analysis
of anthropometric data and its application to ergonomic design.
Julio A. Vivas is a Software Developer at IBV. His research interests include the
development of user interfaces and prototype mobile applications and webservices
related to anthropometric and ergonomic data management.
1 Introduction
Access to reliable anthropometric information of a person (i.e. body dimensions and 3D
shape) has multiple applications in different industries, in particular in those related to health
and to the ergonomic design of wearable products such as clothes, workwear and orthotics,
among others (Dekker et al., 1999; D’Apuzzo, 2006; Lin et al., 2002; Treleaven and Wells,
2007; Wang et al., 2006). Some of the applications to wearables are custom-made products,
size recommendations, fitting predictions and virtual try-on simulations (Gill, 2015). Such
applications constitute a critical ingredient for the digital transformation of these industries,
namely for changing how these products are sold, produced and distributed.
Nowadays, there is a wide choice of 3D whole body scanners using different acquisition
technologies (Daanen and Ter Haar, 2013; Daanen and van de Water, 1998) at prices ranging
from tens to hundreds of thousands of dollars (e.g. 3DMDbody, Botspot, Ditus, Fit3D,
IIIDbody, Intellifit, Shapify Booth, SizeStream, Styku, Symcad, Texel, [TC]2, Vitus, etc.).
These products are mainly intended for professional or retail use and come with specialised
3D data processing software adapted to specific uses (e.g. body modelling, feature detection,
body measurement extraction, single surface reconstruction, etc.). Since the release of
software for the extraction of body measurements from 3D scans, its reliability has been
quantified in different studies addressing its precision (Bradtmiller and Gross, 1999; Dekker,
2000; Lu and Wang, 2010; Robinette and Daanen, 2006) and its accuracy compared to
traditional measuring methods (Bradtmiller and Gross, 1999; Dekker, 2000; Dekker et al.,
1999; Han et al., 2010; Kouchi, 2014; Kouchi and Mochimaru, 2005; Lu and Wang, 2010; Lu
et al., 2010; Paquette et al., 2000). Some of these studies gave rise to the compatibility
specifications between the two methods included in ISO 20685:2010.
Despite the potential benefits that individual 3D information may bring to the e-commerce
(off-the-shelf and made-to-measure orders) of wearable products, 3D body scanners have not
yet been widely used as consumer goods or as typical in-store appliances in small brick-and-
mortar retail outlets because of their cost, which is beyond the price of the most common
home appliances, the dedicated space they need, which is typically more than 2x2x2m, and, in
the case of off-the-shelf products, due to the lack of reliable product size guides and
prediction software (Alemany et al., 2013).
Over the past few years, the appearance of depth sensor peripherals for laptops, game stations,
televisions and tablets (e.g. Kinect, Realsense, Xtion, Structure, etc.) has brought 3D scanning
closer to home users. Yet, these solutions are intended for general purpose and do not include
specialised software for getting the anthropometric information required to create custom-
made products, make size suggestions, predict fitting or virtually simulate the try-on of
wearables (i.e. body dimensions and 3D models).
On the other hand, the availability of large-scale anthropometric surveys using 3D body
scanning technologies (Alemany et al., 2010; Ballester et al., 2015b; Bong et al., 2014;
Bougourd, 2005; Charoensiriwath and Tanprasert, 2010; Cools et al., 2014; Gordon et al.,
2011, 2015; Istook, 2008; Kulkarni et al., 2011; Robinette et al., 1999; Seidl et al., 2009;
Shu et al., 2015) has enabled the use of data-driven technologies for the prediction of 3D body
shapes from partial body data inputs such as 1D measurements (Allen et al., 2003; Hasler et
al., 2009; Seo and Magnenat-Thalmann, 2003; Wuhrer and Shu, 2012) or 2D images (Chen
and Cipolla, 2009; Guan et al., 2009; Hasler et al., 2010; Parrilla et al., 2015; Saito et al.,
2011).
According to Bradtmiller and Gross (1999), the reliability of the 1D measurements (e.g. waist
girth, hip girth, chest girth or arm length) required by the fashion industry for sizing and
fitting applications is 0.6-1.3cm. Achieving this requires certain skills (Gordon et al., 1989;
Lohman et al., 1992) that an untrained user following basic instructions regarding how to take
self-measurements at home does not typically have. According to Yoon and Robert (1994),
self-measurements for clothing orders have an average absolute error ranging from 2 to 6 cm,
depending on the measurement. These values are in line with the extensive review of Verweij
et al. (2013) on the measurement error of waist circumference for health applications, which
concluded that it can be up to 15cm. In contrast, 2D images can be obtained using a wide
range of electronic consumer goods that most users already have at home (e.g. digital
cameras, phones, laptops, tablets, televisions, etc.) and they just need basic skills and
instructions to achieve the right 3D reconstruction. Moreover, shape information contained in
2D images is richer than 1D measurements because it includes shape and posture information
provided by body outlines (e.g. shape of hips, legs or shoulders, and the belly, cervical, dorsal
and lumbar curves among others).
However, approaches based on such partial inputs cannot fully reconstruct shape
information, with some useful information contained in the silhouettes being missed.
This paper describes a method employed for the 3D reconstruction of human body shapes—in
particular, children—from two images of a person taken with a smartphone. It uses a database
of registered 3D body scans that is parameterised in shape and posture using PCA. The
purpose of our study was to demonstrate that it is feasible to obtain consistent anthropometric
information (i.e. body dimensions and 3D shapes) that could be used by a non-expert user at
home using a widely available electronic consumer device, i.e. smartphone or tablet equipped
with a camera. Our method does not require prior camera calibration. It is possible to estimate
an initial calibration from the smartphone sensors and the person’s reported height, which can
be optimised during the reconstruction process. Section 2 of this paper describes the proposed
method for the 3D reconstruction process from 2D images. Section 3 describes the results
obtained in three validation studies including synthetic bodies, 1:10 scale mini-figurines and
real subjects. Section 4 discusses our results in relation to other studies and Section 5 includes
the conclusions.
2 Proposed method
The proposed method is based on segmenting the body outlines from two images of a person
and then optimising a parameterised body shape model until the outlines of its projections
match the segmented outlines from the images. This process consists of five steps (Figure 1).
The following subsections explain in detail the parametrisation of the 3D body model used
and the five steps of the process.
Figure 1. Steps in the proposed method for the 3D reconstruction of human bodies from 2D images,
indicating the main inputs and outputs of each step, including outcomes (in bold) and by-products (in italics)
A database of 761 children aged 3 to 12 years in standing posture was gathered using a
Vitus XXL scanner from Vitronic. For each of the scans, 35 body landmarks were
interactively identified using Anthroscan software from Human Solutions. All of the scans
were registered to a common parameterisation using an adaptation of different template fitting
approaches (Allen et al., 2003; Amberg et al., 2007; Sumner and Popović, 2004). The
parameterised set of scans is homologous (Bookstein, 1991) and shares the same topology,
which means that they have the same number of vertices/faces and that the vertices/faces have
one-to-one correspondence (Figure 2). The template mesh model used consisted of circa 50k
vertices and 99k faces.
Figure 2. Close up images comparing the raw scan mesh (left) with the registered mesh (right)
The parameterised database should be in the same posture as the theoretical posture that users
should follow for the front photograph. In our case, subjects adopted a standing posture as
described in ISO 20685:2010, with their feet parallel and hip-width apart, their arms extended
and open at ~60º, and fists closed with the hand dorsum pointing outwards.
Generalised Procrustes Analysis (GPA) (Gower, 1975) was applied to the resulting database to
obtain a rigid alignment using rotations and translations. PCA was then applied to the aligned
database to obtain a parameterised shape model where each body shape was represented by a
vector of 60 principal components, which explained more than 90% of the total variance. In
this way, we obtained a compact parametric model describing the shape space of children’s
bodies which is able to cope with the shape and posture variability expected from the
photographs.
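As an illustrative sketch (not the production implementation), building such a PCA shape space from registered, Procrustes-aligned meshes can be written with NumPy as follows; the array layout and function names are assumptions:

```python
import numpy as np

def build_pca_shape_model(bodies, n_components=60):
    """Build a PCA shape space from homologous meshes.

    bodies: (n_subjects, n_vertices, 3) array of aligned vertex coordinates.
    Returns the mean shape, the component basis, per-subject scores and the
    fraction of total variance explained by the retained components.
    """
    n, v, _ = bodies.shape
    X = bodies.reshape(n, v * 3)          # flatten each mesh to one row
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centred data matrix yields the principal components
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:n_components]             # (n_components, 3 * v)
    scores = Xc @ basis.T                 # (n, n_components)
    explained = (S[:n_components] ** 2).sum() / (S ** 2).sum()
    return mean, basis, scores, explained

def shape_from_scores(mean, basis, scores):
    """Reconstruct a 3D body (n_vertices, 3) from a PCA score vector."""
    return (mean + scores @ basis).reshape(-1, 3)
```

Any body in the shape space is then fully described by its 60-element score vector, which is what the optimisation step searches over.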
Prior to the picture taking process, the users are provided with illustrated instructions
indicating the right background, lighting conditions, hair and attire, together with some
examples of what not to do.
Front and side outline guides (Figures 3 and 4) are provided to facilitate the picture taking
process for the users (i.e. posture adoption and distance to the photographed user). These
outlines are also used as initial solutions for the segmentation process. The outline guides are
generated using the children’s age, gender and self-reported weight and height. These are used
as inputs for a Partial Least Square (PLS) regression (Geladi and Kowalsky, 1986; Wold,
1985), which relates these four parameters with a PCA of body outlines so that the outline
guides correspond to the actual body proportions of the children.
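The PLS step can be sketched as below. This is a minimal SIMPLS-style implementation for illustration only, with a hypothetical data layout (rows of [age, gender, weight, height] predictors against rows of outline PCA scores); it is not the library used in the study:

```python
import numpy as np

def pls_fit(X, Y, n_latent=4):
    """Minimal partial least squares regression (SIMPLS-style sketch).

    Returns (x_mean, y_mean, coef) such that Y ≈ (X - x_mean) @ coef + y_mean.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    x_mean, y_mean = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - x_mean, Y - y_mean
    W = np.zeros((X.shape[1], n_latent))
    P = np.zeros((X.shape[1], n_latent))
    Q = np.zeros((Y.shape[1], n_latent))
    for a in range(n_latent):
        # weight = dominant left singular vector of the cross-covariance
        u, _, _ = np.linalg.svd(Xc.T @ Yc, full_matrices=False)
        w = u[:, 0]
        t = Xc @ w                        # latent scores
        tt = t @ t
        p = Xc.T @ t / tt                 # X loadings
        q = Yc.T @ t / tt                 # Y loadings
        Xc = Xc - np.outer(t, p)          # deflate
        Yc = Yc - np.outer(t, q)
        W[:, a], P[:, a], Q[:, a] = w, p, q
    coef = W @ np.linalg.solve(P.T @ W, Q.T)
    return x_mean, y_mean, coef

def pls_predict(model, X_new):
    x_mean, y_mean, coef = model
    return (np.asarray(X_new, float) - x_mean) @ coef + y_mean
</imports>```

Predicted outline PCA scores would then be mapped back through the outline basis to produce the guide curves shown to the user.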
Self-reported height was also used as the image calibration parameter in the 3D reconstruction
step. Every time a picture is taken, the Field Of View (FOV) of the phone camera and the
gravity sensor information are recorded along with the image to be used in the optimisation
process.
Figure 3. Outline guides to facilitate the picture taking process for users
Figure 4. From left to right: raw image, raw image with guiding outline, image where the body
outline is identified and the actual body outline used as input for the 3D reconstruction
The body outline is extracted from the front and side photographs using an adaptation of the
Grabcut algorithm (Rother et al., 2009) to segment the body figure from the background. In
this process, the guiding outlines are used as the departing point for the process (Figure 4).
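The guiding outline can seed the segmentation as a label mask. The sketch below rasterises a closed outline into GrabCut-style labels (2 = probable background, 3 = probable foreground, following OpenCV's `GC_PR_BGD`/`GC_PR_FGD` convention) using a vectorised even-odd test; such a mask could then be passed to `cv2.grabCut` with `GC_INIT_WITH_MASK`. This is an illustration under those assumptions, not the authors' adapted algorithm:

```python
import numpy as np

def mask_from_outline(outline, height, width):
    """Rasterise a closed 2D outline ((n, 2) array of (x, y) points) into a
    GrabCut initialisation mask: pixels inside the outline are labelled
    probable foreground (3), all others probable background (2)."""
    ys, xs = np.mgrid[0:height, 0:width]
    inside = np.zeros((height, width), dtype=bool)
    n = len(outline)
    for i in range(n):                     # even-odd ray-casting test
        x0, y0 = outline[i]
        x1, y1 = outline[(i + 1) % n]
        crosses = (ys < y0) != (ys < y1)   # edge spans this scanline
        xi = x0 + (ys - y0) / (y1 - y0 + 1e-12) * (x1 - x0)
        inside ^= crosses & (xs < xi)
    return np.where(inside, 3, 2).astype(np.uint8)
```

Seeding from the guide outline means the iterative foreground/background colour models start close to the true body region, which is why the guides double as initial solutions for the segmentation.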
The focal length of the camera in pixel units is estimated by basic trigonometry using the
camera FOV and the dimensions in pixels of the image. Camera rotation is estimated from the
gravity sensor of the phone, and the camera position is estimated from the height provided by
the person and assuming that the camera is pointing to the centre of the person. These
parameters along with the body outlines are used as inputs for the 3D reconstruction process.
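The calibration estimates described above reduce to a few lines of trigonometry; the helper names below are hypothetical:

```python
import math

def focal_length_px(fov_deg, size_px):
    """Focal length in pixels from the camera FOV and the image size along
    the same axis: f = (size / 2) / tan(FOV / 2)."""
    return (size_px / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

def camera_tilt_deg(gravity):
    """Camera pitch relative to the vertical, estimated from the phone's
    3-axis gravity sensor reading (gx, gy, gz); 0° for an upright phone
    held in portrait orientation (assumed axis convention)."""
    gx, gy, gz = gravity
    return math.degrees(math.atan2(gz, math.hypot(gx, gy)))
```

For example, a 90° vertical FOV over a 1080-pixel image height gives a focal length of 540 px; the tilt feeds the initial rotation estimate, and the reported height fixes the translation scale.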
The optimisation consists of searching the PCA shape space and finding the 3D body shape
which best matches the two body outlines by minimising the distance between the projected
outlines of the 3D model and the target outlines extracted from the 2D images (Figures 5-6).
The 3D reconstruction departs from an estimated body shape obtained from the PCA using
PLS regression from age, gender, weight and height. Optimisation is conducted by iteratively
modifying the PCA scores, scale and the extrinsic camera parameters (i.e. rotation and
translation). At each iteration, a simple projection matrix is computed for each view using the
focal length, the image size and the extrinsic camera parameters. By using this projection
matrix, the frontal and sagittal 2D outlines are obtained from the current 3D body and the
distance from the actual outlines to the projected ones is calculated. To make the process
faster, within every iteration, the vertices that describe the outline of the projected shape are
computed, and then distances are minimised using explicit gradients (Zhu et al., 1997). After
several minimisations, the vertices that described the outline of the projected shape will no
longer describe it accurately, and therefore, a new set of vertices defining the outlines is used
in the next iteration.
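The structure of this search can be illustrated with a deliberately simplified toy: orthographic instead of perspective projection, all vertices instead of outline vertices only, fixed camera parameters, and plain gradient descent with an explicit gradient in place of the quasi-Newton solver of Zhu et al. (1997). It is a sketch of the idea, not the method itself:

```python
import numpy as np

def reconstruct_scores(mean, basis, target_front, target_side,
                       n_iter=300, lr=0.1):
    """Find PCA scores whose body, projected orthographically to the front
    (x, y) and side (z, y) planes, matches the target 2D point sets."""
    scores = np.zeros(basis.shape[0])
    for _ in range(n_iter):
        body = (mean + scores @ basis).reshape(-1, 3)
        res_front = body[:, [0, 1]] - target_front   # frontal residual
        res_side = body[:, [2, 1]] - target_side     # sagittal residual
        # explicit gradient of the summed squared projection distances
        grad_body = np.zeros_like(body)
        grad_body[:, [0, 1]] += 2 * res_front
        grad_body[:, [2, 1]] += 2 * res_side         # y is shared by both views
        scores = scores - lr * (basis @ grad_body.reshape(-1))
    return scores
```

In the real pipeline the set of vertices forming the projected outline is recomputed between minimisations, since after a few steps the old outline vertices no longer describe the silhouette accurately.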
In order to facilitate convergence, several body features are automatically identified and used
to guide the process (i.e. 26 in the frontal projection and 8 in the sagittal projection; Figures 5
and 6 respectively). The relative weight of the landmarks, in relation to the rest of the points
constituting the outline, descends at every iteration. The process converges after around ten
iterations.
Once the 3D bodies are obtained, a set of 36 body dimensions, commonly used in wearable
product design, are computed. The body dimensions are obtained using a digital body
measuring tape developed to benefit from the homology of the parameterised meshes in
combination with geometrical searches such as finding minimum, maximum or average
coordinates, curvature, concavity or convexity in the surface, or prominent points in specific
projections among others. The measurement definitions were implemented according to ISO
8559:1998 and ISO 7250-1:2008.
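Because the meshes are homologous, many of these measurements reduce to geometry over fixed vertex indices of the template. The sketch below shows the principle for a girth (length of a fixed ring of vertices) and a landmark-to-landmark length; the index rings and landmark indices are assumed to come from the template definition:

```python
import numpy as np

def girth_from_loop(vertices, loop_idx):
    """Length of the closed polyline through a fixed ring of vertex indices.
    Since all registered meshes share one topology, the same index ring
    measures the same anatomical girth on every subject."""
    ring = vertices[loop_idx]
    return float(np.sum(np.linalg.norm(np.roll(ring, -1, axis=0) - ring, axis=1)))

def landmark_distance(vertices, idx_a, idx_b):
    """Straight-line distance between two homologous landmark vertices,
    e.g. for heights or widths."""
    return float(np.linalg.norm(vertices[idx_a] - vertices[idx_b]))
```

Curvature- and extremum-based searches mentioned above would refine where such rings and landmarks sit on each individual mesh.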
Figure 7. Examples of registered body models measured with the digital body measuring tape
3 Experimental studies
Three experimental studies were conducted. In the first, the accuracy of the method was
assessed using synthetically generated body shapes. In the second experiment, the precision of
the method was evaluated by repeatedly reconstructing and measuring several physical
manikins, following a procedure similar to those used by Dekker (2000), Lu and Wang (2010),
and Robinette and Daanen (2006).
Finally, real children were reconstructed to determine the accuracy of the method compared to
a high-resolution full body scanner.
Prior to the launch of the experimental studies, a phone/tablet app prototype for Android 4.2+
was implemented. The app included a user interface for entering input data (age, gender,
weight and height) and showed instructions for the picture taking process. The image
segmentation algorithm was also implemented inside the app. A prototype webservice was
implemented for computing the remote 3D reconstruction. The app was temporarily uploaded
to Google PlayTM to facilitate its update and distribution among the testers.
Figure 8. Sample of synthetically created body shapes of children for the experimental study
Using the PCA shape space of children, 165 synthetic 3D bodies of children were generated
ensuring that the space of shapes was fully represented (Figure 8). Frontal and sagittal
outlines were obtained using a projection matrix computed from typical mobile phone camera
parameters. The measurements obtained from the reconstructed bodies were compared to the
actual measurements of the synthetic models using the same digital measuring tape method.
To assess the error of our method, we used the mean differences (MD) and mean absolute
differences (MAD) as proposed by Gordon et al. (1989). MAD quantifies the accuracy while
MD provides richer information about the bias and dispersion (confidence intervals) of the
measurement errors. MD, MAD and relative MAD (MADrel) are defined as
$$\mathrm{MD}\ (\mathrm{mm}) = \frac{1}{n}\sum_{i}\left(m^{i}_{rec} - m^{i}_{syn}\right),$$

$$\mathrm{MAD}\ (\mathrm{mm}) = \frac{1}{n}\sum_{i}\left|m^{i}_{rec} - m^{i}_{syn}\right|,$$

$$\mathrm{MAD}_{rel} = \frac{1}{n}\sum_{i}\frac{\left|m^{i}_{rec} - m^{i}_{syn}\right|}{m^{i}_{syn}}.$$
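In code, these metrics are one-liners over paired measurement arrays (a sketch; the function name is assumed):

```python
import numpy as np

def error_metrics(m_rec, m_ref):
    """MD, MAD and relative MAD between reconstructed measurements and
    reference values (synthetic models here, body scans in the third study).
    Inputs are arrays of one measurement over n bodies, in mm."""
    m_rec = np.asarray(m_rec, float)
    m_ref = np.asarray(m_ref, float)
    d = m_rec - m_ref
    md = d.mean()                          # bias
    mad = np.abs(d).mean()                 # accuracy
    mad_rel = np.mean(np.abs(d) / m_ref)   # scale-free accuracy
    return md, mad, mad_rel
```

Note that MD can be near zero while MAD is large (errors that cancel), which is why both are reported.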
Table 1 summarises the results of the MD and MAD for the 36 measurements of the 3D
reconstruction of synthetic models. The results are compared to the synthetic reconstructions
obtained by Boisvert et al. (2013) and show that the MAD is below 17 mm for all
measurements and the relative MAD is below 5%. The MD is within ±11 mm in 35 body
measurements and only exceeds this value for 7CV to wrist length (17 mm). Figure 9 shows
the MD and its confidence interval (CI) at 95% for a set of nine measurements covering
different portions of the body relevant for garment design, construction and fit. The CI of the
MD for all the body measurements is significantly small at 1-3 mm.
Table 1. Accuracy of the measurements obtained from the 3D reconstructions of synthetic models
using the proposed method, compared to the results obtained by Boisvert et al. (2013).
MD ± half of the CI at 95%. MD and MAD in millimetres (mm)
The synthetic and reconstructed body shapes were also compared qualitatively. Figure 10
compares six synthetic models of different gender, age and body mass index (BMI) with their
reconstructions. It shows that the 3D reconstructed shapes of the children are perceptually
accurate compared to the synthetic children.
Figure 10. Comparison of six synthetic models and their respective 3D reconstructions: for each pair
of bodies, the one on the left is the synthetic model and the one on the right corresponds to
the 3D reconstruction using the proposed method
This experimental study aimed to determine the precision of the full pipeline of the proposed
method when repeatedly reconstructing exactly the same human shapes in 3D, in order to
remove the influence of subjects’ postural changes and breathing. In similar studies involving
adults, a single life-size manikin is typically used representing the average male or female
proportions. In our case, we aimed to evaluate the precision of the proposed method with
several synthetic shapes of children representing the boundaries of the body shape space.
Since it would have been too expensive for the study to order six life-size manikins with non-
average proportions, their digital models were generated and then manufactured using a
Selective Laser Sintering (SLS) machine from EOS as 1:10 scale mini-figurines (Figure 11). Each of
the figurines was photographed and reconstructed 10 times at a booth with a sharply
contrasting background (Figure 11). Since height is the calibration parameter, the 3D
reconstructed shapes of the children and the measurements were obtained at a 1:1 scale and
not at figurine scale. It should be noted that, in this experiment, the error of the angular
estimation of the projection was 10 times higher due to the scale factor.
Figure 11. 1:10 scale mini-figurines (left), photo booth (centre) and picture taking process (right)
To evaluate the error in measurement estimation, MAD for repeated measurements was
defined as
$$\mathrm{MAD}\ (\mathrm{mm}) = \frac{1}{n}\sum_{i}\frac{1}{\binom{r_i}{2}}\sum_{s=1}^{r_i-1}\sum_{t=s+1}^{r_i}\left|m^{i}_{s} - m^{i}_{t}\right|,$$
where 𝑛 is the number of figurines and 𝑟𝑖 is the number of repetitions for figurine 𝑖. In this
study (10 repetitions), 𝑟𝑖 = 10 for all 𝑖.
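A direct transcription of this definition (a sketch; input layout assumed as one array of repeated values per figurine):

```python
import numpy as np
from itertools import combinations

def repeated_mad(per_figurine):
    """Precision MAD over repeated reconstructions: for each figurine, the
    mean absolute difference over all pairs of repetitions, then the average
    over figurines. `per_figurine` is a list of per-figurine arrays holding
    one measurement value per repetition."""
    fig_means = []
    for reps in per_figurine:
        diffs = [abs(s - t)
                 for s, t in combinations(np.asarray(reps, float), 2)]
        fig_means.append(np.mean(diffs))   # averages over C(r_i, 2) pairs
    return float(np.mean(fig_means))
```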
Table 2 shows the MAD of the figurine reconstructions along with the results of precision for
our high-resolution body scanner (34 children scanned 4 times in slightly different arm
postures) and other results for similar body scanners from the literature (Dekker, 2000; Lu
and Wang, 2010; Robinette and Daanen, 2006). It shows that the absolute MAD is below 8
mm for all the measurements except shoulder width (11 mm), and that the relative MAD is
below 5% for all the measurements except for scye depth (6%), shoulder length (5%),
shoulder width (4%) and bi-nipple distance (4%).
The surface-to-surface average distance per vertex for the 3D reconstructions of each
synthetic model after GPA was calculated (Figure 12). The average MAD per vertex was 2.1
mm. The highest average error per vertex was far below 10 mm and was located around the
crotch landmarks and the tip of the head. The 3D reconstructions of the 6 synthetic body
shapes were also compared visually (Figure 13), showing that 3D reconstructed bodies are
perceptually accurate compared to the synthetic children and the mini-figurines.
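Thanks to the one-to-one vertex correspondence, the per-vertex error map is a direct array computation (a sketch; GPA alignment is assumed to have been applied beforehand):

```python
import numpy as np

def per_vertex_error(reconstructions, reference):
    """Average surface-to-surface distance per vertex: Euclidean distance
    between homologous vertices of each aligned reconstruction and the
    reference shape, averaged over repetitions. Returns one value per vertex,
    suitable for colour-mapping over the mesh as in Figure 12."""
    recs = np.asarray(reconstructions, float)   # (n_rep, n_vertices, 3)
    dists = np.linalg.norm(recs - np.asarray(reference, float), axis=2)
    return dists.mean(axis=0)                   # (n_vertices,)
```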
Table 2. Precision of the proposed method for the repeated measuring of 1:10 scale figurines with
the proposed method compared to results obtained for life size physical manikins (Lu and
Wang, 2010) and subjects (Bradtmiller at al., 1999; Dekker, 2000; Lu and Wang, 2010;
Robinette and Daanen, 2006). MAD in millimetres (mm).
Figure 12. Surface-to-surface average distance per vertex mapped over an average child’s shape
This experimental study aimed to determine the accuracy of the full pipeline of the proposed
method by investigating it under real use conditions. 34 children aged 3 to 12 years old and their
parents participated in this experimental study. The children were scanned with a high
resolution body scanner (Vitus XXL) and measured using our data-driven 3D reconstruction
app. To evaluate the measurement error we used the MD and MAD as proposed by Gordon et
al. (1989):
$$\mathrm{MD}\ (\mathrm{mm}) = \frac{1}{n}\sum_{i}\left(m^{i}_{rec} - m^{i}_{scan}\right);$$

$$\mathrm{MAD}\ (\mathrm{mm}) = \frac{1}{n}\sum_{i}\left|m^{i}_{rec} - m^{i}_{scan}\right|;$$

$$\mathrm{MAD}_{rel} = \frac{1}{n}\sum_{i}\frac{\left|m^{i}_{rec} - m^{i}_{scan}\right|}{m^{i}_{scan}}.$$
Table 3 shows the MD, MAD and MADrel for the 36 measurements. MADrel in real use
conditions is below 7% for all the measurements except for shoulder width (9%), shoulder
length (13%), scye depth (10%) and crotch lengths (10%). Figure 14 illustrates the MD and
the confidence interval at 95% for eight selected measurements. The MD values of these 8
primary measurements lie within ±1 cm. The 3D reconstructed body shapes were also
assessed qualitatively by comparing them with the body scans (Figure 15).
Figure 14. MD and CI95 in mm for a selected set of eight measurements for the real children
Figure 15. For each pair of 3D bodies, the one on the left (golden) is the registered 3D scan of the
child and the one on the right (silver) corresponds to the 3D reconstruction using the
proposed method.
Table 3. Accuracy of the proposed method with real subjects comparing between measurements made
automatically on body shape reconstructed from 2D images and measurements made automatically on
registered body scans of the subjects. Mean difference (MD) and Mean Absolute Difference (MAD)
in millimetres (mm) and Relative MAD (MADrel) expressed as a percentage.
¹ Mean Difference (MD) ± half of the Confidence Interval (CI) at 95%
4 Discussion
This paper presents a new method for the 3D reconstruction of human bodies from two
photographs that can be implemented on a smartphone. Compared to other methods, our 3D
reconstruction does not require an initial calibration because the calibration can be optimised
using the sensor information of the smartphone (orientation and camera parameters).
Moreover, it is applied to the reconstruction of the 3D representations of children and
includes the modelling of the space of body shape. The method was tested with synthetic
models, mini-figurines and real children covering a wide variety of ages, body sizes, and body
shapes.
The study of accuracy with synthetic children enabled us to isolate and draw conclusions on
the optimisation of the 3D reconstruction step (Figure 1). This study revealed some
limitations of the proposed method. The bias in the measurements (Table 1) is probably due to
an insufficient correspondence between the feature points in the 2D images and the landmarks
used in the mesh registration process. This bias could be corrected in order to improve the
accuracy of the results. The highest MD corresponds to the 7CV to wrist length, which is
defined over the surface shoulder and the arm. Artefacts of the learned database related to the
arm roll, elbow flexion and location of the acromion features may have affected the accuracy
of this measurement. Our accuracy study with digitally extracted silhouettes followed a
similar methodology to that used by Boisvert et al. (2013), though we used synthetic models
of children instead of actual scans of adults. Our method provided better MAD results, except
for head girth and shoulder width.
Second, our precision study followed a similar methodology to that used by Lu and Wang
(2010) but used six (child) bodies of different ages situated in the boundaries of the shape
space (Figures 11 and 13) instead of a single average (adult) manikin. We also used 1:10 scale
figurines instead of life-size models. The use of 1:10 scale figurines was possible because the
height of the photographed subject is the image-calibrating element in the 3D reconstruction
process. The measurement errors in this study (Table 2) indicate that the body area that
concentrates the lowest precision is also the shoulder region, as for the study of accuracy with
synthetic children. This tendency is also observed in the MAD values for high-resolution
body scanners, where the less precise measurements are shoulder width (13mm) and scye
depth (7mm). These results are in accordance with the study of landmark errors and accuracy
of traditional methods performed by Kouchi and Mochimaru (2011) where the acromion and
armpits were the regions that concentrated the lowest accuracy values in landmark location
and derived measurements.
We also compared our results to those of other studies of the precision of body measurements
derived from high resolution 3D scanners (Table 2), one with a life-size manikin (Lu and
Wang, 2010) and four with human subjects (Bradtmiller and Gross, 1999; Dekker, 2000;
Lu and Wang, 2010; Robinette and Daanen, 2006). As expected, the MAD values of the nine
comparable body measurements are slightly higher in our experiment. Nevertheless, our
results are very close to those of the reference studies with high resolution scanners, where
differences range from 1mm (cervical height, knee height, and arm length) to 5mm (waist
girth). In addition, the results would be expected to improve with life-size mannequins
because the 1:10 scale of the figurines introduces a 10 times higher camera orientation error in
relation to the orthogonality of front and side views. Due to the variability introduced by
breathing, soft tissue and posture, the error would be expected to increase with real subjects,
as it does from the manikin study of Lu and Wang (2010) to the four studies with human
subjects; namely, by 0-2.5 mm for waist girth, 4-5 mm for chest girth, 0.5-4 mm for hip girth
and 0-3 mm for arm length.
Regarding 3D shape consistency in the figurine study (Figure 12), the regions around the
crotch and the tip of the head are where the highest errors per vertex are concentrated.
areas are more strongly affected by image artefacts due to contrast, lighting conditions and
occlusions. Furthermore, the fitting of the guiding outline is more difficult to achieve here.
The higher surface-to-surface error at the arms, which increases towards the fists (reaching
circa 6 mm), may indicate that the arm limb alignment between the 3D reconstructions is not
optimal due to slight differences in shoulder abduction/adduction. Both surface-to-surface
errors are also affected by the camera orientation error of this study.
Finally, an accuracy study was conducted with real children comparing our method with a
high-resolution 3D scanner (Table 3). The MD values of the 8 selected measurements lie
within ±1 cm, which could be a suitable range for made-to-measure, fit or sizing of wearables.
Table 3 and Figure 14 show a clear bias in the estimation of several measurements, which
could potentially be corrected. This will be especially relevant for those biased measurements that
have lower accuracy such as shoulder width, shoulder length and crotch lengths. Analogously
to the other studies conducted, the shoulder region showed poorer results. This effect might be
increased in this study because the posture in the side photograph is slightly different to the
posture in the front one and in the body shape model (and thus in frontal and side projections
during optimisations) introducing a slight matching error due to differences in shoulder
posture (adduction/abduction, flexion/extension and rotation).
Since the calibrating parameter of the process is self-reported height, users were warned in the
prototype app that the reliability of the reconstruction depends on the accuracy of the height
value they introduced. In our experiment with children, no statistically significant differences
were found between the children’s height reported by parents and their height measured by an
expert. However, for other population groups, if the input height is biased, it should be
corrected.
The results of the running time of the 3D reconstruction algorithms (circa 25 seconds) showed
that our method is computationally efficient. Nevertheless, our implementation is not
optimised for efficiency and it still has a margin of improvement, for instance by reducing the
resolution of the template mesh used.
5 Conclusions
The data-driven 3D reconstruction of human bodies using a mobile phone app has been
proposed, implemented and demonstrated. Moreover, we determined the precision and the
accuracy (compared to a high-resolution 3D body scanner) of the prototype app.
This work has proved that it is feasible to provide realistic and perceptually accurate 3D
reconstructions of full bodies of children using a mobile phone app equipped with image
processing and 3D reconstruction software.
Despite the fact that the precision is slightly lower than that of high-end body scanners, it
can be acceptable for applications such as size recommendation, bespoke and made-to-
measure wearables (e.g. clothes, protective equipment or orthotics). Additional
experimental studies will however be necessary to determine the precision of the proposed
method with human subjects and to determine its accuracy compared with traditional
anthropometry so that we can position our method in relation to the literature (Bradtmiller
and Gross, 1999; Dekker, 2000; Han et al., 2010; Lu and Wang, 2010; Lu et al., 2010;
Paquette et al., 2000) or to the maximum allowable errors (Table 4) for anthropometric
surveys (Gordon et al., 1989; ISO 20685:2010) or fit and sizing of apparel (Bradtmiller and
Gross, 1999).
Moreover, our solution can contribute to spreading the digitalisation of 3D bodies to any
home or point-of-sale, in particular by overcoming the barriers related to price, dedicated
space, availability and usability of the body measuring hardware.
These three aspects—the good precision of the measurements, the realistic body shape
representation and the possibility of using it at home—make the methods proposed
potentially suitable as user data input for size advice and online fit simulations of wearables
(Ballester et al., 2015a; D’Apuzzo, 2006; Gill, 2015), either as body measurements or even as
3D models. In this sense, the resulting 3D models are dense, homologous and watertight
representations of the human body which make it possible to develop interfaces to transfer the
geometry of the model efficiently and accurately to mesh topologies or models compatible
with the applications.
Acknowledgements
The authors would like to thank their colleagues Begoña Mateo, Juan Carlos González, Silvia San Jerónimo and María Sancho for their participation in proposal writing, technology implementation and the conduct of the user testing.
Table 4. MAE for anthropometric studies with traditional methods established by ANSUR (Gordon et al., 1989), MAE between measurements extracted from 3D scans and traditionally measured values (ISO 20685:2010), and MAE estimated by tailors for fit and fashion applications (Bradtmiller and Gross, 1999). MAD and MAE values in mm.
References
Allen, B., Curless, B., and Popović, Z. (2003) ‘The space of human body shapes: reconstruction and
parameterization from range scans’ in ACM transactions on graphics, Vol. 22, No. 3, pp. 587-
594.
Amberg, B., Romdhani, S., and Vetter, T. (2007) ‘Optimal step nonrigid ICP algorithms for surface registration’, in IEEE Conference on Computer Vision and Pattern Recognition, 2007.
Ballester, A., Parrilla, E., Vivas, J. A., Piérola, A., Uriel, J., Puigcerver, S. A., Piqueras, P., Solves-
Camallonga, C., Rodríguez, M., González, J. C., and Alemany S. (2015a) ‘Low-Cost Data-Driven
3D Reconstruction and its Applications’, in 6th International Conference on 3D Body Scanning
Technologies, Hometrica Consulting, Lugano, Switzerland.
Ballester, A., Valero, M., Nácher, B., Piérola, A., Piqueras, P., Sancho, M., Gargallo, G., González, J.
C., and Alemany S. (2015b), ‘3D Body Databases of the Spanish Population and its Application
to the Apparel Industry’ in 6th International Conference on 3D Body Scanning Technologies,
Hometrica Consulting, Lugano, Switzerland.
Blanz, V., and Vetter, T. (1999) ‘A morphable model for the synthesis of 3D faces’ in SIGGRAPH 99:
Proceedings of the 26th annual conference on Computer graphics and interactive techniques, Los
Angeles, CA, USA, pp. 187-194.
Boisvert, J., Shu, C., Wuhrer, S., and Xi, P. (2013) ‘Three-dimensional human shape inference from
silhouettes: Reconstruction and validation’, Machine vision and applications, Vol. 24 No. 1, pp.
145-157.
Bong, Y. B., Merican, A. F., Azhar, S., Mokhtari, T., Mohamed, A. M., and Shariff, A. A. (2014) ‘Three-Dimensional (3D) Anthropometry Study of the Malaysian Population’, in 5th International Conference on 3D Body Scanning Technologies, Hometrica Consulting, Lugano, Switzerland.
Bookstein, F.L. (1997) Morphometric Tools for Landmark Data: Geometry and Biology, Cambridge
University Press, Cambridge, UK.
Botspot by Botspot GmbH, [online] http://www.botspot.de/ (accessed 15 December 2016)
Bougourd, J. (2005) ‘Measuring and shaping a nation: SizeUK’, in Int Conf on Recent Advances in
Innovation and Enterprise in Textiles and Clothing, Marmaris University, Istanbul, Turkey.
Bradtmiller, B. and Gross, M. E. (1999) ‘3D whole body scans: measurement extraction software validation’, SAE Technical Paper No. 1999-01-1892.
Charoensiriwath, S. and Tanprasert, C. (2010) ‘An Overview of 3D Body Scanning Applications in Thailand’, in 1st International Conference on 3D Body Scanning Technologies, Lugano, Switzerland.
Chen, Y., and Cipolla, R. (2009) ‘Learning shape priors for single view reconstruction’ In Computer
Vision Workshops, IEEE 12th International Conference, pp. 1425-1432.
Cools, J., de Raeve, A., and Bossaer, H. (2014) ‘The use of 3D anthropometric data for morphotype
analysis to improve fit and grading techniques’, in 5th International Conference on 3D Body
Scanning Technologies, Hometrica Consulting, Lugano, Switzerland.
D’Apuzzo, N. (2006) ‘Overview of 3D surface digitization technologies in Europe’, in Proceedings of SPIE, Vol. 6056, No. 605605, pp. 1-13.
Daanen, H. A., and Ter Haar, F. B. (2013) ‘3D whole body scanners revisited’, Displays, Vol. 34, No.
4, pp. 270-275.
Daanen, H. M., and van de Water, G. J. (1998) ‘Whole body scanners’, Displays, Vol. 19, No. 3, pp.
111-120.
Dekker, L. D. (2000) ‘3D human body modelling from range data’, PhD thesis, University of London, London, United Kingdom.
Dekker, L., Douros, I., Buston, B. F., and Treleaven, P. (1999) ‘Building symbolic information for 3D human body modeling from range data’, in Proceedings of the Second International Conference on 3-D Digital Imaging and Modeling, IEEE, pp. 388-397.
DITUS MC from Human Solutions GmbH. [online] http://www.human-solutions.com/ (accessed 28
June 2016)
Fit3D. [Online] http://www.fit3d.com/
Geladi, P., and Kowalski, B. R. (1986) ‘Partial least-squares regression: a tutorial’, Analytica Chimica Acta, Vol. 185, pp. 1-17.
Gill, S. (2015) ‘A review of research and innovation in garment sizing, prototyping and fitting’,
Textile Progress, Vol. 47, No. 1, pp. 1-85, DOI: 10.1080/00405167.2015.1023512.
Gordon, C. C., Bradtmiller, B., Churchill, T., Clauser, C. E., McConville, J. T., Tebbetts, I. O., and Walker, R. A. (1989) ‘1988 Anthropometric Survey of US Army Personnel: Methods and Summary Statistics’, Natick, MA: US Army Natick Research, Development and Engineering Center.
Gordon C. C., Blackwell C. L., Bradtmiller B., Parham J. L., Hotzman J., Paquette S. P., Corner B. D.,
Hodge B. M. (2011) ‘2010 Anthropometric Survey of Marine Corps Personnel: Methods and
Summary Statistics’ NATICK/TR-11/017. Natick, MA: U.S. Army Natick Research,
Development, and Engineering Center.
Gordon C. C, Blackwell C. L, Bradtmiller B., Parham J. L., Barrientos P., Paquette S. P., Corner B.
D., Carson J. M., Venezia J. C., Rockwell, B. M., Muncher M., and Kristensen S. (2015) ‘2010-
2012 Anthropometric Survey of US Army Personnel: Methods and Summary Statistics’,
NATICK/TR-15/007. Natick, MA: U.S. Army Natick Research, Development, and Engineering
Center.
Gower, J. C. (1975) ‘Generalized procrustes analysis’, Psychometrika, Vol. 40, No. 1, pp. 33-51.
Guan, P., Weiss, A., Balan, O. and Black M. J. (2009) ‘Estimating human shape and pose from a
single image’, in International Conference on Computer Vision.
Han, H., Nam, Y. and Choi, K. (2010) ‘Comparative analysis of 3D body scan measurements and
manual measurements of size Korea adult females’, International Journal of Industrial
Ergonomics, Vol. 40, No. 5, pp.530–540.
Hasler, N., Ackermann, H., Rosenhahn, B., Thormahlen, T., and Seidel H.P. (2010) ‘Multilinear pose
and body shape estimation of dressed subjects from image sets’, In Conference on Computer
Vision and Pattern Recognition, San Francisco, CA, USA.
Hasler, N., Stoll, C., Sunkel, M., Rosenhahn, B. and Seidel H.P. (2009) ‘A statistical model of human
pose and body shape’ In P. Dutré and M. Stamminger, editors, Computer Graphics Forum,
volume 2.
IIIDbody from 4DDynamics. [online] http://www.4ddynamics.com/ (accessed 28 June 2016)
Intellifit from Intellifit pss, [online] http://intellifitpss.com/ (accessed 15 December 2016)
International Organisation for Standardisation (2008) ISO 7250-1:2008 “Basic human body
measurements for technological design” - Part 1: Body measurement definitions and landmarks.
International Organisation for Standardisation (1989) ISO 8559:1989 Garment construction and
anthropometric surveys-Body dimensions.
International Organisation for Standardisation (2010) ISO 20685:2010 3-D scanning methodologies for internationally compatible anthropometric databases.
Istook, C. L. (2008) ‘Three-dimensional body scanning to improve fit’, in Advances in Apparel Production, C. Fairhurst, ed., Woodhead Publishing, Cambridge.
Seo, H., and Magnenat-Thalmann, N. (2003) ‘An automatic modeling of human bodies from sizing
parameters’ in Proceedings of the 2003 Symposium on Interactive 3D Graphics, pp 19–26,
Monterey, CA, USA.
Seo, H., Yeo, Y. I., and Wohn, K. (2006) ‘3D body reconstruction from photos based on range scan’, in Technologies for E-Learning and Digital Entertainment, Springer, Berlin Heidelberg, pp. 849-860.
Shapify from ARTEC. [online] https://www.artec3d.com/es/hardware/shapifybooth (accessed 28 June
2016)
Shu, C., Xi, P., and Keefe, A. (2015) ‘Data processing and analysis for the 2012 Canadian Forces 3D anthropometric survey’, Procedia Manufacturing, Vol. 3, pp. 3745-3752.
SizeStream. [online] http://www.sizestream.com/ (accessed 28 June 2016)
Structure sensor for iPad. [online] http://structure.io/ (accessed 28 June 2016)
Styku. [online] http://www.styku.com/bodyscanner (accessed 28 June 2016)
Sumner, R. and Popović, J. (2004) ‘Deformation Transfer for Triangle Meshes’, in SIGGRAPH.
Symcad from Telmat Industries. [online] http://www.telmat.com/activites_vision.php (accessed 28 June
2016)
TC2-19B from [TC]² Labs. [online] http://www.tc2.com/tc2-19b-3d-body-scanner.html (accessed 28
June 2016)
TC2-19R from [TC]² Labs. [online] http://www.tc2.com/tc2-19r-mobile-scanner.html (accessed 28
June 2016)
Texel from Texel Inc. [online] http://texel.graphics/ (accessed 15 December 2016)
Treleaven, P. and Wells, J. C. K. (2007) ‘3D body scanning and healthcare applications’, Computer, Vol. 40, No. 7, pp. 28-34.
Verweij, L.M., Terwee, C.B., Proper, K.I., Hulshof, C.T. and van Mechelen, W. (2013) ‘Measurement
error of waist circumference: gaps in knowledge’, Public health nutrition, Vol. 16, No. 02,
pp.281–288.
VITUS bodyscan from Human Solutions. [online] http://www.human-
solutions.com/fashion/front_content.php?idcat=813&lang=7 (accessed 28 June 2016)
Wang, J., Gallagher, D., Thornton, J. C., Yu, W., Horlick, M., and Pi-Sunyer, F. X. (2006) ‘Validation of a 3-dimensional photonic scanner for the measurement of body volumes, dimensions, and percentage body fat’, The American Journal of Clinical Nutrition, Vol. 83, No. 4, pp. 809-816.