
Authors' version (accepted manuscript before the editor's typesetting). Published in the International Journal of the Digital Human on 31/May/2017, https://doi.org/10.1504/IJDH.2016.084581

Data-driven three-dimensional reconstruction of human bodies using a mobile phone app

Alfredo Ballester*, Eduardo Parrilla, Ana Piérola, Jordi Uriel, Cristina Pérez, Paola Piqueras, Beatriz Nácher, Julio A. Vivas and Sandra Alemany
Instituto de Biomecánica de Valencia
Universitat Politècnica de València, edificio 9C
Camino de Vera s/n, 46022 Valencia, Spain
Email: alfredo.ballester@ibv.upv.es
Email: eduardo.parrilla@ibv.upv.es
Email: ana.pierola@ibv.upv.es
Email: jordi.uriel@ibv.upv.es
Email: cristina.perez@ibv.upv.es
Email: paola.piqueras@ibv.upv.es
Email: beatriz.nacher@ibv.upv.es
Email: julio.vivas@ibv.upv.es
Email: sandra.aleman@ibv.upv.es
*Corresponding author

Abstract: The advances in and availability of technologies for the acquisition, registration and analysis of the three-dimensional (3D) shape of human bodies (or body parts) are resulting in the formation of large databases of parameterised meshes from which digital human body models can be derived. Such models can be used for the data-driven reconstruction of parameterised human body shapes from partial information such as one-dimensional (1D) measurements or two-dimensional (2D) images. In this paper, we propose a new method for the reconstruction of 3D bodies from images gathered with a smartphone or tablet. Moreover, the method is implemented in a prototype app and tested at different levels through three experimental studies including synthetic models, 1:10 scale figurines and real children. The results demonstrate the feasibility of acquiring reliable anthropometric information easily at home by non-experts. This method and implementation have great potential for application to the personalisation, size recommendation and virtual try-on simulation of wearable products.

Keywords: 3D, data-driven reconstruction, body scanner, shape analysis, 2D images, children, body measurements, anthropometry, PCA, digital human model, smartphone app, databases, sizing, size recommendation, e-commerce, made-to-measure, wearables.

Reference to this paper should be made as follows: Ballester, A., Parrilla, E., Piérola,
A., Uriel, J., Pérez, C., Piqueras, P., Nácher, B., Vivas, J. A., and Alemany, S. (2016)
‘Data-driven three-dimensional reconstruction of human bodies using a mobile phone
app’, Int. Int. J. Digital Human, 1, No. 4, pp.361–388

Biographical notes:
Alfredo Ballester is a Research Engineer at IBV. His research interests include 3D
body scanning, digital human models, ergonomics and product design.

Dr. Eduardo Parrilla is a Research Engineer at IBV. His research interests include 3D
body scanning, image processing and computer graphics applied to anthropometry
and movement analysis.

Ana Piérola is a Data Scientist at IBV. Her research interests include statistical
analysis and data modelling of anthropometric data.

Jordi Uriel is a Research Engineer at IBV. His research interests include 3D computer
graphics.

Cristina Pérez is a Data Scientist at IBV. Her research interests include statistical
analysis of anthropometric data and image processing.

Paola Piqueras is a Research Engineer at IBV. Her research interests include analysis
of anthropometric data and its application to ergonomic design.

Beatriz Nácher is a Product Engineer at IBV. Her research interests include 3D anthropometry methods and data analysis.

Julio A. Vivas is a Software Developer at IBV. His research interests include the
development of user interfaces and prototype mobile applications and webservices
related to anthropometric and ergonomic data management.

Sandra Alemany is a Research Engineer at IBV. She is the Head of the Anthropometry Research Group. Her research interests include 3D body scanning, ergonomics and product design.

1 Introduction

Access to reliable anthropometric information about a person (i.e. body dimensions and 3D
shape) has multiple applications in different industries, in particular, in those related to health
and to the ergonomic design of wearable products such as clothes, workwear and orthotics,
among others (Dekker et al., 1999; D’Apuzzo, 2006; Lin et al., 2002; Treleaven and Wells,
2007; Wang et al., 2006). Some of the applications to wearables are custom-made products,
size recommendations, fitting predictions and virtual try-on simulations (Gill, 2015). Such
applications constitute a critical ingredient for the digital transformation of these industries,
namely for changing how these products are sold, produced and distributed.

Nowadays, there is a wide choice of 3D whole body scanners using different acquisition
technologies (Daanen and Ter Haar, 2013; Daanen and van de Water, 1998) at prices ranging
from tens to hundreds of thousands of dollars (e.g. 3DMDbody, Botspot, Ditus, Fit3D,
IIIDbody, Intellifit, Shapify Booth, SizeStream, Styku, Symcad, Texel, [TC]2, Vitus, etc.).
These products are mainly intended for professional or retail use and come with specialised
3D data processing software adapted to specific uses (e.g. body modelling, feature detection,
body measurement extraction, single surface reconstruction, etc.). Since the release of
software for the extraction of body measurements from 3D scans, its reliability has been
quantified in different studies addressing its precision (Bradtmiller and Gross, 1999; Dekker,
2000; Lu and Wang, 2010; Robinette and Daanen, 2006) and its accuracy compared to
traditional measuring methods (Bradtmiller and Gross, 1999; Dekker, 2000; Dekker et al.,
1999, Han et al., 2010; Kouchi, 2014; Kouchi and Mochimaru, 2005; Lu and Wang, 2010; Lu
et al., 2010; Paquette et al., 2000). Some of these studies gave rise to the compatibility specifications between these two methods included in ISO 20685:2010.

Despite the potential benefits that individual 3D information may bring to the e-commerce
(off-the-shelf and made-to-measure orders) of wearable products, 3D body scanners have not
yet been widely used as consumer goods or as typical in-store appliances in small brick-and-
mortar retail outlets because of their cost, which is beyond the price of the most common
home appliances, the dedicated space they need, which is typically more than 2 x 2 x 2 m, and, in the case of off-the-shelf products, the lack of reliable product size guides and prediction software (Alemany et al., 2013).

Over the past few years, the appearance of depth sensor peripherals for laptops, game stations,
televisions and tablets (e.g. Kinect, Realsense, Xtion, Structure, etc.) has brought 3D scanning
closer to home users. Yet, these solutions are intended for general-purpose use and do not include
specialised software for getting the anthropometric information required to create custom-
made products, make size suggestions, predict fitting or virtually simulate the try-on of
wearables (i.e. body dimensions and 3D models).

On the other hand, the availability of large-scale anthropometric surveys using 3D body
scanning technologies (Alemany et al., 2010; Ballester et al., 2015b; Bong et al., 2014;
Bougourd, 2005; Charoensiriwath and Tanprasert, 2010; Cools et al., 2014; Gordon et al.,
2011, 2015; Istook, 2008; Kulkarni et al., 2011; Robinette et al., 1999; Seidl et al., 2009;
Shu et al., 2015) has enabled the use of data-driven technologies for the prediction of 3D body
shapes from partial body data inputs such as 1D measurements (Allen et al., 2003; Hasler et
al., 2009; Seo and Magnenat-Thalmann, 2003; Wuhrer and Shu, 2012) or 2D images (Chen
and Cipolla, 2009; Guan et al., 2009; Hasler et al., 2010; Parrilla et al., 2015; Saito et al.,
2011).

According to Bradtmiller and Gross (1999), the reliability of the 1D measurements (e.g. waist
girth, hip girth, chest girth or arm length) required by the fashion industry for sizing and
fitting applications is 0.6-1.3 cm. Achieving this requires certain skills (Gordon et al., 1989;
Lohman et al., 1992) that an untrained user following basic instructions regarding how to take
self-measurements at home does not typically have. According to Yoon and Radwin (1994),
self-measurements for clothing orders have an average absolute error ranging from 2 to 6 cm,
depending on the measurement. These values are in line with the extensive review of Verweij
et al. (2013) on the measurement error of waist circumference for health applications, which
concluded that it can be up to 15 cm. In contrast, 2D images can be obtained using a wide range of electronic consumer goods that most users already have at home (e.g. digital cameras, phones, laptops, tablets, televisions, etc.), and users need only basic skills and instructions to capture images suitable for a correct 3D reconstruction. Moreover, the shape information contained in 2D images is richer than that provided by 1D measurements because body outlines convey shape and posture information (e.g. the shape of the hips, legs or shoulders, and the belly, cervical, dorsal and lumbar curves, among others).

The combination of 2D images and parameterised 3D models has been demonstrated to be a precise technique for reconstructing 3D shapes (Boisvert et al., 2013). For human face
reconstruction, Blanz and Vetter (1999) proposed one of the first approaches to 3D
reconstruction from 2D images by using a large database of textured 3D faces registered to a
common parameterisation that was synthesised using Principal Components Analysis (PCA).
In the full body domain, Seo et al. (2006) introduced a method of body reconstruction based
on multiple photographs also using a parameterised database of 3D human body meshes with
a common topology and in a common posture synthesised using PCA. This process is
interactive and requires the manual indication of some feature points to ensure
correspondence across images. Several approaches have been proposed to automate the process. Lin and Wang (2011, 2012) provided methods for the automatic feature
extraction from full body outlines and for the estimation of 3D human models from
measurements extracted from the features. In this approach, only these features are used to reconstruct the shape information, so some useful information contained in the silhouettes is missed.

This paper describes a method employed for the 3D reconstruction of human body shapes—in
particular, children—from two images of a person taken with a smartphone. It uses a database
of registered 3D body scans that is parameterised in shape and posture using PCA. The
purpose of our study was to demonstrate that it is feasible to obtain consistent anthropometric
information (i.e. body dimensions and 3D shapes) that could be used by a non-expert user at
home using a widely available electronic consumer device, i.e. smartphone or tablet equipped
with a camera. Our method does not require prior camera calibration. It is possible to estimate
an initial calibration from the smartphone sensors and the person’s reported height, which can
be optimised during the reconstruction process. Section 2 of this paper describes the proposed
method for the 3D reconstruction process from 2D images. Section 3 describes the results
obtained in three validation studies including synthetic bodies, 1:10 scale mini-figurines and
real subjects. Section 4 discusses our results in relation to other studies and Section 5 includes
the conclusions.

2 Method for the data-driven 3D reconstruction of human bodies from 2D images

The proposed method is based on segmenting the body outlines from two images of a person
and then optimising a parameterised body shape model until the outlines of its projections
match the segmented outlines from the images. This process consists of five steps (Figure 1).
The following subsections explain in detail the parametrisation of the 3D body model used
and the five steps of the process.

Figure 1. Steps in the proposed method for the 3D reconstruction of human bodies from 2D images
indicating the main inputs and outputs of each step including outcomes (in bold) and by-
products (in italics)

2.1 Parameterised 3D body shape model of children

A database of 761 children aged 3 to 12 years old in a standing posture was gathered using a
Vitus XXL scanner from Vitronic. For each of the scans, 35 body landmarks were
interactively identified using Anthroscan software from Human Solutions. All of the scans
were registered to a common parameterisation using an adaptation of different template fitting
approaches (Allen et al., 2003; Amberg et al., 2007; Sumner and Popović, 2004). The
parameterised set of scans is homologous (Bookstein, 1991) and shares the same topology,
which means that they have the same number of vertices/faces and that the vertices/faces have
one-to-one correspondence (Figure 2). The template mesh model used consisted of circa 50k
vertices and 99k faces.
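
As an illustration of what this one-to-one correspondence makes possible, the following minimal sketch (with placeholder array sizes and random data standing in for the registered scans, and variable names that are not taken from the authors' implementation) computes vertex-wise statistics directly, with no resampling or surface matching needed.

```python
import numpy as np

# Placeholder stack of registered (homologous) meshes: every scan shares the
# same vertex count and a one-to-one vertex correspondence, so the whole
# database fits in a single (n_subjects, n_vertices, 3) array.
n_subjects, n_vertices = 761, 5000          # reduced vertex count for the example
meshes = np.random.rand(n_subjects, n_vertices, 3)

# Vertex i denotes the same anatomical location in every mesh, so statistics
# can be computed vertex by vertex.
mean_shape = meshes.mean(axis=0)                                         # (n_vertices, 3)
per_vertex_sd = np.linalg.norm(meshes - mean_shape, axis=2).std(axis=0)  # (n_vertices,)
```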

Figure 2. Close up images comparing the raw scan mesh (left) with the registered mesh (right)

The parameterised database should be in the same posture as the theoretical posture that users
should follow for the front photograph. In our case, subjects adopted a standing posture as
described in ISO 20685:2010, with their feet parallel and hip-width apart, their arms extended
and open at ~60º, and fists closed with the hand dorsum pointing outwards.

Generalised Procrustes Alignment (Gower, 1975) was applied to the resulting database to
obtain a rigid alignment using rotations and translations. PCA was then applied to the aligned
database to obtain a parameterised shape model where each body shape was represented by a
vector of 60 principal components, which explained more than 90% of the total variance. In
this way, we obtained a compact parametric model describing the shape space of children’s
bodies which is able to cope with the shape and posture variability expected from the
photographs.
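
A minimal sketch of this alignment-plus-PCA pipeline is given below, assuming the registered scans are stacked in a single array; it uses scipy's orthogonal Procrustes solver and scikit-learn's PCA, which is not necessarily what the authors used, and random data stands in for the real database.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes
from sklearn.decomposition import PCA

def gpa(shapes, n_iter=5):
    """Generalised Procrustes Alignment with rotations and translations only
    (no scaling), in the spirit of Gower (1975). `shapes` is (n, v, 3)."""
    aligned = shapes - shapes.mean(axis=1, keepdims=True)    # remove translation
    for _ in range(n_iter):
        mean = aligned.mean(axis=0)
        for i, s in enumerate(aligned):
            R, _ = orthogonal_procrustes(s, mean)            # best rotation so that s @ R ~ mean
            aligned[i] = s @ R
    return aligned

shapes = np.random.rand(761, 5000, 3)       # placeholder for the registered child scans
aligned = gpa(shapes)

# PCA shape space: each body becomes a vector of 60 principal-component scores.
X = aligned.reshape(len(aligned), -1)       # flatten to (n_subjects, 3 * n_vertices)
pca = PCA(n_components=60).fit(X)
scores = pca.transform(X)                   # compact shape parameters per subject
body = pca.inverse_transform(scores[0]).reshape(-1, 3)   # rebuild a mesh from its scores
```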

2.2 Picture taking process

Prior to the picture taking process, the users are provided with illustrated instructions
indicating the right background, lighting conditions, hair and attire and some examples of
what not to do. Specifically, users are requested:

• to wear tight clothing, underwear or swimwear;
• to wear flat coloured garments that contrast sharply with the background and the floor;
• to tie up their hair in a bun located at the back of the head; and
• to find an indoor location in which to take the pictures with a clear background and uniform illumination, avoiding hard light sources.

Front and side outline guides (Figures 3 and 4) are provided to facilitate the picture taking
process for the users (i.e. posture adoption and distance to the photographed user). These
outlines are also used as initial solutions for the segmentation process. The outline guides are
generated using the children’s age, gender and self-reported weight and height. These are used
as inputs for a Partial Least Squares (PLS) regression (Geladi and Kowalski, 1986; Wold,
1985), which relates these four parameters with a PCA of body outlines so that the outline
guides correspond to the actual body proportions of the children.
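
The sketch below illustrates this type of PLS mapping with scikit-learn, using randomly generated stand-ins for the training outlines and demographic data; the component counts, outline sampling and variable names are assumptions for the example, not the authors' settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Placeholder training data (the real model is fit on the children's database):
# demographic inputs and 2D body outlines sampled at a fixed number of points.
n, n_outline_pts = 761, 200
X = np.column_stack([rng.uniform(3, 12, n),        # age (years)
                     rng.integers(0, 2, n),        # gender code
                     rng.uniform(12, 55, n),       # weight (kg)
                     rng.uniform(95, 160, n)])     # height (cm)
outlines = rng.normal(size=(n, n_outline_pts * 2)) # flattened (x, y) outline points

# PCA of the outlines, then PLS from the four inputs to the outline PCA scores.
outline_pca = PCA(n_components=10).fit(outlines)
scores = outline_pca.transform(outlines)
pls = PLSRegression(n_components=4).fit(X, scores)

# Outline guide for a new child from self-reported age, gender, weight and height.
new_child = np.array([[7.0, 1.0, 25.0, 122.0]])
guide = outline_pca.inverse_transform(pls.predict(new_child)).reshape(-1, 2)
```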

Self-reported height was also used as the image calibration parameter in the 3D reconstruction
step. Every time a picture is taken, the Field Of View (FOV) of the phone camera and the
gravity sensor information are recorded along with the image to be used in the optimisation
process.

Figure 3. Outline guides to facilitate the picture taking process for users

Figure 4. From left to right: raw image, raw image with guiding outline, image where the body
outline is identified and the actual body outline used as input for the 3D reconstruction

2.3 Image processing for outline segmentation

The body outline is extracted from the front and side photographs using an adaptation of the
Grabcut algorithm (Rother et al., 2004) to segment the body figure from the background. In
this process, the guiding outlines are used as the departing point for the process (Figure 4).
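
A small sketch of this kind of guided GrabCut segmentation with OpenCV is shown below; the synthetic image, the outline polygon and the parameter values are placeholders, and the exact adaptation used in the paper (e.g. how the guiding outline is converted into the initial mask) is not documented here, so this is only one plausible way of seeding the algorithm.

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)

# Placeholder photo: noisy light background with a darker "body" region, plus a
# hypothetical guiding outline (closed polygon in pixel coordinates).
img = rng.integers(180, 255, (480, 320, 3), dtype=np.uint8)
img[60:420, 110:210] = rng.integers(40, 90, (360, 100, 3), dtype=np.uint8)
guide = np.array([[100, 50], [220, 50], [220, 430], [100, 430]], dtype=np.int32)

# Initialise the GrabCut mask from the guiding outline: probable foreground
# inside the outline, probable background outside.
mask = np.full(img.shape[:2], cv2.GC_PR_BGD, dtype=np.uint8)
cv2.fillPoly(mask, [guide], int(cv2.GC_PR_FGD))

bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)

# Binary body segmentation; its boundary is the target outline for Section 2.4.
body = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
contours, _ = cv2.findContours(body, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
```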

The focal length of the camera in pixel units is estimated by basic trigonometry using the
camera FOV and the dimensions in pixels of the image. Camera rotation is estimated from the
gravity sensor of the phone, and the camera position is estimated from the height provided by
the person and assuming that the camera is pointing to the centre of the person. These
parameters along with the body outlines are used as inputs for the 3D reconstruction process.
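
The calibration estimates described above reduce to simple formulas; the sketch below shows one way to compute them, where the FOV value, the sensor reading and the axis convention are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def focal_length_px(fov_deg: float, image_width_px: int) -> float:
    """Focal length in pixel units from the horizontal field of view,
    using the pinhole relation tan(FOV/2) = (W/2) / f."""
    return (image_width_px / 2) / np.tan(np.radians(fov_deg) / 2)

def camera_tilt_deg(gravity_xyz) -> float:
    """Tilt of the optical axis with respect to the horizontal plane, estimated
    from the phone's gravity reading. Treating the optical axis as the sensor
    z-axis is an assumption; a real app must follow the device's sensor frame."""
    g = np.asarray(gravity_xyz, dtype=float)
    g = g / np.linalg.norm(g)
    return np.degrees(np.arcsin(abs(g[2])))

f_px = focal_length_px(fov_deg=66.0, image_width_px=3000)   # example values only
tilt = camera_tilt_deg([0.1, 9.7, 0.8])                     # example gravity reading (m/s^2)
```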

2.4 3D reconstruction and measurement extraction

The optimisation consists of searching the PCA shape space and finding the 3D body shape
which best matches the two body outlines by minimising the distance between the projected
outlines of the 3D model and the target outlines extracted from the 2D images (Figures 5-6).

Figure 5. Projected and target outlines and 3D shape at initial iterations

Figure 6. Projected and target outlines and 3D shape at final iterations

The 3D reconstruction departs from an estimated body shape obtained from the PCA using
PLS regression from age, gender, weight and height. Optimisation is conducted by iteratively
modifying the PCA scores, scale and the extrinsic camera parameters (i.e. rotation and
translation). At each iteration, a simple projection matrix is computed for each view using the
focal length, the image size and the extrinsic camera parameters. By using this projection
matrix, the frontal and sagittal 2D outlines are obtained from the current 3D body and the
distance from the actual outlines to the projected ones is calculated. To make the process
faster, within every iteration, the vertices that describe the outline of the projected shape are
computed, and then distances are minimised using explicit gradients (Zhu et al., 1997). After several minimisation steps, the vertices that initially described the outline of the projected shape no longer describe it accurately; therefore, a new set of vertices defining the outlines is used in the next iteration.

In order to facilitate convergence, several body features are automatically identified and used
to guide the process (i.e. 26 in the frontal projection and 8 in the sagittal projection; Figures 5
and 6 respectively). The relative weight of the landmarks, in relation to the rest of the points
constituting the outline, decreases at every iteration. The process converges after around ten
iterations.
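
The following simplified sketch illustrates the structure of this optimisation for a single view, using a random linear shape model in place of the children's PCA space and numerical gradients instead of the explicit ones used by the authors; rotation, scale, the second view and the landmark weighting described above are omitted, and all names and values are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Placeholder linear shape model standing in for the children's PCA space:
# vertices(b) = mean_shape + reshape(b @ components), with b the 60 PC scores.
n_vertices, n_pcs = 2000, 60
mean_shape = rng.normal(size=(n_vertices, 3)) * 100.0
components = rng.normal(size=(n_pcs, n_vertices * 3))

def body_from_scores(b):
    return mean_shape + (b @ components).reshape(n_vertices, 3)

def project(vertices, f_px, t):
    """Minimal pinhole projection: extrinsic translation, then perspective divide
    (camera rotation is left out of this sketch)."""
    cam = vertices + t
    return f_px * cam[:, :2] / cam[:, 2:3]

# Hypothetical target outline points segmented from the front photograph.
target_front = rng.normal(size=(300, 2)) * 50.0

def outline_distance(projected, target):
    """Sum of distances from each target outline point to its nearest projected vertex."""
    d = np.linalg.norm(target[:, None, :] - projected[None, :, :], axis=2)
    return d.min(axis=1).sum()

def objective(params):
    b, t = params[:n_pcs], params[n_pcs:]
    proj = project(body_from_scores(b), f_px=1500.0, t=t)
    return outline_distance(proj, target_front)

x0 = np.concatenate([np.zeros(n_pcs), [0.0, 0.0, 3000.0]])   # initial PC scores + translation
res = minimize(objective, x0, method="L-BFGS-B", options={"maxiter": 20})
```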

2.5 Measurement of the 3D body

Once the 3D bodies are obtained, a set of 36 body dimensions commonly used in wearable product design is computed. The body dimensions are obtained using a digital body
measuring tape developed to benefit from the homology of the parameterised meshes in
combination with geometrical searches such as finding minimum, maximum or average
coordinates, curvature, concavity or convexity in the surface, or prominent points in specific
projections, among others. The measurement definitions were implemented according to ISO 8559:1989 and ISO 7250-1:2008.
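
As an illustration of one such geometric search, the sketch below approximates a girth as the perimeter of the convex hull of a horizontal cross-section of the mesh vertices; the slab width, the synthetic elliptical torso and the function name are assumptions for the example, and the paper's measuring tape combines several searches and landmark rules per the ISO definitions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def convex_girth(vertices, z_level, slab=5.0):
    """Approximate a body girth (mm) as the perimeter of the convex hull of the
    mesh vertices lying in a thin horizontal slab around z_level."""
    band = vertices[np.abs(vertices[:, 2] - z_level) < slab]
    hull = ConvexHull(band[:, :2])        # 2D hull of the cross-section
    return hull.area                      # for 2D input, .area is the perimeter

# Example on a synthetic elliptical "torso" point cloud (semi-axes 150 x 100 mm).
theta = np.linspace(0, 2 * np.pi, 400)
z = np.linspace(900, 1100, 50)
tt, zz = np.meshgrid(theta, z)
verts = np.column_stack([150 * np.cos(tt).ravel(), 100 * np.sin(tt).ravel(), zz.ravel()])
waist = convex_girth(verts, z_level=1000.0)   # ~793 mm, the perimeter of a 150 x 100 mm ellipse
```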

Figure 7. Examples of registered body models measured with the digital body measuring tape

3 Results of the experimental studies

In order to demonstrate that it is feasible to obtain consistent and reliable anthropometric information with the proposed method, three experimental studies were conducted. In the first experiment, synthetic models were reconstructed and their measurements and body shapes were compared following a similar procedure to the one proposed by Boisvert et al. (2013). In
the second experiment, the precision of the method was evaluated by repeatedly
reconstructing and measuring several physical manikins following a similar procedure to
those used by Dekker (2000), Lu and Wang (2010), and Robinette and Daanen (2006).
Finally, real children were reconstructed to determine the accuracy of the method compared to
a high-resolution full body scanner.

3.1 Implementation of the mobile phone app

Prior to the launch of the experimental studies, a phone/tablet app prototype for Android 4.2+
was implemented. The app included a user interface for entering input data (age, gender, weight and height) and showed instructions for the picture taking process. The image
segmentation algorithm was also implemented inside the app. A prototype webservice was
implemented for computing the remote 3D reconstruction. The app was temporarily uploaded
to Google Play™ to facilitate its update and distribution among the testers.

3.2 3D Reconstruction of synthetic models

This experimental study aimed at determining the accuracy of the 3D reconstruction algorithms under ideal input conditions (outlines and camera), leaving out the errors
introduced by the user and the image segmentation. Since the 3D reconstruction algorithm is
the core component, the results of this study are indicative of the potential of the proposed
method.

Figure 8. Sample of synthetically created body shapes of children for the experimental study

Using the PCA shape space of children, 165 synthetic 3D bodies of children were generated
ensuring that the space of shapes was fully represented (Figure 8). Frontal and sagittal
outlines were obtained using a projection matrix computed from typical mobile phone camera
parameters. The measurements obtained from the reconstructed bodies were compared to the
actual measurements of the synthetic models using the same digital measuring tape method.
To assess the error of our method, we used the mean differences (MD) and mean absolute
differences (MAD) as proposed by Gordon et al. (1989). MAD quantifies the accuracy while
MD provides richer information about the bias and dispersion (confidence intervals) of the
measurement errors. MD, MAD and relative MAD (MADrel) are defined as
$MD\,(\mathrm{mm}) = \frac{1}{n}\sum_{i}\left(m^{i}_{rec} - m^{i}_{syn}\right),$

$MAD\,(\mathrm{mm}) = \frac{1}{n}\sum_{i}\left|m^{i}_{rec} - m^{i}_{syn}\right|,$

$MAD_{rel} = \frac{1}{n}\sum_{i}\frac{\left|m^{i}_{rec} - m^{i}_{syn}\right|}{m^{i}_{syn}}.$
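
A direct implementation of these error metrics is straightforward; the sketch below (with made-up example values) also computes the half-width of the 95% confidence interval of MD under a normal approximation, which is an assumption about how the ± values reported in Tables 1 and 3 were obtained.

```python
import numpy as np

def md_mad(m_rec, m_ref):
    """MD, MAD (same units as the inputs) and relative MAD for one measurement,
    following Gordon et al. (1989); m_rec are reconstructed values, m_ref the
    reference values (synthetic, figurine or scan-derived)."""
    rec = np.asarray(m_rec, dtype=float)
    ref = np.asarray(m_ref, dtype=float)
    diff = rec - ref
    md = diff.mean()
    mad = np.abs(diff).mean()
    mad_rel = (np.abs(diff) / ref).mean()
    # Half-width of the 95% confidence interval of MD (normal approximation).
    ci95 = 1.96 * diff.std(ddof=1) / np.sqrt(diff.size)
    return md, mad, mad_rel, ci95

# Example with made-up waist-girth values in mm.
md, mad, mad_rel, ci = md_mad([598, 612, 574], [605, 620, 580])
```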

Table 1 summarises the results of the MD and MAD for the 36 measurements of the 3D
reconstruction of synthetic models. The results are compared to the synthetic reconstructions
obtained by Boisvert et al. (2013) and show that the MAD is below 17 mm for all
measurements and the relative MAD is below 5%. The MD is within ±11 mm in 35 body
measurements and only exceeds this value for 7CV to wrist length (17 mm). Figure 9 shows the MD and its confidence interval (CI) at 95% for a set of eight measurements covering different portions of the body relevant for garment design, construction and fit. The CI of the MD is small for all the body measurements, at 1-3 mm.

Table 1. Accuracy of the measurements obtained from the 3D reconstructions of synthetic models using the proposed method, compared to the results obtained by Boisvert et al. (2013). MD ± half of the CI at 95%. MD and MAD in millimetres (mm)

Measurement | Proposed MD | Proposed MAD | Proposed MADrel | Boisvert et al. (2013) MAD
Knee height | 5±1 | 6 | 2% |
Mid neck girth | -7±1 | 9 | 3% | 11
Chest girth | -10±1 | 11 | 2% | 10
Back armpits contour | -9±1 | 10 | 4% |
Seat girth | -2±1 | 7 | 1% | 11
Cervical height | 5±0 | 5 | 0% |
Waist girth | -11±2 | 13 | 2% | 22
Arm length | -10±1 | 10 | 2% | 15
Hip height (buttock) | 4±1 | 7 | 1% |
Crotch height | 8±1 | 10 | 2% |
Front neck height | 3±1 | 4 | 0% |
Neck base girth | -8±1 | 10 | 3% |
Head girth | -11±1 | 13 | 2% | 10
Chest Breadth | 0±1 | 3 | 1% |
Frontal armpits contour | -5±1 | 5 | 2% |
Bi-nipple distance | 2±1 | 3 | 2% |
Hip girth (buttock) | -1±1 | 7 | 1% | 11
Belly girth | -6±3 | 14 | 2% |
Distance neck-hip | 1±1 | 5 | 1% |
Shoulder width | -8±2 | 12 | 4% | 6
Shoulder length | -4±1 | 5 | 5% |
Neck to breast point | -7±0 | 7 | 4% |
Scye depth | -4±1 | 6 | 5% |
Back waist length | -1±1 | 6 | 2% |
Crotch length front | 2±1 | 6 | 2% |
Crotch length rear | 11±2 | 12 | 4% |
7CV to wrist length | -17±1 | 17 | 3% |
Upper arm length | -7±1 | 7 | 3% |
Forearm length | -3±0 | 3 | 2% |
Upper arm girth | 1±1 | 3 | 1% | 17
Wrist girth | 0±0 | 2 | 2% | 9
Inseam | 9±1 | 11 | 2% |
Outside leg length | 9±1 | 10 | 1% |
Thigh girth | 6±1 | 6 | 2% | 9
Knee girth | 3±1 | 3 | 1% |
Ankle girth | 1±0 | 2 | 1% | 14

Figure 9. MD and confidence interval in mm of a set of 8 selected measurements

The synthetic and reconstructed body shapes were also compared qualitatively. Figure 10 compares six synthetic models, corresponding to different genders, ages and body mass indices (BMI), with their reconstructions. It shows that the 3D reconstructed shapes of the children are perceptually accurate compared to the synthetic children.

Figure 10. Comparison of six synthetic models and their respective 3D reconstructions: for each pair of bodies, the one on the left is the synthetic model and the one on the right corresponds to the 3D reconstruction using the proposed method

(Panels: front and side views of models Y04G, Y03B, Y09G, Y10B, Y12G and Y12B.)

3.3 3D Reconstruction of 1:10 scale mini-figurines

This experimental study aimed to determine the precision of the full pipeline of the proposed method when repeatedly reconstructing exactly the same human shapes in 3D, in order to
remove the influence of subjects’ postural changes and breathing. In similar studies involving
adults, a single life-size manikin is typically used representing the average male or female
proportions. In our case, we aimed to evaluate the precision of the proposed method with
several synthetic shapes of children representing the boundaries of the body shape space.

Since it would have been too expensive for the study to order six life-size manikins with non-average proportions, their digital models were generated and then manufactured as 1:10 scale mini-figurines using a Selective Laser Sintering (SLS) machine from EOS (Figure 11). Each of
the figurines was photographed and reconstructed 10 times at a booth with a sharply
contrasting background (Figure 11). Since height is the calibration parameter, the 3D
reconstructed shapes of the children and the measurements were obtained at a 1:1 scale and
not at figurine scale. It should be noted that, in this experiment, the error of the angular estimation of the projection was 10 times higher due to the scale factor.

Figure 11. 1:10 scale mini-figurines (left), photo booth (centre) and picture taking process (right)

To evaluate the error in measurement estimation, MAD for repeated measurements was
defined as
$MAD\,(\mathrm{mm}) = \frac{1}{n}\sum_{i}\frac{1}{\binom{r_i}{2}}\sum_{s=1}^{r_i-1}\sum_{t=s+1}^{r_i}\left|m^{i}_{s} - m^{i}_{t}\right|,$

where $n$ is the number of figurines and $r_i$ is the number of repetitions for figurine $i$. In this study, $r_i = 10$ for all $i$.
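
A minimal implementation of this repeated-measurement MAD is sketched below; the function name and the example values are illustrative only.

```python
import numpy as np
from itertools import combinations

def repeated_mad(measurements_per_figurine):
    """MAD for repeated measurements: for each figurine, average the absolute
    difference over all pairs of repetitions, then average over the figurines.
    `measurements_per_figurine` is a list of 1D arrays (one per figurine)."""
    per_figurine = []
    for m in measurements_per_figurine:
        pairs = list(combinations(np.asarray(m, dtype=float), 2))
        per_figurine.append(np.mean([abs(a - b) for a, b in pairs]))
    return float(np.mean(per_figurine))

# Example: two figurines, three repetitions each (values in mm).
mad = repeated_mad([[512.0, 515.0, 511.0], [498.0, 503.0, 500.0]])
```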

Table 2 shows the MAD of the figurine reconstructions along with the results of precision for
our high-resolution body scanner (34 children scanned 4 times in slightly different arm
postures) and other results for similar body scanners from the literature (Dekker, 2000; Lu
and Wang, 2010; Robinette and Daanen, 2006). It shows that the absolute MAD is below 8 mm for all the measurements except for shoulder width (11 mm), and that the relative MAD is
below 5% for all the measurements except for scye depth (6%), shoulder length (5%),
shoulder width (4%) and bi-nipple distance (4%).

The surface-to-surface average distance per vertex for the 3D reconstructions of each
synthetic model after GPA was calculated (Figure 12). The average MAD per vertex was 2.1
mm. The highest average error per vertex was far below 10 mm and was located around the
crotch landmarks and the tip of the head. The 3D reconstructions of the 6 synthetic body
shapes were also compared visually (Figure 13), showing that 3D reconstructed bodies are
perceptually accurate compared to the synthetic children and the mini-figurines.
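
Because the reconstructions are homologous meshes, the per-vertex distances can be computed directly after a rigid alignment; the sketch below shows one way to do this, using scipy's orthogonal Procrustes solver as a stand-in for the GPA step and random data in place of the actual reconstructions.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def per_vertex_error(reconstructions):
    """Average per-vertex distance across repeated reconstructions of the same
    shape, after rigidly aligning each mesh to the mean (homologous meshes, so
    correspondence is given by vertex index). `reconstructions` is (r, v, 3)."""
    meshes = reconstructions - reconstructions.mean(axis=1, keepdims=True)
    mean = meshes.mean(axis=0)
    aligned = np.stack([m @ orthogonal_procrustes(m, mean)[0] for m in meshes])
    mean = aligned.mean(axis=0)
    return np.linalg.norm(aligned - mean, axis=2).mean(axis=0)   # (v,) distances

# Example: five noisy repetitions of a placeholder 1000-vertex mesh.
rng = np.random.default_rng(2)
base = rng.normal(size=(1000, 3))
reps = base + rng.normal(scale=0.002, size=(5, 1000, 3))
errors = per_vertex_error(reps)        # map these over the mesh as in Figure 12
```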

Table 2. Precision of the proposed method for the repeated measurement of 1:10 scale figurines, compared to results obtained for life-size physical manikins (Lu and Wang, 2010) and subjects (Bradtmiller and Gross, 1999; Dekker, 2000; Lu and Wang, 2010; Robinette and Daanen, 2006). MAD in millimetres (mm).

Lu & Lu & Robinette


Bradtmiller
Wang Wang Dekker &
Proposed Proposed & Gross,
Measurement name (2010) (2010) (2000) Daanen
MAD MADrel (1999)
manikin subjects MAD (2006)
MAD
MAD MAD MAD
Knee height 3 1% 4
Mid neck girth 5 2% 2.6
Chest girth 8 1% 2.23 6.03 6 7.14
Back armpits contour 6 2% 0.74 6.71 8.07
Seat girth 6 1% 1.42 5.47 2 4.8
Cervical height 3 0% 4
Waist girth 8 1% 2.75 5.13 3 2.87
Arm length 6 1% 5.51 6.69 5 8.6
Hip height (buttock) 6 1%
Crotch height 6 1% 12.67
Front neck height 3 0%
Neck base girth 5 2%
Head girth 8 1%
Chest Breadth 3 1% 0.61 5.20
Frontal armpits contour 5 2%
Bi-nipple distance 5 4%
Hip girth 6 1% 1.42 5.47 2 4.8
Belly girth 9 1%
Distance neck-hip 4 1%
Shoulder width 11 4% 0.82 5.73 14 6 8.07
Shoulder length 5 5%
Neck to breast point 4 2%
Scye depth 7 6%
Back waist length 6 2%
Crotch length front 5 2%
Crotch length rear 8 2% 0.77 4
7CV to wrist length 7 1%
Upper arm length 4 2%
Forearm length 4 2%
Upper arm girth 5 2%
Wrist girth 4 3%
Inseam 6 1%
Outside leg length 5 1%
Thigh girth 5 1%
Knee girth 3 1%
Ankle girth 3 2%

Figure 12. Surface-to-surface average distance per vertex mapped over an average child’s shape

Figure 13. Six synthetic models (Y03B, Y04G, Y09G, Y10B, Y12G and Y12B) and their respective 3D reconstructions

3.4 3D Reconstruction of real children

This experimental study aimed to determine the accuracy of the full pipeline of the proposed
method by investigating it under real use conditions. Thirty-four children aged 3 to 12 years old and their
parents participated in this experimental study. The children were scanned with a high
resolution body scanner (Vitus XXL) and measured using our data-driven 3D reconstruction
app. To evaluate the measurement error we used the MD and MAD as proposed by Gordon et
al. (1989):
$MD\,(\mathrm{mm}) = \frac{1}{n}\sum_{i}\left(m^{i}_{rec} - m^{i}_{scan}\right);$

$MAD\,(\mathrm{mm}) = \frac{1}{n}\sum_{i}\left|m^{i}_{rec} - m^{i}_{scan}\right|;$

$MAD_{rel} = \frac{1}{n}\sum_{i}\frac{\left|m^{i}_{rec} - m^{i}_{scan}\right|}{m^{i}_{scan}}.$

Table 3 shows the MD, MAD and MADrel for the 36 measurements. MADrel in real use
conditions is below 7% for all the measurements except for shoulder width (9%), shoulder
length (13%), scye depth (10%) and crotch lengths (10%). Figure 14 illustrates the MD and
the confidence interval at 95% for eight selected measurements. The MD values of these 8
primary measurements lie within ±1 cm. The 3D reconstructed body shapes were also
assessed qualitatively by comparing them with the body scans (Figure 15).

Figure 14. MD and CI at 95% in mm for a selected set of eight measurements for the real children

Figure 15. For each pair of 3D bodies, the one on the left (golden) is the registered 3D scan of the
child and the one on the right (silver) corresponds to the 3D reconstruction using the
proposed method.

Table 3. Accuracy of the proposed method with real subjects, comparing measurements made automatically on body shapes reconstructed from 2D images with measurements made automatically on registered body scans of the subjects. Mean Difference (MD) and Mean Absolute Difference (MAD) in millimetres (mm) and Relative MAD (MADrel) expressed as a percentage.

Measurement | MD¹ | MAD | MADrel
Knee height | 9±3 | 10 | 3%
Mid neck girth | -7±4 | 11 | 4%
Chest girth | 4±9 | 21 | 3%
Back armpits contour | 5±8 | 20 | 7%
Seat girth | 0±6 | 12 | 2%
Cervical height | 7±4 | 11 | 1%
Waist girth | -8±7 | 18 | 3%
Arm length | -5±6 | 13 | 3%
Hip height (buttock) | 18±4 | 19 | 3%
Crotch height | 24±6 | 25 | 4%
Front neck height | 8±5 | 13 | 1%
Neck base girth | -3±4 | 11 | 3%
Head girth | -17±8 | 23 | 4%
Chest Breadth | 4±5 | 14 | 6%
Frontal armpits contour | 11±7 | 19 | 7%
Bi-nipple distance | -3±4 | 9 | 6%
Hip girth (buttock) | -2±6 | 13 | 2%
Belly girth | -3±10 | 22 | 4%
Distance neck-hip | -11±4 | 13 | 3%
Shoulder width | 24±8 | 28 | 9%
Shoulder length | 12±4 | 14 | 13%
Neck to breast point | -3±5 | 11 | 6%
Scye depth | 7±5 | 12 | 10%
Back waist length | 8±7 | 18 | 6%
Crotch length front | -22±5 | 24 | 9%
Crotch length rear | -27±5 | 28 | 10%
7CV to wrist length | 4±7 | 19 | 3%
Upper arm length | -9±5 | 12 | 5%
Forearm length | 3±3 | 8 | 5%
Upper arm girth | 0±3 | 8 | 3%
Wrist girth | -6±3 | 8 | 6%
Inseam | 23±6 | 24 | 4%
Outside leg length | 1±6 | 13 | 2%
Thigh girth | 4±7 | 14 | 4%
Knee girth | -1±3 | 8 | 3%
Ankle girth | 9±5 | 12 | 6%

In order to obtain an indicative measure of the computational performance of our implementation of the data-driven 3D reconstruction method from 2D images, the outlines of 11 of the 34 children were randomly selected and the processing times were measured on two different servers: our virtual test server (2.8 GHz, 2 cores and 4 threads; 3 GB RAM) and a dedicated professional server provided by a third-party supplier (2.6 GHz, 16 cores, 32 threads; 128 GB RAM). The average times were 150 s on our test server and 25 s on the professional one.

¹ Mean Difference (MD) ± half of the Confidence Interval (CI) at 95%.

4 Discussion

This paper presents a new method for the 3D reconstruction of human bodies from two
photographs that can be implemented on a smartphone. Compared to other methods, our 3D reconstruction does not require a prior camera calibration because an initial calibration estimated from the sensor information of the smartphone (orientation and camera parameters) is optimised during the reconstruction process.
Moreover, it is applied to the reconstruction of the 3D representations of children and
includes the modelling of the space of body shape. The method was tested with synthetic
models, mini-figurines and real children covering a wide variety of ages, body sizes, and body
shapes.

Perceptual accuracy of the 3D reconstructions is satisfactory in the three studies conducted (Figures 10, 13 and 15), providing clearly recognisable 3D body shapes. In particular, the 3D
reconstructed bodies of real children reflected their natural pose.

The study of accuracy with synthetic children enabled us to isolate and draw conclusions on
the optimisation of the 3D reconstruction step (Figure 1). This study revealed some
limitations of the proposed method. The bias in the measurements (Table 1) is probably due to
an insufficient correspondence between the feature points in the 2D images and the landmarks
used in the mesh registration process. This bias could be corrected in order to improve the
accuracy of the results. The highest MD corresponds to the 7CV to wrist length, which is defined over the surface of the shoulder and the arm. Artefacts of the learned database related to the
arm roll, elbow flexion and location of the acromion features may have affected the accuracy
of this measurement. Our accuracy study with digitally extracted silhouettes followed a
similar methodology to that used by Boisvert et al. (2013), though we used synthetic models
of children instead of actual scans of adults. Our method provided better MAD results, except
for head girth and shoulder width.

Second, our precision study followed a similar methodology to that used by Lu and Wang
(2010) but used six (child) bodies of different ages situated in the boundaries of the shape
space (Figures 11 and 13) instead of a single average (adult) manikin. We also used 1:10 scale
figurines instead of life-size models. The use of 1:10 scale figurines was possible because the
height of the photographed subject is the image-calibrating element in the 3D reconstruction
process. The measurement errors in this study (Table 2) indicate that the body area that
concentrates the lowest precision is also the shoulder region, as for the study of accuracy with
synthetic children. This tendency is also observed in the MAD values for high-resolution
body scanners, where the less precise measurements are shoulder width (13mm) and scye
depth (7mm). These results are in accordance with the study of landmark errors and accuracy
of traditional methods performed by Kouchi and Mochimaru (2011) where the acromion and
armpits were the regions that concentrated the lowest accuracy values in landmark location
and derived measurements.

We also compared our results to those of other studies of the precision of body measurements
derived from high resolution 3D scanners (Table 2), one with a life-size manikin (Lu and
Wang, 2010) and four with human subjects (Bradtmiller and Gross, 1999; Dekker, 2000;
Lu and Wang, 2010; Robinette and Daanen, 2006). As expected, the MAD values of the nine
comparable body measurements are slightly higher in our experiment. Nevertheless, our
results are very close to those of the reference studies with high resolution scanners, where
differences range from 1mm (cervical height, knee height, and arm length) to 5mm (waist
girth). In addition, the results would be expected to improve with life-size mannequins
because the 1:10 scale of the figurines introduces a 10 times higher camera orientation error in relation to the orthogonality of the front and side views. Due to the variability introduced by breathing, soft tissue and posture, the error would be expected to increase with real subjects, as it does from the manikin study of Lu and Wang (2010) to the four studies with human subjects; namely, by 0-2.5 mm for waist girth, 4-5 mm for chest girth, 0.5-4 mm for hip girth and 0-3 mm for arm length.

Regarding 3D shape consistency in the figurine study (Figure 12), the regions around the
crotch and the tip of the head are where the highest errors per vertex are concentrated. These
areas are more strongly affected by image artefacts due to contrast, lighting conditions and
occlusions. Furthermore, the fitting of the guiding outline is more difficult to achieve here.
The higher surface-to-surface error at the arms, which increases towards the fists (reaching
circa 6 mm), may indicate that the arm limb alignment between the 3D reconstructions is not
optimal due to slight differences in shoulder abduction/adduction. Both surface-to-surface
errors are also affected by the camera orientation error of this study.

Finally, an accuracy study was conducted with real children comparing our method with a
high-resolution 3D scanner (Table 3). The MD values of the 8 selected measurements lie
within ±1 cm, which could be a suitable range for made-to-measure, fit or sizing of wearables.
Table 3 and Figure 14 show a clear bias in the estimation of several measurements, which could
be potentially corrected. This will be especially relevant for those biased measurements that
have lower accuracy such as shoulder width, shoulder length and crotch lengths. Analogously
to the other studies conducted, the shoulder region showed poorer results. This effect might be
increased in this study because the posture in the side photograph is slightly different to the
posture in the front one and in the body shape model (and thus in frontal and side projections
during optimisations) introducing a slight matching error due to differences in shoulder
posture (adduction/abduction, flexion/extension and rotation).

There is extensive literature assessing the measurement error between traditional anthropometry and scan-derived measurements (Han et al., 2010; Lu et al., 2010; Lu and Wang, 2010; Dekker, 2000; Bradtmiller and Gross, 1999) or measurements taken by untrained subjects (Yoon and Radwin, 1994; Verweij et al., 2013). Since we did not assess our method against traditional anthropometry, there are no directly comparable reference values in the literature. The MAD values obtained in our experiment, which compared 3D reconstructions against a 3D scanner, are of the same order of magnitude as those of 3D scanners against expert measurements (Table 3), but considerably lower than non-expert errors in all cases.

Since the calibrating parameter of the process is self-reported height, users were warned in the
prototype app that the reliability of the reconstruction depends on the accuracy of the height
value they introduced. In our experiment with children, no statistically significant differences
were found between the children’s height reported by parents and their height measured by an
expert. However, for other population groups, if the input height is biased, it should be
corrected.

The results of the running time of the 3D reconstruction algorithms (circa 25 seconds) showed
that our method is computationally efficient. Nevertheless, our implementation is not optimised for efficiency and there is still margin for improvement, for instance by reducing the resolution of the template mesh used.

5 Conclusions

The data-driven 3D reconstruction of human bodies using a mobile phone app has been
proposed, implemented and demonstrated. Moreover, we determined the precision and the
accuracy (compared to a high-resolution 3D body scanner) of the prototype app.

This work has proved that it is feasible to provide realistic and perceptually accurate 3D
reconstructions of full bodies of children using a mobile phone app equipped with image
processing and 3D reconstruction software.

Despite the fact that the precision is slightly lower than that of high-end body scanners, it
can be acceptable for applications such as size recommendation, bespoke and made-to-
measure wearables (e.g. clothes, protective equipment or orthotics). Additional
experimental studies will, however, be necessary to determine the precision of the proposed method with human subjects and to determine its accuracy compared with traditional anthropometry, so that our method can be positioned in relation to the literature (Bradtmiller and Gross, 1999; Dekker, 2000; Han et al., 2010; Lu and Wang, 2010; Lu et al., 2010; Paquette et al., 2000) or to the maximum allowable errors (Table 4) established for anthropometric surveys (Gordon et al., 1989; ISO 20685:2010) or for the fit and sizing of apparel (Bradtmiller and Gross, 1999).

Moreover, our solution can contribute to spreading the digitalisation of 3D bodies to any
home or point-of-sale, in particular by overcoming the barriers related to price, dedicated
space, availability and usability of the body measuring hardware.

These three aspects—the good precision of the measurements, the realistic body shape
representation and the possibility of using it at home—make the methods proposed
potentially suitable as user data input for size advice and online fit simulations of wearables
(Ballester et al., 2015a; D’Apuzzo, 2006; Gill, 2015), either as body measurements or even as
3D models. In this sense, the resulting 3D models are dense, homologous and watertight
representations of the human body which make it possible to develop interfaces to transfer the
geometry of the model efficiently and accurately to mesh topologies or models compatible
with the applications.

Nevertheless, a further experimental study with a sample of additional population groups, including children and adults from different regions of the world, would be necessary to demonstrate the potential for the international application of this technology.

Acknowledgements
The authors would like to thank their colleagues Begoña Mateo, Juan Carlos González, Silvia San Jerónimo and
María Sancho for their participation in proposal writing, technology implementation and conduction of the user
testing.

Table 4. MAE for anthropometric studies with traditional methods established by ANSUR (Gordon et al., 1989), MAE between measurements extracted from 3D scans and traditionally measured values (ISO 20685:2010), and MAE estimated by tailors for fit and fashion applications (Bradtmiller and Gross, 1999). MAE values in mm

Measurement | MAE (Gordon et al., 1989) | MAE (ISO 20685:2010) | MAE (Bradtmiller and Gross, 1999)
Knee height | 6 | 4 |
Mid neck girth | 6 | 4 | 6.4
Chest girth | 15 | 9 | 12.7
Back armpits contour | 10 | 5 |
Seat girth | 12 | 9 | 12.7
Cervical height | 7 | 4 |
Waist girth | 11 | 9 | 12.7
Arm length | 6 | 5 | 12.7
Hip height (buttock) | 7 | 4 |
Crotch height | 10 | 4 |
Front neck height | 5 | 4 |
Neck base girth | 11 | 4 |
Head girth | 5 | 4 |
Chest Breadth | 8 | 5 |
Frontal armpits contour | 10 | 5 |
Bi-nipple distance | 10 | 5 |
Hip girth (buttock) | 12 | 9 | 12.7
Belly girth | 12 | 9 |
Distance neck-hip | 14 | 5 |
Shoulder width | 8 | 5 |
Shoulder length | 4 | 5 |
Neck to breast point | 8 | 5 |
Scye depth | 4 | 5 |
Back waist length | 5 | 5 |
Crotch length front | 27 | 5 |
Crotch length rear | 11 | 5 |
7CV to wrist length | 10 | 5 |
Upper arm length | 4 | 5 |
Forearm length | 6 | 5 |
Upper arm girth | 8 | 4 |
Wrist girth | 5 | 4 |
Inseam | 10 | 5 | 12.7
Outside leg length | 13 | 5 |
Thigh girth | 6 | 4 |
Knee girth | 4 | 4 |
Ankle girth | 4 | 4 |

References

3DMDbody. [Online] http://www.3dmd.com/3dmd-systems/#body (accessed 28 June 2016)


Alemany, S., González, J. C., Nácher, B. Soriano, C., Arnáiz, C. and Heras, A., (2010)
‘Anthropometric Survey of the Spanish Female Population Aimed at the Apparel Industry’ in
Proc. of 1st International Conference on 3D Body Scanning Technologies, Lugano, Switzerland.
Alemany, S., Ballester, A., Parrilla, E., Uriel, J., González, J., Nácher, B., González, J.C. and Page, A. (2013) 'Exploitation of 3D body databases to improve size selection on the apparel industry', in Proc. of 4th International Conference on 3D Body Scanning Technologies, Long Beach, CA, USA, November 2013.
Allen, B., Curless, B., and Popović, Z. (2003) ‘The space of human body shapes: reconstruction and
parameterization from range scans’ in ACM transactions on graphics, Vol. 22, No. 3, pp. 587-
594.
Amberg, B., Romdhani, S., and Vetter, T. (2007): “Optimal step nonrigid icp algorithms for surface
registration”, in Computer Vision and Pattern Recognition, 2007, IEEE Conference.
Ballester, A., Parrilla, E., Vivas, J. A., Piérola, A., Uriel, J., Puigcerver, S. A., Piqueras, P., Solves-
Camallonga, C., Rodríguez, M., González, J. C., and Alemany S. (2015a) ‘Low-Cost Data-Driven
3D Reconstruction and its Applications’, in 6th International Conference on 3D Body Scanning
Technologies, Hometrica Consulting, Lugano, Switzerland.
Ballester, A., Valero, M., Nácher, B., Piérola, A., Piqueras, P., Sancho, M., Gargallo, G., González, J.
C., and Alemany S. (2015b), ‘3D Body Databases of the Spanish Population and its Application
to the Apparel Industry’ in 6th International Conference on 3D Body Scanning Technologies,
Hometrica Consulting, Lugano, Switzerland.
Blanz, V., and Vetter, T. (1999) ‘A morphable model for the synthesis of 3D faces’ in SIGGRAPH 99:
Proceedings of the 26th annual conference on Computer graphics and interactive techniques, Los
Angeles, CA, USA, pp. 187-194.
Boisvert, J., Shu, C., Wuhrer, S., and Xi, P. (2013) ‘Three-dimensional human shape inference from
silhouettes: Reconstruction and validation’, Machine vision and applications, Vol. 24 No. 1, pp.
145-157.
Bong, Y. B., Merican, A. F., Azhar, S., Mokhtari, T., Mohamed A. M., Shariff A. A. (2014) 'Three-Dimensional (3D) Anthropometry Study of the Malaysian Population', in 5th International Conference on 3D Body Scanning Technologies, Hometrica Consulting, Lugano, Switzerland.
Bookstein, F.L. (1991) Morphometric Tools for Landmark Data: Geometry and Biology, Cambridge
University Press, Cambridge, UK.
Botspot by Botspot GmbH, [online] http://www.botspot.de/ (accessed 15 December 2016)
Bougourd, J. (2005) ‘Measuring and shaping a nation: SizeUK’, in Int Conf on Recent Advances in
Innovation and Enterprise in Textiles and Clothing, Marmaris University, Istanbul, Turkey.
Bradtmiller, B., & Gross, M. E. (1999). 3D whole body scans: measurement extraction software
validation (No. 1999-01-1892). SAE Technical Paper.
Charoensiriwath S. and Tanprasert C. (2010) ‘An Overview of 3D Body Scanning Applications in
Thailand’, 1st International Conference on 3D Body Scanning Technologies, Lugano, Switzerland
Chen, Y., and Cipolla, R. (2009) ‘Learning shape priors for single view reconstruction’ In Computer
Vision Workshops, IEEE 12th International Conference, pp. 1425-1432.
Cools, J., de Raeve, A., and Bossaer, H. (2014) ‘The use of 3D anthropometric data for morphotype
analysis to improve fit and grading techniques’, in 5th International Conference on 3D Body
Scanning Technologies, Hometrica Consulting, Lugano, Switzerland.
D’Apuzzo, N. (2006) ‘Overview of 3D surface digitization technologies in Europe’. In Proceedings
SPIE, Vol. 6056, No. 605605, pp. 1-13.
Daanen, H. A., and Ter Haar, F. B. (2013) ‘3D whole body scanners revisited’, Displays, Vol. 34, No.
4, pp. 270-275.
Daanen, H. M., and van de Water, G. J. (1998) ‘Whole body scanners’, Displays, Vol. 19, No. 3, pp.
111-120.
Dekker, L. D. (2000) ‘3D human body modelling from range data’ PhD thesis, Doctoral dissertation,
University of London, London, United Kingdom.

Dekker, L., Douros, I., Buston, B. F., and Treleaven, P. (1999). Building symbolic information for 3D
human body modeling from range data. In 3-D Digital Imaging and Modeling, 1999.
Proceedings. Second International Conference on (pp. 388-397). IEEE.
DITUS MC from Human Solutions GmbH. [online] http://www.human-solutions.com/ (accessed 28
June 2016)
Fit3D. [Online] http://www.fit3d.com/
Geladi, P., and Kowalski, B. R. (1986). Partial least-squares regression: a tutorial. Analytica chimica
acta, 185, 1-17.
Gill, S. (2015) ‘A review of research and innovation in garment sizing, prototyping and fitting’,
Textile Progress, 47:1, 1-85, DOI: 10.1080/00405167.2015.1023512
Gordon, C. C., Bradtmiller, B., Churchill, T., Clauser, C. E., McConville, J. T., Tebbetts, I. O., and
Walker, R. A. (1989). ‘1988 Anthropometric Survey of US Army Personnel: Methods and
Summary Statistics’, Natick, MA: US Army Natick Research. Development and Engineering
Center.
Gordon C. C., Blackwell C. L., Bradtmiller B., Parham J. L., Hotzman J., Paquette S. P., Corner B. D.,
Hodge B. M. (2011) ‘2010 Anthropometric Survey of Marine Corps Personnel: Methods and
Summary Statistics’ NATICK/TR-11/017. Natick, MA: U.S. Army Natick Research,
Development, and Engineering Center.
Gordon C. C, Blackwell C. L, Bradtmiller B., Parham J. L., Barrientos P., Paquette S. P., Corner B.
D., Carson J. M., Venezia J. C., Rockwell, B. M., Muncher M., and Kristensen S. (2015) ‘2010-
2012 Anthropometric Survey of US Army Personnel: Methods and Summary Statistics’,
NATICK/TR-15/007. Natick, MA: U.S. Army Natick Research, Development, and Engineering
Center.
Gower, J. C. (1975). Generalized procrustes analysis. Psychometrika, 40(1), 33-51.
Guan, P., Weiss, A., Balan, O. and Black M. J. (2009) ‘Estimating human shape and pose from a
single image’, in International Conference on Computer Vision.
Han, H., Nam, Y. and Choi, K. (2010) ‘Comparative analysis of 3D body scan measurements and
manual measurements of size Korea adult females’, International Journal of Industrial
Ergonomics, Vol. 40, No. 5, pp.530–540.
Hasler, N., Ackermann, H., Rosenhahn, B., Thormahlen, T., and Seidel H.P. (2010) ‘Multilinear pose
and body shape estimation of dressed subjects from image sets’, In Conference on Computer
Vision and Pattern Recognition, San Francisco, CA, USA.
Hasler, N., Stoll, C., Sunkel, M., Rosenhahn, B. and Seidel H.P. (2009) ‘A statistical model of human
pose and body shape’ In P. Dutré and M. Stamminger, editors, Computer Graphics Forum,
volume 2.
IIIDbody from 4DDynamics. [online] http://www.4ddynamics.com/ (accessed 28 June 2016)
Intellifit from Intellifit pss, [online] http://intellifitpss.com/ (accessed 15 December 2016)
International Organisation for Standardisation (2008) ISO 7250-1:2008 “Basic human body
measurements for technological design” - Part 1: Body measurement definitions and landmarks.
International Organisation for Standardisation (1989) ISO 8559:1989 Garment construction and
anthropometric surveys-Body dimensions.
International Organisation for Standardisation (2010) ISO 20685:2010 3-D scanning methodologies
for internationally compatible anthropometric databases
Istook, C. L. (2008) ‘Three-dimensional body scanning to improve fit, in Advances in Apparel
Production’, C. Fairhurst, ed.,Woodhead Publishing, Cambridge, 2008.

Kinect from Microsoft. [online] https://developer.microsoft.com/en-us/windows/kinect (accessed 28 June 2016)
Kouchi, M. (2014). Anthropometric methods for apparel design: body measurement devices and techniques. In Anthropometry, Apparel Sizing and Design (pp. 67-94). Elsevier.
Kouchi, M., and Mochimaru, M. (2011) ‘Errors in landmarking and the evaluation of the accuracy of
traditional and 3D anthropometry’, Applied ergonomics, Vol. 42, no. 3, pp. 518-527.
Kouchi, M. and Mochimaru, M. (2005) 'Causes of the measurement errors in body dimensions derived from 3D body scanners: differences in measurement posture', Anthropological Science (Japanese Series), Vol. 113, pp. 63-75.
Kulkarni, D., Ranjan, S., Chitodkar, V., Gurjar, V., Ghaisas, C. V., and Mannikar, A. V. (2011) ‘SIZE
INDIA-Anthropometric Size Measurement of Indian Driving Population’ SAE Technical Paper
No. 2011-26-0108.
Lin, J. D., Chiou, W. K., Weng, H. F., Tsai, Y. H., & Liu, T. H. (2002). Comparison of three-
dimensional anthropometric body surface scanning to waist–hip ratio and body mass index in
correlation with metabolic risk factors. Journal of clinical epidemiology, 55(8), 757-766.
Lin, Y. L., and Wang, M. J. J. (2011). Automated body feature extraction from 2D images. Expert
Systems with Applications, 38(3), 2585-2591.
Lin, Y. L., and Wang, M. J. J. (2012). Constructing 3D human model from front and side images.
Expert Systems with Applications, 39(5), 5012-5018.
Lu, J. M., and Wang, M. J. J. (2010). The evaluation of scan-derived anthropometric
measurements. Instrumentation and Measurement, IEEE Transactions on, 59(8), 2048-2054.
Lu, J. M., Wang, M. J. J., and Mollard, R. (2010). The effect of arm posture on the scan-derived
measurements. Applied ergonomics, 41(2), 236-241.
Paquette, S., Brantley, J. D., Corner, B. D., Li, P., and Oliver, T. (2000). Automated extraction of
anthropometric data from 3D images. In Proceedings of the Human Factors and Ergonomics
Society Annual Meeting (Vol. 6, p. 727). Human Factors and Ergonomics Society.
Parrilla, E., Ballester, A., Solves-Camallonga, C., Nácher, B., Puigcerver, S.A., Uriel, J., Piérola, A.,
González, J.C. and Alemany, S. (2015). Low-cost 3D foot scanner using a mobile app. Footwear
Science, Vol. 7, Iss. sup1, 2015.
RealSense from Intel. [online] http://www.intel.es/content/www/es/es/architecture-and-
technology/realsense-overview.html (accessed 28 June 2016)
Robinette, K. M., and Daanen, H. A. (2006) ‘Precision of the CAESAR scan-extracted
measurements’, Applied Ergonomics, Vol. 37, No. 3, pp. 259-265.
Robinette, K. M., Daanen, H.M. and Paquet E. (1999) ‘The CAESAR project: a 3-D surface
anthropometry survey’, in 3-D Digital Imaging and Modeling, Proceedings. Second International
Conference on. IEEE, 1999.
Rother, C., Kolmogorov, V., and Blake, A. (2004) Grabcut: Interactive foreground extraction using
iterated graph cuts. In ACM transactions on graphics, Vol. 23, No. 3, pp. 309-314.
Saito, S., Kouchi, M., Mochimaru, M., and Aoki, Y. (2011) 'Body Trunk Shape Estimation from
Silhouettes by Using Homologous Human Body Model’, In Proceedings of the 2nd International
Conference on 3D Body Scanning Technologies, Lugano, Switzerland.
Seidl, A., Trieb, R., and Wirsching, H. (2009) 'SizeGERMANY - the new German anthropometric survey: conceptual design, implementation and results', in Proceedings of the 17th World Congress on Ergonomics.

Seo, H., and Magnenat-Thalmann, N. (2003) ‘An automatic modeling of human bodies from sizing
parameters’ in Proceedings of the 2003 Symposium on Interactive 3D Graphics, pp 19–26,
Monterey, CA, USA.
Seo, H., Yeo, Y. I., & Wohn, K. (2006). 3D body reconstruction from photos based on range scan. In
Technologies for e-learning and digital entertainment (pp. 849-860). Springer Berlin Heidelberg.
Shapify from ARTEC. [online] https://www.artec3d.com/es/hardware/shapifybooth (accessed 28 June
2016)
Shu, C., Xi, P., & Keefe, A. (2015) ‘Data processing and analysis for the 2012 Canadian Forces 3D
anthropometric survey’, Procedia Manufacturing, 3, 3745-3752.
SizeStream. [online] http://www.sizestream.com/ (accessed 28 June 2016)
Structure sensor for iPad. [online] http://structure.io/ (accessed 28 June 2016)
Styku. [online] http://www.styku.com/bodyscanner (accessed 28 June 2016)
Sumner, R. and Popovic, J., (2004) ‘Deformation Transfer for Triangle Meshes’, SIGGRAPH.
Symcad from Telmat Indutries. [online] http://www.telmat.com/activites_vision.php (accessed 28 June
2016)
TC2-19B from [TC]² Labs. [online] http://www.tc2.com/tc2-19b-3d-body-scanner.html (accessed 28
June 2016)
TC2-19R from [TC]² Labs. [online] http://www.tc2.com/tc2-19r-mobile-scanner.html (accessed 28
June 2016)
Texel from Texel Inc. [online] http://texel.graphics/ (accessed 15 December 2016)
Treleaven, P., & Wells, J. C. K. (2007). 3D body scanning and healthcare applications. Computer,
40(7), 28-34.
Verweij, L.M., Terwee, C.B., Proper, K.I., Hulshof, C.T. and van Mechelen, W. (2013) ‘Measurement
error of waist circumference: gaps in knowledge’, Public health nutrition, Vol. 16, No. 02,
pp.281–288.
VITUS bodyscan from Human Solutions. [online] http://www.human-
solutions.com/fashion/front_content.php?idcat=813&lang=7 (accessed 28 June 2016)

Wang, J., Gallagher, D., Thornton, J. C., Yu, W., Horlick, M., & Pi-Sunyer, F. X. (2006). Validation
of a 3-dimensional photonic scanner for the measurement of body volumes, dimensions, and
percentage body fat. The American journal of clinical nutrition, 83(4), 809-816.

Wold, H. (1985). Partial least squares. Encyclopedia of statistical sciences.


Wuhrer, S., and Shu, C. (2012) ’Estimating 3D human shapes from measurements’ Machine vision
and applications, Vol. 24, No. 6, pp. 1133-1147.
Xtion from Asus. [online] https://www.asus.com/3D-Sensor/Xtion/ (accessed 28 June 2016)
Yoon, J.C. and Radwin, R.G. (1994) 'The accuracy of consumer-made body measurements for women's mail-order clothing', Human Factors: The Journal of the Human Factors and Ergonomics Society, Vol. 3, No. 3, pp. 557-568.
Zhu, C., Byrd, R. H., Lu, P., and Nocedal, J. (1997). Algorithm 778: L-BFGS-B: Fortran subroutines
for large-scale bound-constrained optimization. ACM Transactions on Mathematical Software
Vol. 23, No. 4, pp. 550-560.
