
Review
Facial Scanning Accuracy with Stereophotogrammetry and
Smartphone Technology in Children: A Systematic Review
Vincenzo Quinzi 1, Alessandro Polizzi 1,2,*, Vincenzo Ronsivalle 2, Simona Santonocito 2,
Cristina Conforte 2, Rebecca Jewel Manenti 1, Gaetano Isola 2,* and Antonino Lo Giudice 2

1 Department of Life, Health & Environmental Sciences, Postgraduate School of Orthodontics, University of L'Aquila, 67100 L'Aquila, Italy
2 Department of General Surgery and Surgical-Medical Specialties, School of Dentistry, University of Catania, 95124 Catania, Italy
* Correspondence: alexpoli345@gmail.com (A.P.); gaetano.isola@unict.it (G.I.); Tel.: +39-095-378-2698 (G.I.)

Abstract: The aim of the study was to systematically review and compare the accuracy of smartphone scanners versus stereophotogrammetry technology for facial digitization in children. A systematic literature search of articles published from 1 January 2010 to 30 August 2022 was carried out using a combination of MeSH terms and free-text words pooled through Boolean operators in the following databases: PubMed, Scopus, Web of Science, Cochrane Library, LILACS, and OpenGrey. Twenty-three articles met the inclusion criteria. Stationary stereophotogrammetry devices showed a mean accuracy ranging from 0.087 to 0.860 mm, portable stereophotogrammetry scanners from 0.150 to 0.849 mm, and smartphones from 0.460 to 1.400 mm. Regarding the risk of bias assessment, fourteen papers showed an overall low risk, three articles an unclear risk, and four articles a high risk. Although smartphones performed worse on deep and irregular surfaces, all the analyzed devices were sufficiently accurate for clinical application. Internal depth-sensing cameras, or external infrared structured-light depth-sensing cameras plugged into smartphones/tablets, increased the accuracy. These devices are portable and inexpensive but require greater operator experience and patient compliance because of the longer acquisition time. Stationary stereophotogrammetry remains the gold standard, offering greater accuracy and a shorter acquisition time that avoids motion artifacts.

Keywords: accuracy; 3D facial scanning; smartphones; stereophotogrammetry

1. Introduction


Facial acquisition and 3D imaging are useful in many fields of medicine, such as maxillo-facial surgery, the production of prostheses, forensic medicine, and orthodontics [1–3]. The precise acquisition of 3D facial scans, incorporated with dental design software, may improve treatment planning predictability [1,4–6]. Facial anthropometry, which used calipers and protractors to measure indices, was traditionally employed for facial acquisition [7]. However, despite its simplicity, facial anthropometry is operator-dependent and time-consuming, and can cause discomfort to patients [8].
2D digital photography may be useful during dental treatment planning, for example,
to visualize some measurements used to communicate with the patients. However, it is not
possible to simulate a 3D object such as the human face; therefore, 2D photography is not
suitable for the detection of facial volumes, deformities, and asymmetries [4,8,9].
Modern technologies have overcome the limitations of direct anthropometry and 2D photography. Recently, 3D optical facial scanners have been commercialized for the acquisition of 3D patient facial images [10]. A facial scanner may provide a reliable method of facial digitization to create a virtual patient for treatment planning. The different 3D facial scanning technologies can be classified as follows: photogrammetry, stereophotogrammetry [11–16], laser beam scanning [9,17,18], structured light scanning [19,20], and dual structured light with an infra-red sensor. Photogrammetry and stereophotogrammetry are passive scanning systems that take two or more photos from different perspectives; the 3D point cloud is then obtained from the common points through a reverse-engineering software program. In contrast, laser beam scanning and structured light scanning are active scanning systems: light patterns projected onto the patient are detected by a camera to obtain the 3D point cloud [19–23].
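As a minimal sketch of the passive approach described above (not the pipeline of any specific reviewed scanner), the following Python snippet recovers one 3D point from a pair of corresponding image points in two calibrated views via linear (DLT) triangulation; the projection matrices, the function name, and the toy point are all invented for the demonstration.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2  : 3x4 camera projection matrices.
    uv1, uv2: matching 2D image coordinates of the same facial feature.
    """
    x1, y1 = uv1
    x2, y2 = uv2
    # Each image point contributes two linear constraints on the homogeneous 3D point.
    A = np.stack([
        x1 * P1[2] - P1[0],
        y1 * P1[2] - P1[1],
        x2 * P2[2] - P2[0],
        y2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)  # solution: right singular vector of the smallest singular value
    X = vt[-1]
    return X[:3] / X[3]          # de-homogenize

# Toy setup: two normalized cameras, the second translated 1 unit along x.
P1 = np.array([[1.0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]])
P2 = np.array([[1.0, 0, 0, -1], [0, 1, 0, 0], [0, 0, 1, 0]])
X_true = np.array([0.5, 0.2, 4.0])       # a made-up surface point
uv1 = X_true[:2] / X_true[2]             # its image in camera 1
uv2 = (X_true[:2] - [1, 0]) / X_true[2]  # its image in camera 2
print(triangulate(P1, P2, uv1, uv2))     # -> approximately [0.5 0.2 4.0]
```

Repeating this step for every matched feature yields the 3D point cloud; commercial systems add camera calibration, dense matching, and meshing around it.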
Stereophotogrammetry has been one of the most widely employed face-scanning systems [24]: it is fast (rapid capture of the face's shape and texture), non-invasive, and produces accurate 3D images [4,25,26]. However, most professional stereophotogrammetry scanners are complex and expensive, and often require a long learning curve to perform the scanning protocols optimally [27–30]. Recently, interest has been growing in the use of smartphones with 3D depth-sensor cameras that rely on structured light or time-of-flight technology for face scanning. These devices have the advantage of being portable, inexpensive, and widespread [31–34]. Other advantages of smartphone scanning include reduced time for image processing and a shorter technical learning curve [35]. Different smartphone applications have been created for face digitalization in order to obtain 3D facial models transferable to dental computer-aided design (CAD) software [31,33].
Facial scanning is more challenging in children for several reasons. First of all, they may be uncooperative patients: an optimal facial scan requires the subject to remain still for the entire acquisition time, which may not be easy for a child. Furthermore, in the case of syndromic patients, some current technologies may have greater difficulty capturing specific facial deformities. All these factors could affect the accuracy of the facial scan; therefore, new technologies are needed that overcome these difficulties, shortening the acquisition time while improving diagnostic accuracy. In this regard, close-range photogrammetry integrated with machine learning has been proposed and could be promising for creating 3D facial models of children [36].
The accuracy of these systems is still under evaluation; therefore, the aim of this study was to systematically review and compare, in light of current knowledge, the accuracy of smartphone scanners versus stereophotogrammetry technology for facial digitization.

2. Materials and Methods


2.1. Protocol, Registration and Search Strategy
This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [37] and was registered in the PROSPERO database (registration number: CRD42021241229). The population, intervention, comparison, and outcomes (PICO) question was formulated as follows: are extra-oral scans for the reproduction of a virtual facial model (P) obtained by smartphone technology (I) comparable to those obtained by stereophotogrammetry (C) in terms of accuracy (O)? The search strategy involved a combination of MeSH terms and free-text words pooled through Boolean operators ('AND', 'OR') and was performed on the following databases: PubMed, Scopus, Web of Science, Cochrane Library, LILACS, and OpenGrey (Table 1).

2.2. Inclusion, Exclusion Criteria and Outcomes


The studies included in this systematic review were: randomized and non-randomized controlled trials, cohort studies, case-control studies, cross-sectional studies, retrospective studies, and theses, published in English between 1 January 2010 and 30 August 2022. Case reports, opinion articles, and reviews were excluded. The population of interest involved human faces or objects reproducing the human face, in particular 3D virtual facial models obtained from optical facial scanners based on stereophotogrammetry or smartphone technology; studies using only 2D model images were excluded. The main outcome of this systematic review was the accuracy of facial measurements obtained by stereophotogrammetry versus smartphone application scans. Accuracy is intended as the discrepancy between the virtual facial model obtained through the scanner and a reference model; the deviation was measured in terms of inter-landmark linear distances or surface-to-surface deviations.
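As an illustrative sketch of these two outcome measures (assuming the scan and the reference model are already registered in a common coordinate frame, are expressed in millimetres, and approximating the point-to-surface distance with the nearest-neighbor point distance), the snippet below computes an inter-landmark discrepancy and a surface-to-surface RMSE; the function names are ours, not taken from the reviewed studies.

```python
import numpy as np
from scipy.spatial import cKDTree

def interlandmark_discrepancy(scan_lms, ref_lms, pairs):
    """Mean absolute difference (mm) of inter-landmark distances
    between the scanned model and the reference model."""
    d_scan = np.array([np.linalg.norm(scan_lms[i] - scan_lms[j]) for i, j in pairs])
    d_ref = np.array([np.linalg.norm(ref_lms[i] - ref_lms[j]) for i, j in pairs])
    return float(np.mean(np.abs(d_scan - d_ref)))

def surface_rmse(scan_pts, ref_pts):
    """Surface-to-surface deviation as the root mean square of
    nearest-neighbor distances from scan vertices to reference vertices."""
    nn_dist, _ = cKDTree(ref_pts).query(scan_pts)
    return float(np.sqrt(np.mean(nn_dist ** 2)))
```

Dedicated metrology software works the same way in principle but first performs a best-fit registration and measures true point-to-triangle distances rather than point-to-vertex ones.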

Table 1. Search strategy in the different databases.

- PubMed (244 results): ("virtual patient" OR "virtual face" OR "digital facial models" OR "3D virtual facial model") AND ("scanner"[MeSH Terms] OR "digital face scan" OR "facial scan" OR "3D face scanning" OR "facial digitalization" OR "stereophotogrammetry" OR "smartphone scanner" OR "smartphone face scanning" OR "smartphone application") AND ("accuracy" OR "trueness" OR "precision" OR "3D comparison").
- Scopus (215 results): ("virtual patient" OR "virtual face" OR "digital facial models" OR "3D virtual facial model") AND ("scanner" OR "digital face scan" OR "facial scan" OR "3D face scanning" OR "facial digitalization" OR "stereophotogrammetry" OR "smartphone scanner" OR "smartphone face scanning" OR "smartphone application") AND ("accuracy" OR "trueness" OR "precision" OR "3D comparison").
- Web of Science (320 results): TS = ((virtual face OR digital facial models OR 3D virtual facial model OR digital face OR face) AND (scanner OR digital face scan OR facial scan OR 3D face scanning OR facial digitalization OR face digitalization OR stereophotogrammetry OR smartphone scanner OR smartphone face scanning OR smartphone application) AND (accuracy OR trueness OR precision OR 3D comparison)).
- Cochrane Library (102 results): "virtual patient" OR "virtual face" OR "digital facial models" OR "3D virtual facial model" AND "scanner" OR "digital face scan" OR "facial scan" OR "3D face scanning" OR "facial digitalization" OR "stereophotogrammetry" OR "smartphone scanner" OR "smartphone face scanning" OR "smartphone application" AND "accuracy" OR "trueness" OR "precision" OR "3D comparison".
- LILACS (25 results): (digital facial models) AND (scanner OR stereophotogrammetry OR smartphone face scanning) AND (accuracy).
- OpenGrey (2 results): (virtual face OR digital facial models OR 3D virtual facial model OR digital face OR face) AND (scanner OR digital face scan OR facial scan OR 3D face scanning OR facial digitalization OR face digitalization OR stereophotogrammetry OR smartphone scanner OR smartphone face scanning OR smartphone application) AND (accuracy OR trueness OR precision OR 3D comparison).
- Total: 908 results.

2.3. Study Selection and Data Extraction


Records identified through the database searches underwent an initial screening (title and abstract evaluation) in which potentially relevant articles were selected. The eligible articles then underwent a full-text review for adherence to the inclusion criteria. The data extracted from the included studies were: authors, year of publication, aim of the study, characteristics of the sample and of the scanning method, standard reference for validation, characteristics of the measurements carried out, and conclusions.
Disagreements between the review authors over the selection of particular articles were
resolved by discussion, with involvement of a third review author, where necessary.

2.4. Risk of Bias Assessment


Two reviewers independently assessed the risk of bias in the included studies using the Quality Assessment Tool for Diagnostic Accuracy Studies-2 (QUADAS-2) [38], which comprises four domains: patient selection, index test, reference standard, and flow and timing. When one or more of the key domains are scored as high risk, the study in question is considered at high risk of bias; when more than two key domains are judged as unclear, the study is regarded as having an unclear risk of bias. Disagreements between the review authors over the risk of bias in particular articles were resolved by discussion, with involvement of a third review author where necessary.
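Expressed as code, the classification rule above reduces to a simple decision function; the sketch below is a literal transcription of the two thresholds (a hypothetical helper, shown only to make the decision logic explicit).

```python
KEY_DOMAINS = ("patient selection", "index test", "reference standard", "flow and timing")

def overall_risk(ratings):
    """ratings: dict mapping each QUADAS-2 key domain to 'low', 'unclear', or 'high'."""
    values = [ratings[d] for d in KEY_DOMAINS]
    if any(v == "high" for v in values):
        return "high"                      # one or more key domains at high risk
    if sum(v == "unclear" for v in values) > 2:
        return "unclear"                   # more than two key domains unclear
    return "low"

print(overall_risk({"patient selection": "low", "index test": "unclear",
                    "reference standard": "low", "flow and timing": "low"}))  # low
```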

3. Results
3.1. Study Selection
The database search led to the identification of 908 articles: 244 from PubMed, 215 from Scopus, 320 from Web of Science, 102 from Cochrane Library, 25 from LILACS, and 2 from OpenGrey (Table 1). Seventy-five duplicates were removed during the title review and, after the abstract screening of the 833 remaining articles, 43 full-text studies were assessed for eligibility. Twenty-three articles were considered eligible for the review analysis, whereas the other 20 papers were excluded for: no clear assessment of accuracy, no stereophotogrammetry or smartphone scanner, no statistical analysis, no English full text, or no facial measurements. The PRISMA flow diagram of the search and evaluation process is illustrated in Figure 1.

Figure 1. PRISMA flow diagram of the searching strategy and results.

3.2. Risk of Bias and Applicability Concerns

Table 2 shows the risk of bias and applicability concerns according to the QUADAS-2 guidelines. Among the 23 articles [39–61], 14 papers showed an overall low risk of bias [40,41,43,45–51,53,56,59–61], 3 articles had an unclear risk of bias [39,44,52,57], and 4 articles displayed a high risk of bias [42,54,55,58]. Concerning applicability, all articles showed an overall low level of concern; however, the domain "index test" most frequently showed an unclear concern because some papers did not sufficiently describe the scanning procedures and the devices used [40,46,55]. Regarding the quality assessment, the domain "patient selection" showed the highest risk of bias because of the exclusion of maxillofacial abnormalities [42,45,54,55,58] (which may result in overestimation of diagnostic accuracy [38]), the small number of participants [40,41,46,51,52], and the unclear method of random sampling [40,51,57]. A weighted bar chart of the risk of bias and applicability concerns of the selected studies is shown in Figure 2.
Table 2. Risk of bias and applicability concerns according to the QUADAS-2 guidelines. Legend: high risk, unclear risk, low risk; domains: patient selection, index test, reference standard, flow and timing. [The per-study color-coded ratings are not recoverable from the extraction. Studies assessed: Akan et al. (2021) [39]; Amornvit et al. (2019) [40]; Aswehlee et al. (2018) [41]; Ayaz et al. (2020) [42]; Chong et al. (2021) [43]; D'Ettorre et al. [44]; Dindaroglu et al. (2016) [45]; Elbashti et al. (2019) [46]; Fourie et al. (2011) [47]; Gibelli et al. (2018) [48]; Gibelli et al. (2018) [49]; Kim et al. (2018) [50]; Liu et al. (2019) [51]; Liu et al. (2021) [52]; Modabber et al. [53]; Nightingale et al. (2020) [54]; Piedra-Cascon et al. (2020) [55]; Ross et al. (2018) [56]; Rudy et al. (2020) [57]; Wang et al. (2022) [58]; Ye et al. (2016) [59]; Zhao et al. (2017) [60]; Zhao et al. (2021) [61].]

Figure 2. Weighted bar chart for the risk of bias and applicability concerns of the selected studies according to the QUADAS-2 guidelines.

3.3. Description of the Included Studies

The included studies' characteristics are shown in Table 3. Among the 23 studies, 17 involved adult volunteers [39,42–45,48–50,53–61] (the number of participants ranged from 5 to 80). The other 6 studies were conducted on impression casts of the face [41,46,51], mannequin heads [40,52], or human cadaver heads [47]; however, 2 studies [48,49] included both volunteers and mannequin heads.

Table 3. Characteristics of the included studies. IC: impression cast; IPX: iPhone X; MH: mannequin head; NA: not applicable; SPG: stereophotogrammetry. Stationary stereophotogrammetry a1 / portable stereophotogrammetry a2; smartphone structured-light scanner b1 / photogrammetry b2; tablet connected to a portable dual-structured-light scanner b3; laser scanner c; structured-light scanner d; computed tomography e; RMSE: root-mean-square error (surface-to-surface). Columns: Study; Aim; Method (sample); Reference for validation; N° of tested landmarks; Measurements (metrics); Conclusions. [Only fragments of the table body survive in the extraction, e.g., an evaluation of a depth-sensor smartphone camera (iPhone X) against a stationary 3dMD face SPG system on 26 participants (16 F, 10 M), using 7 linear distances and 9 angles (mean deviations, RMSE), concluding that depth-sensor smartphone cameras may be employed for 3D facial scanning although the visualization of complex structures needs improvements; the remaining rows are omitted.]
et
QUADAS-2
6et
other
5–80).
[59]al.
guidelines.to
studies
conducted
were
Description
the al.
Rudy
Zhao
guidelines.
the
6Wang
The
wasstudies
(2016)
Included
3.3.
volunteers
andboth were
included
included
et
QUADAS-2
guidelines.
conducted
other
5–80).
on
[59]
of
Description
et
mannequin
al.
according
were
the
Studies
3.3.
al.
volunteers
and
included
both
Zhao
conducted
impression
6both
The
studies
conducted
Included onother
Description
of
mannequin
included
et
guidelines.
was
3.3.
the
heads.
volunteers
and
volunteers
al.
according
to the
on
impression
werecasts
6both
Studies
included
both Includedaccording
QUADAS-2
5–80).
Description
mannequin of to
impression
studies the
heads.
volunteers
and
according
conducted
on of the
The
was
impression
the
casts
3.3.
volunteers
and
both were
Studies
Included
included
to
QUADAS-2
was
other
5–80).
face
mannequin
heads.
mannequin
volunteers
the
of
Description
of the
and
QUADAS-2
guidelines.
caststo
5–80).
conducted
on the
[41,46,51],
the of
6mannequin
and
bothThe
impression
Studiescasts
Included studies
face QUADAS-2
guidelines.
Thethe
other
of
mannequin
heads.
heads. of
volunteers
and other
face
[41,46,51
the on
were
the
Studies
manneq 63.are
guideimp
Includ
Am
in
cast
stu 23
6c1sah
[41
fac
head
Ye et [57]
al. Ye et Piedra-
[46] et
Yeal.
(2017)
al.(2020) et(2017)
[60]
Piedra-
al. (2020)
Ye[60]
(2017) (2017)
[60]
et [57] [60]
al. Figure Ye et
involved al.
involved
adult (2017)
Ye Piedra-
[46] [60]
etadult
involved
volunteers
Figure (2017)
al.heads
involved
2. Ye (2017)
et
adult
volunteers[60]
al. [60]
(2017)
Ye
volunteers et
[39,42–45,48–50,53–61]
adult
involved [60]
al.
volunteers (2017)
Ye
[39,42–45,48–50,53–61]
2.adult
involved et [60]
al.
[39,42–45,48–50,53–61]
volunteers
[39,42–45,48–50,53–61]
(the
involved
adult range volunteers
[39,42–45,48–50,53–61]
(the
involved
adult
of range
the (the range
volunteers
number
[39,42–45,48–50,53–61]
adult
(the
of involved
the range the
volunteers
of
number number
[39,42–45,48–50,53–6
participants
of
adult
(the the of
range
volunteers
[39,42–45,48
number of
participant of(theparti
the
of ra
(2020)
(2021) [61]
(2022) [58] (2022)al. [44]
[58]
(2021)
(2019)
[57]
[61] (2021)
(2022)
(2020)
(2021)
Zhao [61]
[58] et[57]
(2020)
[61]
(2021)
mannequin
al.
Zhao
(2022)
(2022)
2.[57]
Weighted
[61]
mannequin
et
[58]
(2020)
(2021)
heads
al.
[58]
barchart
mannequin
Zhao
Zhao
(2022)
al. [44]Figure
et[57]
[61] al.Weighted
(2020)
(2021)
mannequin
[40,52],
etal.
[58]
(2019) Zhao
(2022)
2. for
etWeighted
[57]
[61]
heads
and the
[40,52],
[58]
Figure
(2020) barchart
mannequin
al.
Zhaoheads risk
human
(2022) [40,52],
and
et barchart
of
[57]
[58]
The al. Weighted
for
bias
Figure
(2020)
(2021)
[40,52], and
cadaver
human
mannequin
Zhao the
and
heads
(2022)
included
for risk
2.
et[57]
[61]
human
and
[58]
barchart
cadaver
heads
[40,52],
al. of
application
the
Figure
Weighted
human
studies’
Figure
risk bias
(2021) 2.
of
cadaver
heads [47].andfor
and
heads 2.
Figure
Weighted
bias
mannequin
cadaver the
Weighted
concerns
barchart
[61]
[40,52], application
However,
humanand
heads
characteristics
risk
2.
[47]. of
Weighted
heads
and barchart
application
for
barchart
of Figure
the
[47].2bias
the
mannequin
However,
cadaver
heads concerns
mannequin
human
are
and
selected
risk
2.
for
barchart
However,
studies
[47]. for
[40,52],
shownheads2application
Weighted
concerns
the bias
However, of
cadaverthe
studies
risk
heads
[48,49]
studies
in
the
for
and2risk
heads
and of
of
[47]. selected
barchart
the the
studies2of
[40,52],
Table heads
humaconce
applica
bias riskbia
sel
[40,
[48,49
How stu
3.
an
(2016) [59] (2018)
Cascon
Fourie
(2016)
Wang et al. Dindaroglu [59] [48]
Zhao
et
(2016)et
al. Zhao
et
Cascon
Wang et al.Wang
al.
[59] Zhao
et
(2016)al.
Zhao
Wang
et al.
[59]
et al.
et
was
3.3.
et
al.
according
(2016)
Wang al.
included 5–80). to
was
[59]
Description
3.3. The
etincluded
Zhao
the Cascon
was
Fourie
5–80).
(2016)
al.Wang
both other
3.3.
Description
of et
according
QUADAS-2
the
included
volunteers
al.
[59]
3.3.
et
both
Zhao
et
according
5–80).
et
was
The6 al.
(2016)
Description
Included
Wang
al.
included
The
toZhao
studiesThe
5–80).
other
Description
of
both
et
the
the
volunteers
and
included
[59]
et
al.
6 et
Zhao
QUADAS-2
guidelines.
to the
other
The
was
were
(2016)
of
Studies
Wang
al.
al.
according
the
Included
3.3.
volunteers
bothmannequin
included
et
QUADAS-2
studies 6
other
5–80). studies’
[59]al.
studies
conducted
Included
Description
of the
et
volunteers
and were
6
Studies Zhao
guidelines.
to
(2016)
Included
al. Wang
and
mannequin
both
the
according
The
wasstudies
included
characteristics
The
Studies
heads. of et
guidelines.
were
conducted
other
5–80).
on
[59]
the
et
mannequin
volunteers
and
al.
QUADAS-2 included
according
wasto
were
Studies
3.3.
al. the
conducted
impression
6 The
Included
mannequin
both
heads.
The
according
studies QUADAS-2
5–80).
Description
included
3.3.
heads.
volunteers
and
studies’
are
guidelines.
according
conducted
onother to
was the
Studies
included on
impression
The
were The
to
casts
6 shown
the
QUADAS-2
5–80).
studies
other
3.3.
Description
mannequin
heads. of
included
characteristics
studies’
QUADAS-2
guidelines.
to
impression
conducted
on of
the
andbothThe
was in
according
the
impression
the
casts
3.3.6 were
Description
Included
included
Table
The
QUADAS-2
studies
other
5–80).
face characteristics
ofstudies’
Description
of
included
mannequin
heads. the
volunteers
included
3.
guidelines.
guidelines.
caststo
conducted
on the
[41,46,51],
the
6
of The
were Among
ofare
impression
Studiescasts
Included
both studies
face
the
both
characteris
shown
QUADAS-2
other
of
heads.
studies’
guidelines.
the face
conducted
of
[41,46,51
Included
the
volunteers
and on
were
the
Studies
voluntee
manneq
the
6 are
imp
Includ
in
cast
stu 23
[41
fac
StucsaA
Ye et al. Ye et al.
[46] Ye (2017)
et al. [60]
Figure (2017)
Ye Ye et2.et
al. [60]
al.
Figure
Weighted (2017)
Dindaroglu
(2017)
Table
Ye et
[46]
Figure
2. [60]
[60]
3.
al. (2017)
involved
2.
Weighted
barchart
Figure Ye et
Weighted
2. [60]
Characteristics
Table al.
for 3.(2017)
adult
barchartCharacteristics
WeightedYeTable
Figure
the et
involved [60]
of
al.3.
volunteers
barchart
riskfor2. the
barchart
ofthe (2017)
Characteristics
included
Table
Table
Ye
adult
involved
for
Weighted
bias
Figure
risk ofet [60]
the
3. 3. studies.
Characteristics
included
[39,42–45,48–50,53–61]
the
andfor
of 2. involved
risk adult
barchart of
application
bias
the
Figure
Weighted Table
of
Characteristics
al. Table
volunteers risk
and the
3. IC: studies.
included
3.
volunteers
bias 2.
of
for impression
Characteristics
Characteristics
Table
involved
adult
and
application
Figure
Weighted
bias of
concerns
barchart
the of3.
andthe
[39,42–45,48–50,53–61]IC:
thestudies.
volunteers
application
risk2. impression
included
cast,
Characteristics
included
Tableof theIC:
[39,42–45,48–50,53–61]
adult
(theinvolved
Weighted
application
of
for
concerns
barchart
of Figure
bias
the
the range of
IPX:
3.
volunteersthe
studies.
impression
iPhone
cast,
studies.
included included
Characteristics
of (thethe
[39,42–45,48–50,53–6
concerns
selected
and
risk
2.
for of
adult
barchart
of Weighted
concernsthe
application
the
the bias ofIPX:
IC:
studies IC:X; impressio
iPhone
cast,
studies.
included
range
volunteersMH:
[39,42–45,48
number
the
selected
risk for
and of
of studies.
impressio
barchart
the of
(the
selected
concern
the
applica
bias IPX
ofMa
IC:
studiethe
of
risk X
stu
th
selra
an i
Zhao Gibelli
(2011) et
(2021)
al.et(2020)
[47] al. (2021)
al.et [61]
(2020) [61]
(2021) (2021)
[61] [61]
al.mannequin mannequin(2021)
al. heads al.
mannequin
(2011) [61]
(2020) (2021)
mannequin
[40,52],
heads
[47] (2021) [61]
heads
and
[40,52], [61]
(2021)
mannequin
heads
human [61]
[40,52],
and (2021)
[40,52], and
cadaver
human
mannequin
heads [61]
human
and cadaver
heads
[40,52],
mannequin
human cadaver
heads [47].and
heads
mannequin
cadaver
[40,52],
However,
human
headsheads
[47]. heads
and [47].
[40,52],
mannequin
However,
cadaver
heads
2Among
human However,
studies
[47].and
[40,52],
heads
However,
2cadaver
human
heads
[48,49]
studies and2studies,
[47]. studies
[40,52],
cadav
heads
huma
[48,49
2How stu
(2022)et[58]
al.
Zhao al.
et al.[59]
Zhao
(2022)
(2016)
al.Zhao
[58] (2022) Zhao
et[58]
(2022) Zhao
[58]
(2022)
et(2016)
al.The
Zhao
et[58]
included
et al.
Zhao
(2022)
The
head;
Zhao
et
Zhao al.the
et
according et
al.
[58]
The
(2016)
et
Zhao
included(2022)
studies’
NA:
al. included
The
head;
al.
Zhao
et
not[58]
et[59]
al.
Zhao
(2022)
characteristics
studies’
included
NA:
al.
to guidelines.
the applicable;
Zhao head;
QUADAS-2
et
studies’
The
not
et[58]al.
NA: Zhao
(2022)
characteristics
studies’
included
are
applicable;
al.to Zhaohead;
head;SPG: shown
not et[58]
characteristics
et
al.
characteristics
studies’
are
applicable;
NA: in shown
The are
Table
stereophotogrammetry.
NA:
head;
al. SPG: head;
notnot shown
characteristics
included
are
3.
in The
Among
Table
shown
stereophotogrammetry.
NA: applicable;
SPG:
NA:
applicable;
head; not not in The
Table
included
studies’
3. in
the
are
stereophotogrammetry.
NA: applicable;
SPG:
applicable;
SPG:
head;
not included
Table
StationaryThe 3.
23
shown Among
characteristics
studies’
studies,
included
3.
the
stereophotogrammetr
applicable;
stereophotogrammetry
NA: SPG: notin
Stationary
SPG: studies’
Among
23 the
Table
characteris
17
Stereophotogra 23
studies’
stereopho
stereophotogr
applicable;
SPG: the
Stationachar
3.
arestu
Stereo Am
ster23 1
s
(2016) [59] (2016)
Fourie
(2018)
[55] et
[49] al. [55](2016) [59]according
3.3. (2016) according
to
[59]
Description [59]the QUADAS-2
(2016)
Fourie
3.3.
of to
[55]
included
according
wasthe
[59]
et
3.3. QUADAS-2
5–80).
(2016)
al.
Description
Included
Description
both
toStudies
according
the
The was
(2016)
of 3.3.
the
volunteers
QUADAS-2
guidelines.
other
of 5–80).
[59]
Included
Description
the 6guidelines.
according
the
was
(2016)
Included
3.3.
and The
studiesQUADAS-2
guidelines.
Studies other
5–80).
[59]
of
Description
mannequin
according
was
the toThe
were
Studies
3.3. the
63.3.
Included QUADAS-2
guidelines.
studies
5–80). according
conducted
other toThe
was
Description
Description
of 3.3.
the the6were
Studies
Included QUADAS-2
5–80).
studies
other
Description
of on guidelines.
to
the according
the
conducted
The
was
impression
ofIPX:the
3.3.
Studies
Included QUADAS-2
6mannequin
were
studies
other
5–80).guidelines.
Included
Description
of the to
on
conducted
Studiesthe
6mannequin
The
were
casts
Included QUADAS-2
guidelines.
impression
studies
Studiesother
of conducted
of
the on
were
the
Studies 6imp
Includstu ca
fac ca
(2017)
Ye et [60] Gibelli
al.(2017) [60]
[45]
et
Yeal.
(2017) et [60]
al. (2017) etYe
Ye (2021) al.etincluded
[60] (2017)
Table
al.Ye
involved
Figure
[61] 3.
et
(2021)
included
both
[60]
al.
involved
adult
Figure
Weighted
[61]
volunteers
Gibelli
(2017)
2.Characteristics
Table Table
3.
Ye etboth
involved included
a1/Portableet
[60]
volunteers
Figure
(2021)
(2021) adult3.
al.
involved
2.
Weighted
barchart
[45]Figure
[61]
[61]
volunteers
al.
(2017)
2.Characteristics
Table Ye
a1the
(2021) 3.and
et
adult
volunteers
2. for
both
[60]
ofCharacteristics
al.
/Portable mannequin
included
(2017)
Characteristics
includedTable
of
barchart
Weighted
[61] Ye the
Stereophotogrammetry
a1
volunteersvolunteers
and
et
[39,42–45,48–50,53–61]
Weighted adult
involved
Figure
the
(2021) 3.[60]
of
al.
/Portable
volunteers
riskfor2.
mannequin
the
studies.
included both
barchart
of
[61] the
included
a1
Weighted
bias
heads.
(2017)
included
Characteristics
ofTable
Stereophotogrammetry IC:
the
Ye
a1 volunteers
and
/Portable
[39,42–45,48–50,53–61]
barchart adultfor
Figure
risk
(2021) and 3.[60]
studies.
a2 mannequin
impression
included
et
[39,42–45,48–50,53–61]
volunteers
for
of
included
a1 both
heads.
studies.
Characteristics
al.
Stereophotogrammetry
/Portable ;2. IC:
ofof
Smartphone
[39,42–45,48–50,53–61]
the (the
involved
risk
[61]barchart
application
bias
the
Weighted the
a1
/Portable
risk
and a2 ;heads.
volunteers
and
studies.IC:
impression
cast, included
a1both
included
Table
Stereophotogrammetry
/Portable
Stereophotogrammetry
range
bias of
for
mannequin
heads.
impression
IPX:
Smartphone IC:
/Portable
[39,42–45,48–50,53–61]
(the
involved
adult
of
and
application
Figure
bias range
the
concerns
barchart
the
volunteers
ofapplication
structured 3.
a2
Stereophotogrammetry
(the and
involved
and
both
impression
iPhone
cast,
the
studies.
volunteers
risk number included
a1cast,
Characteristics
included
Table
Stereophotogrammetry
;2. Smartphone X; heads.
structured
light volunteers
Table
IC:
iPhone
MH:
Stereophotogrammetry
range
adult
(the
of
Weighted/Portable
involved
the
application
of
for
concerns
of Figure
bias
the
the a2
rangea2
;of
adult
Figure ;and
IPX:
cast,
3. both
3.iPhone
impression
studies.
Characteristics
Mannequin
scanner IPX:
Smartphone
the
volunteers
number
concerns
selected
and
risk
2. of
X;
Smartphone
structured
light
2.
barchart
of Weighted
concerns a2
volunteers
number the
MH:
[39,42–45,48–50,53–6
participants
of
adult
(the the
application
the
heads.
b1volunteers
and
Characteristics
iPhone
IC:
Stereophotogramm
;of
range
Weighted
bias of
studies a2X;
included
cast,
Smartphone
volunteers
the
selected
for
and of
manneq
MH:
;impressio
scanner Mannequi
/photogramm
Smartpho
light X;
IPX:
of
structure
structured
[39,42–45,48
number of
participant ofa2 ;head
[39,42
selected
barchart
the Ma
b1MH
the
sca
/ph
Sm
parti
the
of
barch
concern
the
applica
studie
risk
selof
stu
iP
Zhao et al.Zhao et al.
(2011)
Kim
Ross et[47]
al.Ross etZhao
al. (2016)et al.head;ZhaoZhao et
al.al.
ethead; Zhao
(2011)
Ross mannequin
et
et al.
Zhao
[47] etSPG:al.
Zhao mannequin
heads etSPG:[40,52],
al. included
Zhao heads
mannequin and
et al. [40,52],
mannequin
human heads and
mannequin
cadaver
[40,52],
heads human heads
and
[40,52], cadaver
mannequin
heads human [47].and
[40,52], heads
However,
cadaver
human
heads and [47].
[40,52],
cadav
heads
huma
2 Ho
stu
Zhao
(2016)et[59]
al.
Zhao (2018)
et al.
Elbashti
[48]
Zhao
(2016) et
et[59]
al. Zhao
(2016) et[59]
al.according
was Zhao
[59]
3.3.(2016)The NA:
5–80). included
et[59] not
al.The
according
Descriptionto
was
3.3. the head;
NA:
(2018)
ZhaoTablet et
according
QUADAS-2
(2016)
was
5–80).
other
3.3.
Description
of
Elbashti to The 6al.
studies’
NA:
applicable;
head;
not
[48]
al.
Zhao
according
the
[59] (2016)
5–80).
was
The
Description
the3.3.Included
et
included
included
The
connected
Tablet
to
studies characteristics
not
the
QUADAS-2
The
5–80).
other
Description
of the
included
applicable;
NA: et[59]al.
Zhao
connected
to
QUADAS-2
guidelines.
toStudies studies’
applicable;
head;
notTablet
according
of
both
the
6(2016)
other
The
was
werethe
Included
3.3. etThe
QUADAS-2
studies
included
studies’
portableal.
guidelines.
[59]
Included
Description
of the
volunteers
to
6conducted
other
5–80). characteristics
are
astereophotogrammetry.
applicable;
NA: head;
Zhao
connected SPG:
not
aTablet
Tablet
6guidelines.
studies
were
Studies according
the
(2016)
Thestudies
Included
3.3.
both
included
shownet
portable
QUADAS-2 to
guidelines.
were
Studies [59]
conducted
otheron
Description
of
and the al.
dual-structured studies’
characteristics
The included
connected
connected
wasto
were
Studies
3.3.
volunteers
mannequin
included
both
in
aIncluded
the
conducted
impressionThe
studies are
Table
stereophotogrammetry.
stereophotogrammetry.
applicable;
SPG:
NA: 6dual-structured
Tablet portable
Tablet The
characteristics
QUADAS-2 shown
included
stereophotogrammetry.
not Stationary
applicable;
SPG:
head;
light
to
connected
to
guidelines.
5–80).
Description according
conducted
on was
ofincluded
3.3.
andthe
volunteersStudies
included
both
included
studies’
aare
a3.
connected
Tablet on
impression
The
were The
Among
casts
Included
shown
was
5–80).
Description
mannequin
heads. of
in
Stationary
dual-structured
portable
scanner
portable Table
included
guidelines.
to
impression
other of
the
volunteers
andboth
studies’
characteristics
studies’ in
the
according
conducted
on the
The
was
impression
the
casts
3.3.
Studies
Included
included
are
Table
Stationary
stereophotogrammetry.
NA: Stereophotogrammetry
light
connected
to SPG:
head;
tonot The
head; 3.
23
according
QUADAS-2
other The
5–80).
face ofshown
Among
applicable;
casts
Description
of
heads.
mannequin the
volunteers
and
to characteristics
characteristics
studies’
studies,
included
3.
NA:
stereophotogrammetry
Stationary
6dual-structured
b3
a5–80).
Tablet scanner
a;studies
Laser NA:
Stereophotogrammetr
portable
dual-structured
portable tolight
connected
aon scanner
b3
other not
scanner
;the
[41,46,51],
the
6mannequin
The
were Among
are
to
of
impression
Studiescasts
Included
both studies
face in
the the
QUADAS-2
the
other
of
heads.
cTable
characteris
17
shown
Stereophotogra
notapplicable;
SPG:QUADA
guidelines.
studies
conducted
of 23
studies’
Stereopho
Stationary
dual-structured
Laser
dual-structured
portable light
light
6to ;Studies
scanner
b3 the
Structur
dual-stru
aface
scanneare
;applica
portab
[41,46,51
the
volunteers
and were
the
manneqInclud
stu
3.
in
ster
6Laser
scann 23
lig
[41
cast
stu
fac
head casaw
A
c
(2017) [60] (2017)
Gibelli[60]
(2018) et
[50]
[56] al.
(2018) (2017) [60] Table
a1 (2017)
(2017)
/Portable3. [60]
a1 [60] (2017)
Gibelli
Characteristics
Table
/Portable Table
3.
a1 [60]
et
/Portable
Stereophotogrammetry
a1 3. (2017)
al. [60]
Characteristics
Characteristics
Table of the3. (2017)
Characteristics
includedTable
of the
Stereophotogrammetry
Stereophotogrammetry
/Portable [60]
of
Stereophotogrammetry
a1 /Portable
a2 ;risk the
studies.
included
3.
Smartphone (2017)
a2included
Characteristics
ofTable
a1 IC:
thestudies.[60]
impression
included
3.;2.
Stereophotogrammetry
;scanner
/Portablea2
Smartphone studies.
Characteristics
TableIC:
of
Smartphone
structured the
studies.IC:
impression
3.cast,
Stereophotogrammetry
a2 ;ofSmartphone
structured
light a1 impression
Characteristics
Table IPX:
IC:
of
structured
/Portable
scanner
a2 3.
impression
iPhone
cast,
the
studies.
;application
Smartphone
light
structured cast,
Characteristics
included
IPX:
Tableof
X;
light IC:
iPhone
the
MH: IPX:
cast,
3.
/Portable
Stereophotogrammetry
b1scanner
a1 a1
/photogrammetry
/Portable
a2 ;of light iPhone
impression
studies.
included
Characteristics
Mannequin
scanner IPX:
Smartphone
structured
b1 of
X; the
MH:
/photogrammetry iPhone
IC:
b1
Stereophotogramm
scanner X;
studies.
included
cast,MH:
impressio
Mannequi
Stereophotogr
/photogramm
light X;
IPX:
structured
b2 ;the
b1 of
/photog
a2 Ma
IC:
MH
scann
;of the
Smstu
iP i
b
(2021) [61] Gibelli
(2021) [61] et
(2021) al. [61][56] (2021) [61]Figure involved
(2021)2.[61]
mannequin adult
Figure
Weighted
mannequin (2018)
involved
volunteers
Figure
2.
scanner
Gibelli
(2021)
headsmannequin [56]
involved
et
[61]2.
Weighted
barchart
Figure d adult
Weighted
;scanner
al.
(2021)
mannequin
[40,52],
heads Computedfor
[61]
heads
and volunteers
barchart
Weighted
d involved
2.[39,42–45,48–50,53–61]
adult
Figure
the volunteers
barchart
;scanner
(2021)
[40,52], for
Computed
mannequin
heads
human 2.barchart
of
tomography
[61]
[40,52],
and the
d [39,42–45,48–50,53–61]
adult
involved
for
Weighted
bias
Figure
risk
;cadaver
Computed the
andfor
tomography
scanner
(2021)
[40,52], and
human heads
e volunteers
[39,42–45,48–50,53–61]
of
;and
RMSE:
[61]
human
d(the
involved
risk
d ;adult
barchart
application
bias
the
Figure
Weighted
cadaver
heads involved
range
risk
and bias
tomography
Computed
;scanner
Computed
[40,52],
mannequin
human 2.
of
for;[39,42–45,48–50,53–61]
volunteers
involved
adult
root-mean-square
e ; [47].
RMSE:
scanner
cadaver
d
and
heads of
and
application
Figure
Weighted
bias
concerns
barchart
the
d adult
the (the
;and
risk
;tomography
Computed
scanner
mannequin
cadaver root-mean-square
Computed
e
tomography
However,
human
heads RMSE:
heads
d
mannequin
[47]. range
volunteers
2.numbervolunteers
; [39,42–45,48–50,53–61]
adult
(theinvolved
Weighted
application
of
for
concerns
barchart
of Figure
heads bias
the
error
Computed 2range
the
root-mean-square
tomography
scanner
[47].
[40,52],
e
mannequin
However,
cadaver
heads tomography
e ;studiesthe
; volunteers
selected
and
risk
2.
for
RMSE:
RMSE: [39,42–45,48–50,5
barchart
of number
2(the
[39,42–45,48–50,53–6
participants
concerns of
adult
Weighted
concernsthe
application
the
the bias
(surface-to-surface).
d
However,
heads
[47].and error
;tomography
Computed
[40,52],
heads
e of
studies range
volunteers
[39,42–45,48
number
the
selected
risk ;for
and
;root-mean-squa
However, 2of
of
(surface-to-su
e
root-mean-square
RMSE: RMSE:
[40,52],
human
heads
[48,49]
studies and
[47]. of
error ;(the
selected
barchart
tomogra
studies parti
concern
the
applica
bias
studie
root-me
e
[40,52],
cadav
huma of
risk
2RMSE
and
[48,49
How th
sel
root
(sur
stura
an
(2022) [58]
Ye et al.
(2016) [59]
Zhao et al.
(2017) [60]
Children 2022, 9, 1390 6 of 17
Zhao et al.
(2021) [61]

Figure 2. Weighted barchart for the risk of bias and application concerns of the selected studies
according to the QUADAS-2 guidelines.

3.3. Description of the Included Studies


The included studies' characteristics are shown in Table 3. Among the 23 studies, 17 involved
adult volunteers [39,42–45,48–50,53–61] (the range of the number of participants was 5–80).
The other 6 studies were conducted on impression casts of the face [41,46,51], mannequin
heads [40,52], and human cadaver heads [47]. However, 2 studies [48,49] included both
volunteers and mannequin heads.

Table 3. Characteristics of the included studies. IC: impression cast; IPX: iPhone X; MH: mannequin
head; NA: not applicable; SPG: stereophotogrammetry. Stationary stereophotogrammetry a1 /
portable stereophotogrammetry a2; smartphone structured light scanner b1 / photogrammetry b2;
tablet connected to a portable dual-structured light scanner b3; laser scanner c; structured light
scanner d; computed tomography e; RMSE: root-mean-square error (surface-to-surface).

Akan et al. (2021) [39]. Aim: To analyze the accuracy of a depth sensor smartphone camera with conventional camera. Tested method: iPhone X with depth-sensing camera b1. Sample: 26 participants (16 F, 10 M). Reference method: 3dMD face stationary stereophotogrammetry a1. N° landmarks: 9. Measurements (metrics): 7 linear distances, 3 angles (mean deviations, RMSE). Conclusions: Depth sensor smartphone camera may be employed for 3D facial scanning. However, complex structures visualization needs improvements.

Amornvit et al. (2019) [40]. Aim: To compare the facial scannings acquired through 4 systems with the measurements obtained with direct anthropometry. Tested method: EinScan Pro (EP) c, EinScan Pro 2X Plus (EP+) c, iPhone X (IPX) b1, Planmeca ProMax 3D Mid (PM) e. Sample: 1 MH. Reference method: Direct anthropometry. N° landmarks: NA. Measurements (metrics): ∆x, ∆y, and ∆z (mean linear deviations). Conclusions: Mean ∆x and ∆y measurements ranged respectively 10–50 mm and 50–120 mm. The records in the z-axis were not possible to acquire. EP+ was the most accurate, EP the intermediate, whereas IPX and PM showed less accuracy. Furthermore, EP, IPX, and PM were less accurate in measuring depths of 2 mm.

Aswehlee et al. (2018) [41]. Aim: To evaluate the reliability of different digitalization systems for capturing facial defects. Tested method: Vivid 910 c, Danae 100SP a1, 3dMD face a1, Scanify a2. Sample: 1 IC. Reference method: Toshiba TOSCANER-30000µCM e. N° landmarks: 3D point clouds. Measurements (metrics): Surface deviation (RMSE). Conclusions: All systems were sufficiently reliable. Laser-beam, light-sectioning technology showed the best accuracy.

Ayaz et al. (2020) [42]. Aim: To evaluate the reliability of 2D photography and 3D scanning systems for facial digitalization. Tested method: Nikon D800 2D camera, Planmeca ProFace c, Vectra H1 a2. Sample: 50 participants (25 M, 25 F). Reference method: Direct anthropometry. N° landmarks: 22. Measurements (metrics): 7 linear distances, 17 angles (linear and angular mean deviation). Conclusions: Stereophotogrammetry showed the best accuracy for facial digitalization compared to 2D photography and laser scanner.

Chong et al. (2021) [43]. Aim: To evaluate the accuracy of an iPad/iPhone application enabling patients to capture their 3D facial images compared to direct anthropometry. Tested method: iPad/iPhone camera through the application "MeinXuan" b2, Vectra H1 a2. Sample: 20 participants. Reference method: Direct anthropometry. N° landmarks: 18. Measurements (metrics): 21 linear distances (RMSE, mean absolute difference and relative error measurement). Conclusions: The measurements obtained with the iPhone/iPad application were significantly correlated to the direct anthropometric measurements. The iPhone/iPad and Vectra H1 mean RMSE were 0.08 and 0.67 mm respectively. The subnasale area needs improvement with the proposed method.

D'Ettorre et al. (2022) [44]. Aim: To compare the accuracy of 3D face scanning from stereophotogrammetry and two different applications in smartphones supporting the TrueDepth system. Tested method: iPhone Xs equipped with Bellus3D Face or Capture applications b1. Sample: 40 participants (27 F, 13 M). Reference method: 3dMD face a1. N° landmarks: 13. Measurements (metrics): ∆x, ∆y, and ∆z (mean linear deviations). Conclusions: Stationary stereophotogrammetry is a reliable and fast system. The smartphone applications showed also a good accuracy, are portable and less expensive. However, they need operator accuracy and patient compliance for the incremented time of acquisition.

Dindaroglu et al. (2016) [45]. Aim: To compare the stereophotogrammetry accuracy with the direct manual and digital photogrammetry methods. Tested method: Canon EOS 40D 2D camera, 3dMD face a1. Sample: 80 participants (38 M, 42 F). Reference method: Direct anthropometry. N° landmarks: 15. Measurements (metrics): 10 linear distances, 6 angles (mean linear and angular deviations). Conclusions: 3D stereophotogrammetry performed accurate 3D facial images reliable in orthodontics.

Elbashti et al. (2019) [46]. Aim: To evaluate the accuracy of a smartphone application as a low-cost approach for digitizing a facial defect for 3D modeling. Tested method: Vivid 910 c, iPhone 6 b2 (24 photographs). Sample: 1 IC. Reference method: Toshiba TOSCANER-30000µCM e. N° landmarks: 3D point clouds. Measurements (metrics): Surface deviation (RMSE). Conclusions: In reference to standard computed tomography imaging, data acquisition with a smartphone for 3D modeling is not as accurate as commercially available laser scanning.

Fourie et al. (2011) [47]. Aim: To estimate the reliability of three different 3D scanning systems, namely laser surface scanning (Minolta Vivid 900), CBCT, and 3D SPG (Di3D system), and to compare them to physical linear measurements. Tested method: Di3D a1, Vivid 900 c, KaVo 3D exam e. Sample: 7 cadaver heads. Reference method: Direct anthropometry. N° landmarks: 15. Measurements (metrics): 21 linear distances (mean linear deviations). Conclusions: Measurements recorded by the three 3D systems appeared to be both sufficiently accurate and reliable.

Gibelli et al. (2018) [48]. Aim: To compare the accuracy of SPG with a low-cost laser scanner. Tested method: Sense c. Sample: 50 participants (10 M, 40 F), 1 MH. Reference method: Vectra M3 a1. N° landmarks: 17; 3D point clouds. Measurements (metrics): 14 linear distances, 12 angles/volumes/surfaces (RMSE). Conclusions: The low-cost laser scanner appeared sufficiently reliable for immovable objects but it is not suitable for 3D human face scanning.

Gibelli et al. (2018) [49]. Aim: To validate the VECTRA H1 portable SPG device to verify its applicability to 3D facial analysis. Tested method: Vectra H1 a2. Sample: 50 participants (16 M, 34 F), 1 MH. Reference method: Vectra M3 a1. N° landmarks: 12; 3D point clouds. Measurements (metrics): 15 linear distances, 12 angles/volumes/surfaces (RMSE). Conclusions: The portable Vectra H1 face scanning device proved reliable for assessing linear measurements, angles, and surface areas; conversely, the influence of involuntary facial movements on volumes and RMSE was higher compared to the stationary Vectra M3.

Kim et al. (2018) [50]. Aim: To evaluate the accuracy and reliability of a small-format, handheld 3D camera. Tested method: Vectra H1 a2, Vectra M3 a1. Sample: 5 participants (NA). Reference method: Direct anthropometry. N° landmarks: 29. Measurements (metrics): 25 linear distances (mean linear deviations). Conclusions: The 3D handheld camera showed high accuracy and reliability in comparison with traditional models.

Liu et al. (2019) [51]. Aim: To evaluate reliability of a portable low-cost scanner (Scanify) for imaging facial casts compared to a previously validated portable digital stereophotogrammetry device (Vectra H1). Tested method: Scanify a2. Sample: 2 IC (male). Reference method: Vectra H1 a2. N° landmarks: 13. Measurements (metrics): 11 linear distances (mean linear deviations). Conclusions: Scanify showed to be a low-cost solution for facial digitalization but needs future improvements.

Liu et al. (2021) [52]. Aim: To evaluate the reliability of Bellus3D Face Camera Pro compared to 3dMDface stereophotogrammetry for face scanning. Tested method: Face Camera Pro Bellus b3 (connected to a tablet), 3dMD face a1. Sample: 1 MH. Reference method: Direct anthropometry. N° landmarks: 20. Measurements (metrics): 8 linear distances, 5 angles (absolute mean deviations). Conclusions: Both systems showed good accuracy and precision for clinical purposes.

Modabber et al. (2016) [53]. Aim: To evaluate the reliability of two devices for face digitalization. Tested method: Artec EVA d. Sample: 41 participants (16 M, 25 F). Reference method: FaceScan 3D a1. N° landmarks: 2 lego bricks. Measurements (metrics): Surface deviation (RMSE). Conclusions: Artec EVA showed greater accuracy compared to FaceScan3D.

Nightingale et al. (2020) [54]. Aim: To test the use of a low-cost scanner by inexperienced operators. Tested method: iPhone 8S b2 (40, 60 or 80 photographs). Sample: 20 participants (11 M, 9 F). Reference method: Artec Spider d. N° landmarks: 3D point clouds. Measurements (metrics): Surface deviation (RMSE). Conclusions: Smartphone photogrammetry showed to be useful for novice operators for its low cost and easy learning curve.

Piedra-Cascon et al. (2020) [55]. Aim: To evaluate the reliability of Bellus3D Face Camera Pro for face digitalization. Tested method: Face Camera Pro Bellus b3 (connected to a tablet). Sample: 10 participants (2 M, 8 F). Reference method: Direct anthropometry. N° landmarks: 6. Measurements (metrics): 5 linear distances (precision and accuracy RMSE). Conclusions: The new device showed to be clinically reliable for face scanning procedures.

Ross et al. (2018) [56]. Aim: To estimate the performance of smartphones for external ear digitalization. Tested method: iPhone 7 b2 (30, 60 or 90 photographs), Intel RealSense Camera SR300 d. Sample: 16 participants (8 M, 8 F). Reference method: Artec Spider d. N° landmarks: 6. Measurements (metrics): Surface deviation (RMSE). Conclusions: The three protocols with 30-60-90 photographs showed a good and similar accuracy. The Intel RealSense showed the worst performance and resolution.

Rudy et al. (2020) [57]. Aim: To compare the accuracy of portable stereophotogrammetry with iPhone X for facial digitalization. Tested method: iPhone X b1. Sample: 16 participants (NA). Reference method: Vectra H1 a2. N° landmarks: 10; 3D point cloud. Measurements (metrics): Landmark-to-landmark surface distances; surface deviation (RMSE). Conclusions: The iPhone X accuracy stood around 0.5 mm when compared to Vectra H1.

Wang et al. (2022) [58]. Aim: To compare the accuracy of two portable systems and a stationary scanner for facial digitalization. Tested method: iPad Pro 2020 b2 (Bellus 3D Dental Pro app), EinScan Pro 2X Plus (EP+) c, ARC-7 Face Scanning System d. Sample: 20 participants (5 M, 15 F). Reference method: Direct anthropometry. N° landmarks: 12. Measurements (metrics): 14 linear distances (absolute error). Conclusions: All the systems showed to be quite reliable for face digitalization. However, the iPad Pro 2020 system was the least accurate.

Ye et al. (2016) [59]. Aim: To assess the accuracy, reliability and reproducibility of facial digitalization through stereophotogrammetry compared to a structured light scanner. Tested method: 3D CaMega d, 3dMDface a1. Sample: 10 participants (5 M, 5 F). Reference method: Direct anthropometry. N° landmarks: 16. Measurements (metrics): 21 linear distances (mean linear deviations). Conclusions: Both scanners showed to be quite accurate, reliable and reproducible to create 3D facial models.

Zhao et al. (2017) [60]. Aim: To evaluate the accuracy of different scanning systems for face digitalization. Tested method: 3dMDface a1, FaceScan3D a1. Sample: 10 participants (NA). Reference method: Faro Edge LLP c. N° landmarks: 3D point clouds. Measurements (metrics): Surface deviation (RMSE). Conclusions: All scanning systems showed to be sufficiently reliable for clinic purposes.

Zhao et al. (2021) [61]. Aim: To evaluate the accuracy of stereophotogrammetry and a CBCT system to obtain facial models with deformities and partitions. Tested method: 3dMDface a1, NewTom 5G Inc e. Sample: 60 participants (28 M, 32 F). Reference method: 3D coordinate-measuring machines (CMM). N° landmarks: 17. Measurements (metrics): 19 linear distances (mean linear deviations). Conclusions: 3D stereophotogrammetry was more accurate than CBCT in the acquisition of facial deformities.

In terms of tested scanning methods, all included studies analyzed stereophotogram-
metry, smartphone scanning, or both. More specifically, thirteen articles tested the accuracy
of stereophotogrammetry [41–43,45,47,49–52,56,59–61], six articles a smartphone structured
light scanner [39,40,43,44,57,58], three articles smartphone photogrammetry [46,54,56], two
articles a portable dual-structured light scanner connected to a tablet [52,55], seven ar-
ticles one or more laser scanners [40–42,46–48,58], three articles a structured light scan-
ner [53,58,59], three articles computed tomography [40,47,61] and two articles 2D camera
photogrammetry [42,45].
Furthermore, the most used reference method (which generated the reference model)
was direct anthropometry [40,42,43,45,47,50,55,58,59], followed by stereophotogramme-
try [39,44,48,49,51–53,57], structured light scanner [54,56] and computed tomography [41,46].
In one paper, a reference model generated by a laser scanner was used [60] and another
study used a coordinate-measuring machine as the gold standard [61].
The facial landmarks considered in the articles ranged from 6 to 29. The linear dis-
tances and angles analyzed ranged, respectively, from 5 to 25 and from 3 to 17. In one
study [53], two lego bricks attached to participants’ faces were employed as a reference
object to measure the scanner accuracy. Most of the included articles measured the global
surface-to-surface deviation between the reference and test images obtained from the
scannings [40,41,46,48,49,53,54,56,57,60].
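These surface-to-surface comparisons share a common computation: after the test scan is registered to the reference scan, each test point is matched to its closest point on the reference surface, and the root-mean-square of those closest-point distances is reported, RMSE = sqrt((1/N) Σ d_i²). The sketch below illustrates this metric on synthetic point clouds; it assumes pre-registered scans and uses SciPy's k-d tree, and is an illustration of the metric rather than code from any included study.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_rmse(test_points: np.ndarray, reference_points: np.ndarray) -> float:
    """Surface-to-surface deviation: RMSE of closest-point distances (mm)."""
    tree = cKDTree(reference_points)          # index the reference scan
    distances, _ = tree.query(test_points)    # closest reference point per test point
    return float(np.sqrt(np.mean(distances ** 2)))

# Synthetic example: a test scan deviating from the reference by ~0.5 mm noise.
rng = np.random.default_rng(0)
reference = rng.uniform(0.0, 100.0, size=(5000, 3))       # reference cloud (mm)
test = reference + rng.normal(0.0, 0.5, size=(5000, 3))   # noisy test cloud (mm)
print(f"Surface RMSE: {surface_rmse(test, reference):.3f} mm")
```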

4. Discussion
The aim of this review was to investigate the accuracy of smartphone scanners to
generate digital face models compared to stereophotogrammetry. Compared to a previous
systematic review [62] on face scanning, our study specifically analyzed the effectiveness of
modern systems integrated into smartphones, comparing them with stereophotogrammetry.
The selection of the articles was carried out on a greater number of databases (including
gray literature). Furthermore, this systematic review included a greater number of selected
articles (23 vs. 11) and the literature search is updated to August 2022 (vs. May 2020). Fi-
nally, the previous review included four articles concerning the use of smartphones/tablets,
while ours included eleven studies.
Stereophotogrammetry is a passive scanning system that consists of capturing face
surfaces through multiple photographs, all taken at the same time from different perspectives.
The 3D digital face model consisting of a dense cloud of points is obtained through software
that uses the information of the camera’s position (with set angles and distances) and
camera-to-subject distance [29,63]. This technology is able to reproduce very realistic and
colored face models, but the accuracy is highly dependent on the camera’s resolution and
the light conditions [41,64]. This system requires the application of standardized flash units
to eliminate interference from ambient light and the careful assessment of camera settings
such as brightness level, aperture, and shutter speed [29]. All this requires expensive
investments and specific skills to be acquired.
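The reconstruction step can be illustrated with the simplest calibrated case, a rectified two-camera pair, where the depth of a matched point follows from the known focal length and inter-camera baseline as Z = f·B/d, with d the disparity between the two images. The snippet below is a minimal sketch under those assumptions (image coordinates measured from the principal point; focal length and baseline values are illustrative); commercial stereophotogrammetry systems generalize this to many calibrated cameras with bundle adjustment.

```python
import numpy as np

def triangulate_rectified(x_left, x_right, y, focal_px, baseline_mm):
    """3D point (mm) from one correspondence in a rectified two-camera pair.

    Image coordinates are given in pixels from the principal point.
    """
    disparity = x_left - x_right              # horizontal shift between views (px)
    if disparity <= 0:
        raise ValueError("point must lie in front of both cameras")
    z = focal_px * baseline_mm / disparity    # depth from similar triangles: Z = f*B/d
    return np.array([x_left * z / focal_px,   # back-project to the left camera frame
                     y * z / focal_px,
                     z])

# Assumed values: 2000 px focal length, 120 mm baseline, 20 px disparity.
print(triangulate_rectified(250.0, 230.0, 40.0, 2000.0, 120.0))
# -> [ 1500.   240. 12000.]  i.e. a point 1.2 m in front of the rig
```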
Thirteen included articles reported the results of stationary stereophotogrammetry scanners
to reproduce 3D facial models [39,41,44,45,47–50,52,53,59–61]. Among these, ten studies con-
cerned the accuracy of stationary stereophotogrammetry systems [39,44,45,47,48,52,53,59–61].
All the articles concluded that this system is highly reliable for reproducing digital face mod-
els with an accuracy comparable or superior to other systems such as direct anthropometry,
CMM, laser, and structured light scanners. However, not all stationary stereophotogram-
metry systems showed the same accuracy and the reference method used for comparison
differed across the included articles.
The other three included articles compared the accuracy of stationary and portable
stereophotogrammetry scanners [41,49,50]. Two studies reported that portable stereopho-
togrammetry scanners (Scanify and Vectra H1) showed high accuracy, but lower compared
to stationary devices (Danae 100SP, 3dMDface, and Vectra M3); moreover, portable scan-
ners were more affected by involuntary facial movements [41,49]. However, in a
similar study conducted by Kim et al. [50], the portable Vectra H1 and stationary Vectra M3
yielded similar values. The contrasting results (despite the use of the same devices, in
particular for the study of Gibelli et al. [49]), may be explained by the different number
of participants, anthropometric landmarks, methodology, and statistical analysis adopted.
Therefore, portable stereophotogrammetry scanners showed to be reliable for clinical use,
however, further investigations must be conducted to better understand the limitations
and advantages of handheld stereophotogrammetry scanners.
Two other articles selectively evaluated the accuracy of portable stereophotogramme-
try scanners [42,51]. Ayaz et al. [42] reported that Vectra H1 showed higher accuracy
and reliability compared to 2D photography and a laser scanner for the morphological
evaluation of soft tissues. Liu et al. compared a very low-cost stereophotogrammetry
portable scanner (Scanify) to Vectra H1 and the authors concluded that the first device
needs future improvements for clinical application [51].
In the included studies we found that stationary stereophotogrammetry devices
showed a mean accuracy that ranged from 0.087 to 0.860 mm, portable stereophotogram-
metry scanners from 0.150 to 0.849 mm, and smartphones/tablets from 0.460 to 1.400 mm
(2D photogrammetry reported the highest values) (Table 4). A digital face scanner can
be considered:
• highly reliable, if mean accuracy is <1.0 mm,
• reliable, if mean accuracy is 1.0–1.5 mm,
• moderately reliable, if mean accuracy is 1.5–2.0 mm,
• unreliable, if mean accuracy is >2.0 mm [65].
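Expressed programmatically, this classification is a simple threshold mapping from a scanner's mean accuracy to a reliability band; the cut-offs come from the list above, while the function name is illustrative:

```python
def scanner_reliability(mean_accuracy_mm: float) -> str:
    """Map a scanner's mean accuracy (mm) to the reliability bands of [65]."""
    if mean_accuracy_mm < 1.0:
        return "highly reliable"
    if mean_accuracy_mm <= 1.5:
        return "reliable"
    if mean_accuracy_mm <= 2.0:
        return "moderately reliable"
    return "unreliable"

# The ranges found in this review fall within the first two bands:
print(scanner_reliability(0.087))  # best stationary stereophotogrammetry result
print(scanner_reliability(1.400))  # worst smartphone result -> "reliable"
```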


In clinical practice, facial models with deviations < 1.5 mm can be considered ac-
ceptable [59,60,67]. Therefore, accurate digital face models are reproduced from both
stereophotogrammetry and smartphone scanning technology; however, smartphones seem
to be less accurate as reported in two studies, in particular in measuring depth [40,46].
Therefore, a more inaccurate reproduction of the anatomically complex surfaces of the
face than the flat ones must be expected, and it is not known whether this could imply an
alteration in terms of clinical applications.

Table 4. Results of the scanners’ mean accuracy (expressed in millimeters) in the selected studies.
NA: not available. * SD not reported; 1 Stationary stereophotogrammetry; 2 Portable stereopho-
togrammetry; a smartphone 3D scanner; b 2D photogrammetry; c tablet connected to a portable
dual-structured light scanner; P: photographs.

Study: scanner (mean ± SD, mm), grouped as stereophotogrammetry, smartphone, structured light scanner, laser scanner, and 2D camera.

Akan et al. (2021) [39]. Smartphone: iPhoneX a 0.753 ± 0.113.
Amornvit et al. (2019) [40]. Smartphone: iPhoneX a NA. Laser scanner: EinScan Pro NA, EinScan Pro 2X Plus NA.
Aswehlee et al. (2018) [41]. Stereophotogrammetry: Danae 1 0.087 ± 0.002, 3dMDface 1 0.123 ± 0.007, Scanify 2 0.849 ± 0.046. Laser scanner: Vivid 910 0.068 ± 0.001.
Ayaz et al. (2020) [42]. Stereophotogrammetry: Vectra H1 2 0.280 *. Laser scanner: Planmeca ProFace 0.500 *. 2D camera: Nikon D800 0.780 *.
Chong et al. (2021) [43]. Stereophotogrammetry: Vectra H1 2 NA. Smartphone: iPad/iPhone b NA.
D'Ettorre et al. (2022) [44]. Smartphone: iPhoneXs b (Bellus3D Face App) NA, iPhoneXs b (Capture App) NA.
Dindaroglu et al. (2016) [45]. Stereophotogrammetry: 3dMDface 1 NA. 2D camera: Canon EOS 40D NA.
Elbashti et al. (2019) [46]. Smartphone: iPhone6 b 24P (0.605 ± 0.124). Laser scanner: Vivid 910 0.068 ± 0.001.
Fourie et al. (2011) [47]. Stereophotogrammetry: Di3D 1 0.860 ± 0.570. Laser scanner: Vivid 900 0.890 ± 0.560.
Gibelli et al. (2018) [48]. Stereophotogrammetry: Vectra M3 1 0.650 ± 0.120. Laser scanner: Sense 0.420 ± 0.170.
Gibelli et al. (2018) [49]. Stereophotogrammetry: Vectra H1 2 0.440 ± 0.360, Vectra M3 1 0.220 ± 0.140.
Kim et al. (2018) [50]. Stereophotogrammetry: Vectra H1 2 NA, Vectra M3 1 NA.
Liu et al. (2019) [51]. Stereophotogrammetry: Vectra H1 2 0.15 ± 0.015, Scanify 2 0.740 ± 0.089.
Liu et al. (2021) [52]. Stereophotogrammetry: 3dMDface 1 0.36 ± 0.20. Structured light scanner: Face Camera Pro Bellus c 0.61 ± 0.47.
Modabber et al. (2016) [53]. Stereophotogrammetry: FaceScan 3D 1 0.523 ± 0.144. Structured light scanner: Artec EVA 0.228 ± 0.051.
Nightingale et al. (2020) [54]. Smartphone: iPhone8S b 40P (0.800 ± 0.200), 60P (0.900 ± 0.400), 80P (0.800 ± 0.300). Structured light scanner: Artec Spider NA.
Piedra-Cascon et al. (2020) [55]. Structured light scanner: Face Camera Pro Bellus c 0.910 ± 0.320.
Ross et al. (2018) [56]. Smartphone: iPhone7 b 30P (1.200 ± 0.300), 60P (1.200 ± 0.200), 90P (1.400 ± 0.600). Structured light scanner: Artec Spider NA, Intel RealSense Camera SR300 1.800 ± 0.300.
Rudy et al. (2020) [57]. Stereophotogrammetry: Vectra H1 2 NA. Smartphone: iPhoneX a 0.460 ± 0.010.
Wang et al. (2022) [58]. Smartphone: iPad Pro 2020 b 1.17 ± 0.80. Structured light scanner: ARC-7 Face Scanning System 0.76 ± 0.61. Laser scanner: EinScan Pro 2X Plus 0.69 ± 0.65.
Ye et al. (2016) [59]. Stereophotogrammetry: 3dMDface 1 0.620 ± 0.390. Structured light scanner: 3D CaMega 0.580 ± 0.370.
Zhao et al. (2017) [60]. Stereophotogrammetry: 3dMDface 1 0.580 ± 0.110, FaceScan3D 1 0.570 ± 0.070. Laser scanner: Faro LLP NA.
Zhao et al. (2021) [61]. Stereophotogrammetry: 3dMD face 1 NA.

At first, the smartphone scanning method was based on a 2D photogrammetry ap-


proach in which the 3D model was produced from the matching of several photographs
from different perspectives through a smartphone app [43,46,54,56]. However, differently
from smartphone cameras, stereophotogrammetry usually makes use of digital single-lens
reflex cameras with higher ISO settings, better noise reduction software, and higher pixel
densities [68]. Five included studies evaluated smartphone/tablet 2D photogrammetry
using different iPhone/iPad models [43,46,54,56,58]. Elbashti et al. (2019) [46] reported a
good mean accuracy (0.605 ± 0.124 mm) for digitizing an impression cast of a facial defect,
using a 24P (photographs) photogrammetry protocol. However, a commercially available
laser scanner showed higher accuracy (0.068 ± 0.001 mm) in reference to standard com-
puted tomography imaging. In the study of Nightingale et al. (2020) [54] 20 participants’
faces were scanned by novice operators. The reported accuracies for 40P, 60P, and 80P
photogrammetry protocols were 0.800 ± 0.200 mm, 0.900 ± 0.400 mm, and 0.800 ± 0.300 mm,
all values that indicate a reliable accuracy. Ross et al. [56] reported a mean discrepancy
of 1.200 ± 0.300 mm, 1.200 ± 0.200 mm, and 1.400 ± 0.600 mm with 30P, 60P, and 90P
photogrammetry protocols. These higher values, but still <1.5 mm, could be due to the fact
that an anatomically complex structure in the participants (ear) was selectively scanned.
In any case, these results demonstrated that the ear can be scanned quite accurately using
iPhone photographs. Recently, Chong et al. (2021) [43] developed an application enabling
patients to capture their 3D facial images. Compared to a portable stereophotogrammetry
Vectra H1 device, the measurements obtained with the iPhone/iPad application were
also significantly correlated to the direct anthropometric measurements. However, the
authors observed that the subnasale area needs improvement with that method. Similarly,
Wang et al. (2022) [58] detected a sufficient reliability with the use of an app in the iPad Pro
2020 device, however, this system was less accurate compared to a portable laser scanner
and a stationary structured light device.
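Under the hood, these 2D photogrammetry apps follow the classic structure-from-motion recipe: detect features in overlapping photographs, match them across views, recover the relative camera pose, and triangulate a 3D point cloud. Below is a minimal two-view sketch of that pipeline using OpenCV; the file names and intrinsic parameters are illustrative assumptions rather than details from the included apps, which chain many views and densify the cloud afterwards.

```python
import cv2
import numpy as np

# Two overlapping photographs of the same face (file names are placeholders).
img1 = cv2.imread("face_view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("face_view2.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect and describe local features in both views.
orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 2. Match features between the two views.
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 3. Recover the relative camera pose (assumed pinhole intrinsics K).
K = np.array([[3000.0, 0.0, 2016.0],
              [0.0, 3000.0, 1512.0],
              [0.0, 0.0, 1.0]])
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 4. Triangulate matched points into a sparse 3D cloud.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T   # N x 3 points, up to an unknown global scale
```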
Recently, to improve scanning accuracy (specifically for anatomically more complex
surfaces), infrared structured-light depth-sensing cameras have been incorporated into
smartphone devices [69]. The working principle of 3D depth-sensing cameras is similar
to professional laser scanners and consists of the time-of-flight technique: the sensor
array detects the time interval for infrared light to travel to the object and return to the
sensor [70–72]. However, professional laser scanning systems have high-tech sensors more
sensitive to depths [46,73]. Four included articles in this review evaluated digital face model
accuracy through the scanner incorporated in the iPhone X/iPhone Xs [39,40,44,57]. Amornvit et al.
(2019) [40] analyzed the face scans obtained from a mannequin head and reported that the
iPhone X obtained the fastest scan (0.57 min, in contrast to 6.7 and 9.4 min of the professional
laser scanners) but it showed less accuracy, in particular in recording depths > 2 mm for
which this technology should not be recommended according to the authors. This may be
due to the failure of passing the light into the depth during scanning. In contrast, in a study
of 16 participants, Rudy et al. (2020) [57] compared the iPhone X scanner to a portable
stereophotogrammetry scanner (Vectra H1) used as a reference method. They reported a
reliable mean accuracy (0.460 ± 0.010 mm) with the iPhone X scanner. The different results
of these two studies may be explained by the different reference methods (respectively,
direct anthropometry and stereophotogrammetry), the different methodology, scanner
involved, and scanned objects (respectively, one mannequin head and sixteen participants).
More recently, Akan et al. (2021) [39], in line with the study of Amornvit et al., found that
the iPhone X device with a depth-sensing camera is quite accurate compared to stationary
stereophotogrammetry, but complex structures visualization needs improvements. The
most recent included publication on the accuracy of these devices [44] reported a good
reliability of the iPhone Xs with two different applications compared to the 3dMD system.
However, portable smartphones, although less expensive, need operator accuracy and
patient compliance for the incremented time of acquisition.
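The time-of-flight principle described above reduces to a simple relation: the measured depth is half the round-trip distance of the infrared pulse, d = c·Δt/2. A back-of-the-envelope sketch (values illustrative) shows why small depth differences demand picosecond-level timing, one reason smartphone sensors trail professional laser scanners on fine depth:

```python
C_MM_PER_S = 2.998e11  # speed of light in mm/s

def tof_depth_mm(round_trip_s: float) -> float:
    """Depth seen by a time-of-flight sensor: half the round-trip light path."""
    return C_MM_PER_S * round_trip_s / 2.0

# A 2 mm depth step (where several tested devices lost accuracy) changes the
# round-trip time by only ~13 picoseconds:
extra_round_trip_s = 2.0 * 2.0 / C_MM_PER_S
print(f"{extra_round_trip_s * 1e12:.1f} ps")       # ≈ 13.3 ps
print(f"{tof_depth_mm(extra_round_trip_s):.1f} mm")  # -> 2.0 mm, consistency check
```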
Finally, external infrared structured-light depth-sensing cameras can be plugged into
smartphones, tablets, or laptop computers [74–77]. Two included studies reported the
use of this approach [52,55]. Piedra-Cascon et al. (2020) [55] evaluated the accuracy of
a dual structured-light scanner connected to a tablet in reproducing 3D facial models
of 10 participants, using direct anthropometry as a reference method. They reported a
mean accuracy of 0.910 ± 0.320 mm which is considered acceptable for virtual treatment
planning. Similarly, in the recent article of Liu et al. (2021) [52], the Face Camera Pro Bellus
system connected to a tablet and stationary stereophotogrammetry both showed a good
accuracy and precision for clinical purposes compared to direct anthropometry. However,
the use of external structured-light scanners implies that the overall accuracy should be
interpreted as a result that includes the performance of the compatible mobile device,
therefore, the accuracy should be evaluated for each combination of external scanner and
mobile device [62].
One of the main limitations of portable face-scanning systems is motion artifacts
that are induced by involuntary facial movements and showed to be the main source of
error in the results of these scanners [49,77–79]. Therefore, diagnostic accuracy studies on
these devices must be conducted on human living subjects for a correct assessment. The
use of devices that capture the information with a single scan tends to be naturally more
effective for this reason and is especially suitable for children and patients with special
needs (Table 5).

Table 5. Comparison of stationary/portable stereophotogrammetry and smartphone scanning face systems.

Stationary stereophotogrammetry. Mean accuracy: 0.087–0.860 mm. Costs: 8.000–26.000$. Capture time *: 1.5–600 ms. Approach: different cameras placed at different angulations capture various 2D images simultaneously to create a 3D face model. Advantages: highly reliable system; useful in children and uncooperative patients. Limitations: expensive, requires space and ambient light control.

Portable stereophotogrammetry. Mean accuracy: 0.150–0.849 mm. Costs: 1.500–10.000$. Capture time *: 3.5–250 ms. Approach: a single camera captures one image at a time; requires several acquisitions from different angles to reconstruct the 3D facial model. Advantages: reliable and portable system. Limitations: expensive, ambient light control, motion artifacts, greater acquisition time, requires patient collaboration.

Smartphones. Mean accuracy: 0.460–1.400 mm. Costs: 350–1.200$. Capture time *: 2.5–350 ms. Approach: a single camera captures one image at a time; requires several acquisitions from different angles to reconstruct the 3D facial model. Advantages: comfortable, manageable, quite reliable, easy to use, inexpensive. Limitations: motion artifacts, less accurate on uneven and deep surfaces, greater acquisition time, requires patient collaboration.
* It refers to a single scan.

Currently, imaging methods and new technologies are moving forward very quickly.
New methods are emerging that propose the integration of target-based close-range pho-
togrammetry and machine-learning facial landmark detection through smartphone-based
approaches [36]. Other promising systems are based on the Simultaneous Localization and
Mapping (SLAM) technique and moving camera. This new technology allows the detection
of the object’s real-time position through the creation of 3D point clouds. The advantage of
this approach is the ability to obtain a high-resolution real-time 3D point cloud with a reduced
scanning capture time [80].
This review included only the most recent literature on the topic, regarding publica-
tions subsequent to 2010, because of rapid technological changes, but is limited to English
publications. We observed a great heterogeneity in the adopted methodology for diagnostic
accuracy studies. For example, in most included articles direct anthropometry was the ref-
erence method, thereby limiting the measurements practically only to Euclidean distances.
Some studies were conducted in vitro and therefore the accuracy of reported values may
be overestimated. For research that aims to study the clinical applicability of devices, living
persons must be used to include the possibility of motion artifacts, especially for portable
devices. Some studies employed novice operators without scanning experience, while oth-
ers did not specify the operators’ experience or used experienced operators. Furthermore,
a diffuse risk of bias in the selected studies was the exclusion of maxillofacial abnormalities
that may result in an overestimation of diagnostic accuracy.

5. Conclusions
All the analyzed devices showed sufficiently reliable accuracy for clinical application.
Stationary stereophotogrammetry scanners, followed by portable ones, showed higher
accuracy than smartphones, particularly in the reproduction of complex anatomies.
Different factors affected the accuracy of facial scanning. Stereophotogrammetry
showed itself to be highly reliable, however, the quality of the 3D images is dependent on
the camera’s resolution, pixel integrity, and the light conditions during photo acquisition.
A direct light may induce a glare effect that affects the quality of the acquisition. Therefore,
positioned flash units and camera settings must be standardized to perform a good image
exposure. On the other hand, smartphones seemed to be less accurate in the acquisition
of irregular face surfaces, but the use of depth-sensing cameras may improve 3D image
quality. Motion artifacts, induced by involuntary movements, may significantly affect facial
scanning accuracy. For this reason, the use of smartphones or other portable scanners may
be less suitable for children and uncooperative patients compared to devices with a single
scan acquisition. However, new technologies involving the integration of machine learning
or SLAM technique and moving camera may be promising to overcome these limitations
and perform higher quality 3D face scannings.
The studies included in the review showed that the use of smartphones and tablets for 3D facial scanning is currently practicable in the clinic. Their main advantages are low cost and portability. However, compared with professional devices, they demand greater attention from the clinician and greater compliance from the patient, who must remain motionless for the entire acquisition time while photos are taken from different perspectives (risk of motion artifacts). Devices with internal depth-sensing cameras, or with external infrared structured-light depth-sensing cameras plugged into smartphones/tablets, showed higher accuracy than the classic application-based 2D photogrammetry matching approach. In general, all devices were reported to be sufficiently reliable for clinical practice; however, portable and especially stationary stereophotogrammetry remain the gold standard for greater accuracy in detecting deep and irregular surfaces and for shorter acquisition times. Finally, the various limitations and biases of the included articles may have led to an overestimation of the real accuracy of these devices.
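
To illustrate why depth-sensing cameras help: unlike 2D photogrammetry matching, they measure a depth value per pixel that can be back-projected directly into 3D. A minimal, hedged sketch of pinhole back-projection follows (the intrinsics fx, fy, cx, cy are device-specific; the values here are placeholders):

```python
# Sketch: back-project a structured-light depth map (H x W, in mm) into
# an N x 3 point cloud using pinhole camera intrinsics (placeholder values).
import numpy as np

def depth_to_points(depth_mm, fx, fy, cx, cy):
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float64)
    x = (u - cx) * z / fx           # pixel column -> metric X
    y = (v - cy) * z / fy           # pixel row -> metric Y
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]       # discard invalid (zero-depth) pixels

depth = np.full((480, 640), 400.0)  # flat plane 400 mm from the sensor
cloud = depth_to_points(depth, fx=580.0, fy=580.0, cx=320.0, cy=240.0)
print(cloud.shape)                  # (307200, 3)
```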

Author Contributions: A.P. and G.I. selected and read the papers included in this systematic review
and drafted the manuscript. A.L.G. and V.Q. read the papers included in this systematic review,
analyzed and interpreted the data. V.R., C.C., S.S. and R.J.M. analyzed the data and gave scientific
support. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Launonen, A.M.; Vuollo, V.; Aarnivala, H.; Heikkinen, T.; Pirttiniemi, P.; Valkama, A.M.; Harila, V. Craniofacial asymmetry from one to three years of age: A prospective cohort study with 3D imaging. J. Clin. Med. 2020, 9, 70. [CrossRef] [PubMed]
2. Lo Giudice, A.; Ortensi, L.; Farronato, M.; Lucchese, A.; Lo Castro, E.; Isola, G. The step further smile virtual planning: Milled
versus prototyped mock-ups for the evaluation of the designed smile characteristics. BMC Oral Health 2020, 20, 165. [CrossRef]
[PubMed]
3. Giudice, A.L.; Spinuzza, P.; Rustico, L.; Messina, G.; Nucera, R. Short-term treatment effects produced by rapid maxillary
expansion evaluated with computed tomography: A systematic review with meta-analysis. Korean J. Orthod. 2020, 50, 314–323.
[CrossRef] [PubMed]
4. Elshewy, M. Assessment of 3D Facial Scan Integration in 3D Digital Workflow Using Radiographic Markers and Iterative Closest
Point Algorithm. Ph.D. Thesis, Marquette University, Milwaukee, WI, USA, 2020.
5. Berssenbrügge, P.; Berlin, N.F.; Kebeck, G.; Runte, C.; Jung, S.; Kleinheinz, J.; Dirksen, D. 2D and 3D analysis methods of facial
asymmetry in comparison. J. Cranio-Maxillofac. Surg. 2014, 42, e327–e334. [CrossRef]
6. Chu, Y.; Yang, J.; Ma, S.; Ai, D.; Li, W.; Song, H.; Li, L.; Chen, D.; Chen, L.; Wang, Y. Registration and fusion quantification of
augmented reality based nasal endoscopic surgery. Med. Image Anal. 2017, 42, 241–256. [CrossRef]
7. Douglas, T.S. Image processing for craniofacial landmark identification and measurement: A review of photogrammetry and
cephalometry. Comput. Med. Imaging Graph. 2004, 28, 401–409. [CrossRef]
8. Mai, H.-N.; Kim, J.; Choi, Y.-H.; Lee, D.-H. Accuracy of Portable Face-Scanning Devices for Obtaining Three-Dimensional Face
Models: A Systematic Review and Meta-Analysis. Int. J. Environ. Res. Public Health 2021, 18, 94. [CrossRef]
9. Plooij, J.M.; Maal, T.J.; Haers, P.; Borstlap, W.A.; Kuijpers-Jagtman, A.M.; Bergé, S.J. Digital three-dimensional image fusion
processes for planning and evaluating orthodontics and orthognathic surgery. A systematic review. Int. J. Oral Maxillofac. Surg.
2011, 40, 341–352. [CrossRef]
10. Joda, T.; Gallucci, G.O. The virtual patient in dental medicine. Clin. Oral Implant. Res. 2015, 26, 725–726. [CrossRef]
11. Cascón, W.P.; de Gopegui, J.R.; Revilla-León, M. Facially generated and additively manufactured baseplate and occlusion rim for
treatment planning a complete-arch rehabilitation: A dental technique. J. Prosthet. Dent. 2019, 121, 741–745. [CrossRef]
12. Kau, C.H.; Richmond, S.; Zhurov, A.; Ovsenik, M.; Tawfik, W.; Borbely, P.; English, J.D. Use of 3-dimensional surface acquisition
to study facial morphology in 5 populations. Am. J. Orthod. Dentofac. Orthop. 2010, 137, S56.e51–S56.e59. [CrossRef]
13. Metzler, P.; Sun, Y.; Zemann, W.; Bartella, A.; Lehner, M.; Obwegeser, J.A.; Kruse-Gujer, A.L.; Lübbers, H.-T. Validity of the 3D
VECTRA photogrammetric surface imaging system for cranio-maxillofacial anthropometric measurements. Oral Maxillofac. Surg.
2014, 18, 297–304. [CrossRef] [PubMed]
14. Nord, F.; Ferjencik, R.; Seifert, B.; Lanzer, M.; Gander, T.; Matthews, F.; Rücker, M.; Lübbers, H.-T. The 3dMD photogrammetric
photo system in cranio-maxillofacial surgery: Validation of interexaminer variations and perceptions. J. Cranio-Maxillofac. Surg.
2015, 43, 1798–1803. [CrossRef] [PubMed]
15. Verhoeven, T.; Xi, T.; Schreurs, R.; Bergé, S.; Maal, T. Quantification of facial asymmetry: A comparative study of landmark-based
and surface-based registrations. J. Cranio-Maxillofac. Surg. 2016, 44, 1131–1136. [CrossRef]
16. Weinberg, S.M.; Naidoo, S.; Govier, D.P.; Martin, R.A.; Kane, A.A.; Marazita, M.L. Anthropometric precision and accuracy of
digital three-dimensional photogrammetry: Comparing the Genex and 3dMD imaging systems with one another and with direct
anthropometry. J. Craniofacial Surg. 2006, 17, 477–483. [CrossRef]
17. Weinberg, S.M.; Kolar, J.C. Three-dimensional surface imaging: Limitations and considerations from the anthropometric
perspective. J. Craniofacial Surg. 2005, 16, 847–851. [CrossRef]
18. Karatas, O.H.; Toy, E. Three-dimensional imaging techniques: A literature review. Eur. J. Dent. 2014, 8, 132. [CrossRef]
19. Ma, L.; Xu, T.; Lin, J. Validation of a three-dimensional facial scanning system based on structured light techniques. Comput.
Methods Programs Biomed. 2009, 94, 290–298. [CrossRef]
20. Li, G.; Wei, J.; Wang, X.; Wu, G.; Ma, D.; Wang, B.; Liu, Y.; Feng, X. Three-dimensional facial anthropometry of unilateral cleft lip
infants with a structured light scanning system. J. Plast. Reconstr. Aesthetic Surg. 2013, 66, 1109–1116. [CrossRef]
21. Vlaar, S.T.; van der Zel, J.M. Accuracy of dental digitizers. Int. Dent. J. 2006, 56, 301–309. [CrossRef]
22. Beuer, F.; Schweiger, J.; Edelhoff, D. Digital dentistry: An overview of recent developments for CAD/CAM generated restorations.
Br. Dent. J. 2008, 204, 505–511. [CrossRef] [PubMed]
23. De Felice, M.E.; Nucci, L.; Fiori, A.; Flores-Mir, C.; Perillo, L.; Grassia, V. Accuracy of interproximal enamel reduction during clear
aligner treatment. Prog. Orthod. 2020, 21, 28. [CrossRef] [PubMed]
24. Gwilliam, J.R.; Cunningham, S.J.; Hutton, T. Reproducibility of soft tissue landmarks on three-dimensional facial scans. Eur. J.
Orthod. 2006, 28, 408–415. [CrossRef] [PubMed]
25. Winder, R.; Darvann, T.A.; McKnight, W.; Magee, J.; Ramsay-Baggs, P. Technical validation of the Di3D stereophotogrammetry
surface imaging system. Br. J. Oral Maxillofac. Surg. 2008, 46, 33–37. [CrossRef]
26. Toma, A.M.; Zhurov, A.; Playle, R.; Richmond, S. A three-dimensional look for facial differences between males and females in a British-Caucasian sample aged 15½ years old. Orthod. Craniofac. Res. 2008, 11, 180–185. [CrossRef]
27. Sawyer, A.; See, M.; Nduka, C. Quantitative analysis of normal smile with 3D stereophotogrammetry–an aid to facial reanimation.
J. Plast. Reconstr. Aesthetic Surg. 2010, 63, 65–72. [CrossRef]
28. Knoops, P.G.; Beaumont, C.A.; Borghi, A.; Rodriguez-Florez, N.; Breakey, R.W.; Rodgers, W.; Angullia, F.; Jeelani, N.O.; Schievano,
S.; Dunaway, D.J. Comparison of three-dimensional scanner systems for craniomaxillofacial imaging. J. Plast. Reconstr. Aesthetic
Surg. 2017, 70, 441–449. [CrossRef]
29. Tzou, C.-H.J.; Artner, N.M.; Pona, I.; Hold, A.; Placheta, E.; Kropatsch, W.G.; Frey, M. Comparison of three-dimensional
surface-imaging systems. J. Plast. Reconstr. Aesthetic Surg. 2014, 67, 489–497. [CrossRef]
30. Dekel, E.; Nucci, L.; Weill, T.; Flores-Mir, C.; Becker, A.; Perillo, L.; Chaushu, S. Impaction of maxillary canines and its effect on the
position of adjacent teeth and canine development: A cone-beam computed tomography study. Am. J. Orthod. Dentofac. Orthop.
2021, 159, e135–e147. [CrossRef]
31. Granata, S.; Giberti, L.; Vigolo, P.; Stellini, E.; Di Fiore, A. Incorporating a facial scanner into the digital workflow: A dental
technique. J. Prosthet. Dent. 2020, 123, 781–785. [CrossRef]
32. Hong, S.-J.; Noh, K. Setting the sagittal condylar inclination on a virtual articulator by using a facial and intraoral scan of the
protrusive interocclusal position: A dental technique. J. Prosthet. Dent. 2020, 125, 392–395. [CrossRef] [PubMed]
33. Russo, L.L.; Salamini, A.; Troiano, G.; Guida, L. Digital dentures: A protocol based on intraoral scans. J. Prosthet. Dent. 2020, 125,
597–602. [CrossRef] [PubMed]
34. Revilla-León, M.; Raney, L.; Piedra-Cascón, W.; Barrington, J.; Zandinejad, A.; Özcan, M. Digital workflow for an esthetic
rehabilitation using a facial and intraoral scanner and an additive manufactured silicone index: A dental technique. J. Prosthet.
Dent. 2020, 123, 564–570. [CrossRef] [PubMed]
35. Liu, C.-H.; Lin, I.-C.; Lu, J.-J.; Cai, D. A Smartphone App for Improving Clinical Photography in Emergency Departments:
Comparative Study. JMIR Mhealth Uhealth 2019, 7, e14531. [CrossRef]
36. Barbero-García, I.; Pierdicca, R.; Paolanti, M.; Felicetti, A.; Lerma, J.L. Combining machine learning and close-range photogram-
metry for infant’s head 3D measurement: A smartphone-based solution. Measurement 2021, 182, 109686. [CrossRef]
37. Liberati, A.; Altman, D.G.; Tetzlaff, J.; Mulrow, C.; Gøtzsche, P.C.; Ioannidis, J.P.; Clarke, M.; Devereaux, P.J.; Kleijnen, J.; Moher, D.
The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions:
Explanation and elaboration. J. Clin. Epidemiol. 2009, 62, e1–e34. [CrossRef]
38. Whiting, P.F.; Rutjes, A.W.; Westwood, M.E.; Mallett, S.; Deeks, J.J.; Reitsma, J.B.; Leeflang, M.M.; Sterne, J.A.; Bossuyt, P.M.
QUADAS-2: A revised tool for the quality assessment of diagnostic accuracy studies. Ann. Intern. Med. 2011, 155, 529–536.
[CrossRef]
39. Akan, B.; Akan, E.; Sahan, A.O.; Kalak, M. Evaluation of 3D Face-Scan images obtained by stereophotogrammetry and smartphone
camera. Int. Orthod. 2021, 19, 669–678. [CrossRef]
40. Amornvit, P.; Sanohkan, S. The Accuracy of Digital Face Scans Obtained from 3D Scanners: An In Vitro Study. Int. J. Environ. Res.
Public Health 2019, 16, 5061. [CrossRef]
41. Aswehlee, A.M.; Elbashti, M.E.; Hattori, M.; Sumita, Y.I.; Taniguchi, H. Feasibility and Accuracy of Noncontact Three-Dimensional
Digitizers for Geometric Facial Defects: An In Vitro Comparison. Int. J. Prosthodont. 2018, 31, 601–606. [CrossRef]
42. Ayaz, I.; Shaheen, E.; Aly, M.; Shujaat, S.; Gallo, G.; Coucke, W.; Politis, C.; Jacobs, R. Accuracy and reliability of 2-dimensional
photography versus 3-dimensional soft tissue imaging. Imaging Sci. Dent. 2020, 50, 15–22. [CrossRef] [PubMed]
43. Chong, Y.M.; Liu, X.Y.; Shi, M.; Huang, J.Z.; Yu, N.Z.; Long, X. Three-dimensional facial scanner in the hands of patients:
Validation of a novel application on iPad/iPhone for three-dimensional imaging. Ann. Transl. Med. 2021, 9, 1115. [CrossRef]
[PubMed]
44. D’Ettorre, G.; Farronato, M.; Candida, E.; Quinzi, V.; Grippaudo, C. A comparison between stereophotogrammetry and
smartphone structured light technology for three-dimensional face scanning. Angle Orthod. 2022, 92, 358–363. [CrossRef]
45. Dindaroglu, F.; Kutlu, P.; Duran, G.S.; Gorgulu, S.; Aslan, E. Accuracy and reliability of 3D stereophotogrammetry: A comparison
to direct anthropometry and 2D photogrammetry. Angle Orthod. 2016, 86, 487–494. [CrossRef] [PubMed]
46. Elbashti, M.E.; Sumita, Y.I.; Aswehlee, A.M.; Seelaus, R. Smartphone Application as a Low-Cost Alternative for Digitizing Facial
Defects: Is It Accurate Enough for Clinical Application? Int. J. Prosthodont. 2019, 32, 541–543. [CrossRef] [PubMed]
47. Fourie, Z.; Damstra, J.; Gerrits, P.O.; Ren, Y. Evaluation of anthropometric accuracy and reliability using different three-
dimensional scanning systems. Forensic Sci. Int. 2011, 207, 127–134. [CrossRef]
48. Gibelli, D.; Pucciarelli, V.; Caplova, Z.; Cappella, A.; Dolci, C.; Cattaneo, C.; Sforza, C. Validation of a low-cost laser scanner
device for the assessment of three-dimensional facial anatomy in living subjects. J. Cranio-Maxillofac. Surg. 2018, 46, 1493–1499.
[CrossRef]
49. Gibelli, D.; Pucciarelli, V.; Cappella, A.; Dolci, C.; Sforza, C. Are portable stereophotogrammetric devices reliable in facial imaging?
A validation study of VECTRA H1 device. J. Oral Maxillofac. Surg. 2018, 76, 1772–1784. [CrossRef]
50. Kim, A.J.; Gu, D.; Chandiramani, R.; Linjawi, I.; Deutsch, I.C.K.; Allareddy, V.; Masoud, M.I. Accuracy and reliability of digital
craniofacial measurements using a small-format, handheld 3D camera. Orthod. Craniofac. Res. 2018, 21, 132–139. [CrossRef]
51. Liu, C.; Artopoulos, A. Validation of a low-cost portable 3-dimensional face scanner. Imaging Sci. Dent. 2019, 49, 35–43. [CrossRef]
52. Liu, J.L.; Zhang, C.H.; Cai, R.L.; Yao, Y.; Zhao, Z.H.; Liao, W. Accuracy of 3-dimensional stereophotogrammetry: Comparison of
the 3dMD and Bellus3D facial scanning systems with one another and with direct anthropometry. Am. J. Orthod. Dentofac. Orthop.
2021, 160, 862–871. [CrossRef] [PubMed]
53. Modabber, A.; Peters, F.; Kniha, K.; Goloborodko, E.; Ghassemi, A.; Lethaus, B.; Holzle, F.; Mohlhenrich, S.C. Evaluation of the
accuracy of a mobile and a stationary system for three-dimensional facial scanning. J. Cranio-Maxillofac. Surg. 2016, 44, 1719–1724.
[CrossRef] [PubMed]
54. Nightingale, R.C.; Ross, M.T.; Allenby, M.C.; Woodruff, M.A.; Powell, S.K. A Method for Economical Smartphone-Based Clinical 3D Facial Scanning. J. Prosthodont. 2020, 29, 818–825. [CrossRef] [PubMed]
55. Piedra-Cascon, W.; Meyer, M.J.; Methani, M.M.; Revilla-Leon, M. Accuracy (trueness and precision) of a dual-structured light
facial scanner and interexaminer reliability. J. Prosthet. Dent. 2020, 124, 567–574. [CrossRef] [PubMed]
56. Ross, M.T.; Cruz, R.; Brooks-Richards, T.L.; Hafner, L.M.; Powell, S.K.; Woodruff, M.A. Comparison of three-dimensional surface
scanning techniques for capturing the external ear. Virtual Phys. Prototyp. 2018, 13, 255–265. [CrossRef]
57. Rudy, H.L.; Wake, N.; Yee, J.; Garfein, E.S.; Tepper, O.M. Three-Dimensional Facial Scanning at the Fingertips of Patients and
Surgeons: Accuracy and Precision Testing of iPhone X Three-Dimensional Scanner. Plast. Reconstr. Surg. 2020, 146, 1407–1417.
[CrossRef]
58. Wang, C.; Shi, Y.F.; Xiong, Q.; Xie, P.J.; Wu, J.H.; Liu, W.C. Trueness of One Stationary and Two Mobile Systems for Three-
Dimensional Facial Scanning. Int. J. Prosthodont. 2022, 35, 350–356. [CrossRef]
59. Ye, H.Q.; Lv, L.W.; Liu, Y.S.; Liu, Y.S.; Zhou, Y.S. Evaluation of the Accuracy, Reliability, and Reproducibility of Two Different 3D
Face-Scanning Systems. Int. J. Prosthodont. 2016, 29, 213–218. [CrossRef]
60. Zhao, Y.J.; Xiong, Y.X.; Wang, Y. Three-Dimensional Accuracy of Facial Scan for Facial Deformities in Clinics: A New Evaluation
Method for Facial Scanner Accuracy. PLoS ONE 2017, 12, e0169402. [CrossRef]
61. Zhao, Z.; Xie, L.; Cao, D.; Izadikhah, I.; Gao, P.; Zhao, Y.; Yan, B. Accuracy of three-dimensional photogrammetry and cone
beam computed tomography based on linear measurements in patients with facial deformities. Dentomaxillofac. Radiol. 2021,
50, 20200001. [CrossRef]
62. Mai, H.-N.; Lee, D.-H. Accuracy of Mobile Device–Compatible 3D Scanners for Facial Digitization: Systematic Review and
Meta-Analysis. J. Med. Internet Res. 2020, 22, e22228. [CrossRef] [PubMed]
63. Volonghi, P.; Baronio, G.; Signoroni, A. 3D scanning and geometry processing techniques for customised hand orthotics: An
experimental assessment. Virtual Phys. Prototyp. 2018, 13, 105–116. [CrossRef]
64. Camison, L.; Bykowski, M.; Lee, W.W.; Carlson, J.C.; Roosenboom, J.; Goldstein, J.A.; Losee, J.E.; Weinberg, S.M. Validation of the
Vectra H1 portable three-dimensional photogrammetry system for facial imaging. Int. J. Oral Maxillofac. Surg. 2018, 47, 403–410.
[CrossRef]
65. Aung, S.; Ngim, R.; Lee, S. Evaluation of the laser scanner as a surface measuring tool and its accuracy compared with direct
facial anthropometric measurements. Br. J. Plast. Surg. 1995, 48, 551–558. [CrossRef]
66. Liu, L.; Zhan, Q.; Zhou, J.; Kuang, Q.; Yan, X.; Zhang, X.; Shan, Y.; Li, X.; Lai, W.; Long, H. Effectiveness of an anterior mini-screw
in achieving incisor intrusion and palatal root torque for anterior retraction with clear aligners. Angle Orthod. 2021, 91, 794–803.
[CrossRef]
67. Secher, J.J.; Darvann, T.A.; Pinholt, E.M. Accuracy and reproducibility of the DAVID SLS-2 scanner in three-dimensional facial
imaging. J. Cranio-Maxillofac. Surg. 2017, 45, 1662–1670. [CrossRef]
68. Lane, C.; Harrell, W., Jr. Completing the 3-dimensional picture. Am. J. Orthod. Dentofac. Orthop. 2008, 133, 612–620. [CrossRef]
69. Yao, H.; Ge, C.; Xue, J.; Zheng, N. A high spatial resolution depth sensing method based on binocular structured light. Sensors
2017, 17, 805. [CrossRef]
70. Sarbolandi, H.; Lefloch, D.; Kolb, A. Kinect range sensing: Structured-light versus Time-of-Flight Kinect. Comput. Vis. Image
Underst. 2015, 139, 1–20. [CrossRef]
71. Jia, T.; Zhou, Z.; Gao, H. Depth measurement based on infrared coded structured light. J. Sens. 2014, 2014, 852621. [CrossRef]
72. Alfaro-Santafé, J.; Gómez-Bernal, A.; Lanuza-Cerzócimo, C.; Alfaro-Santafé, J.V.; Pérez-Morcillo, A.; Almenar-Arasanz, A.J.
Three-axis measurements with a novel system for 3D plantar foot scanning: iPhone X. Footwear Sci. 2020, 12, 123–131. [CrossRef]
73. Kovacs, L.; Zimmermann, A.; Brockmann, G.; Baurecht, H.; Schwenzer-Zimmerer, K.; Papadopulos, N.A.; Papadopoulos, M.A.;
Sader, R.; Biemer, E.; Zeilhofer, H.-F. Accuracy and precision of the three-dimensional assessment of the facial surface using a 3-D
laser scanner. IEEE Trans. Med. Imaging 2006, 25, 742–754. [CrossRef]
74. Koban, K.C.; Cotofana, S.; Frank, K.; Green, J.B.; Etzel, L.; Li, Z.; Giunta, R.E.; Schenck, T.L. Precision in 3-dimensional surface
imaging of the face: A handheld scanner comparison performed in a cadaveric model. Aesthetic Surg. J. 2019, 39, NP36–NP44.
[CrossRef] [PubMed]
75. Maués, C.; Casagrande, M.; Almeida, R.; Almeida, M.; Carvalho, F. Three-dimensional surface models of the facial soft tissues
acquired with a low-cost scanner. Int. J. Oral Maxillofac. Surg. 2018, 47, 1219–1225. [CrossRef]
76. Timen, C.; Speksnijder, C.M.; Van Der Heijden, F.; Beurskens, C.H.; Ingels, K.J.; Maal, T.J. Depth accuracy of the RealSense F200:
Low-cost 4D facial imaging. Sci. Rep. 2017, 7, 16263.
77. Koban, K.C.; Perko, P.; Etzel, L.; Li, Z.; Schenck, T.L.; Giunta, R.E. Validation of two handheld devices against a non-portable
three-dimensional surface scanner and assessment of potential use for intraoperative facial imaging. J. Plast. Reconstr. Aesthetic
Surg. 2020, 73, 141–148. [CrossRef]
78. Bakirman, T.; Gumusay, M.U.; Reis, H.C.; Selbesoglu, M.O.; Yosmaoglu, S.; Yaras, M.C.; Seker, D.Z.; Bayram, B. Comparison of
low cost 3D structured light scanners for face modeling. Appl. Opt. 2017, 56, 985–992. [CrossRef]
79. Perillo, L.; Padricelli, G.; Isola, G.; Femiano, F.; Chiodini, P.; Matarese, G. Class II malocclusion division 1: A new classification method by cephalometric analysis. Eur. J. Paediatr. Dent. 2012, 13, 192.
80. Kim, P.; Chen, J.; Cho, Y.K. SLAM-driven robotic mapping and registration of 3D point clouds. Autom. Constr. 2018, 89, 38–48.
[CrossRef]
