
TMTT-2023-07-0955 Review

The authors essentially propose a neural input space mapping (SM) technique to enhance compact
models of HEMTs by including trapping/de-trapping effects. In particular, they claim to use deep
neural networks instead of classical multilayer perceptrons, and they apply the resultant neural
input SM-based model to perform variability analysis of AlGaN/GaN HEMTs considering
trapping/de-trapping effects. As the fine model, they use TCAD simulations, which are able to
predict the variability caused by the trap-related parameters. As the coarse model, they use a
standard compact model with no trap-related parameters.
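For clarity, the generic structure of such a neural input SM surrogate can be sketched as follows. This is a minimal, self-contained illustration in Python, with toy stand-ins for the coarse (compact) and fine (TCAD) models and a per-sample parameter-extraction training scheme (other schemes train the mapping directly through the coarse model); all names and sizes are illustrative, not taken from the manuscript:

```python
# Minimal sketch of a neural input SM surrogate (illustrative names throughout;
# not the manuscript's code).
import numpy as np
from scipy.optimize import least_squares
from sklearn.neural_network import MLPRegressor

def coarse_model(x_c, bias):
    # Toy stand-in for the compact model (no trap-related parameters).
    return np.tanh(x_c @ np.ones(3)) + bias

def fine_model(x_f, bias):
    # Toy stand-in for the expensive TCAD simulation.
    return np.tanh((x_f + 0.1 * x_f**2) @ np.ones(3)) + bias

rng = np.random.default_rng(0)
X_f = rng.uniform(-1.0, 1.0, size=(50, 3))   # fine-model parameter samples
biases = np.linspace(-0.5, 0.5, 5)           # sweep variable (e.g., a bias point)

# Step 1: per-sample parameter extraction -- for each fine-model sample, find
# the coarse-model parameters whose response best matches the fine response.
X_c = np.array([
    least_squares(
        lambda xc, xf=xf: np.array(
            [coarse_model(xc, b) - fine_model(xf, b) for b in biases]),
        x0=xf,
    ).x
    for xf in X_f
])

# Step 2: train the neural mapping P on the (x_f, x_c) pairs (here a shallow
# MLP; a deep variant would only change hidden_layer_sizes).
P = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000).fit(X_f, X_c)

# The SM-based surrogate: coarse model evaluated at the mapped parameters.
x_test = rng.uniform(-1.0, 1.0, size=(1, 3))
print(coarse_model(P.predict(x_test)[0], bias=0.0))
```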

In my opinion, the authors are following a quite well-established space mapping approach to
modeling, namely a neural input space mapping technique, as originally described in reference
[43]. However, in the Introduction, the authors claim that they are proposing "a novel space
mapping technique", which in my opinion is false. I believe that the novelty of their work lies
not in the space mapping technique used, but in the application of a well-established SM
approach to modeling HEMTs for variability studies including trapping and de-trapping effects.

The authors should also consider the following reference, where a neural input space mapping
formulation is proposed to perform not only statistical analysis but also yield optimization,
and which is hence closely related to the proposed manuscript:

J. W. Bandler, J. E. Rayas-Sánchez, and Q. J. Zhang, "Yield-driven electromagnetic
optimization via space mapping-based neuromodels," Int. J. RF and Microwave CAE,
vol. 12, no. 1, pp. 79-89, Jan. 2002.

The authors might also want to consider citing the following reference, where a brief description
of the evolution of neural space mapping approaches to statistical analysis and yield optimization
is presented (including neural output space mapping):

J. W. Bandler and J. E. Rayas-Sánchez, "An early history of optimization technology for
automated design of microwave circuits," IEEE J. of Microwaves, vol. 3, no. 1,
pp. 319-337, Jan. 2023.

The authors claim in the Abstract and in the Introduction that the proposed approach is more
computationally efficient than a pure or "black-box" neural modeling approach, which of
course is to be expected, since they are exploiting the prior knowledge embedded in the compact
model. They also confirm that claim in Section IV, which, as I said, is to be expected. However, I
believe that the authors should better demonstrate that the proposed approach using deep ANNs is
computationally more efficient (or more accurate for the same amount of training data) than
using a conventional multilayer perceptron (or "shallow" ANN). Deep ANNs typically require
much larger training data sets than "shallow" ANNs, so one would expect that training a neural
input SM-based model with conventional ANNs would be more efficient than with deep ANNs.
The authors should clarify this apparent contradiction.
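To make this point concrete: the mapping in question is low-dimensional (a handful of device parameters in, the same handful out), so even a modestly deep network multiplies the number of free weights to be estimated from expensive TCAD samples. A rough, purely illustrative count (the layer sizes are my own assumption, not the manuscript's):

```python
# Rough weight counts for a shallow vs. a deep mapping ANN (illustrative sizes).
def mlp_params(layers):
    # weights + biases between consecutive layers
    return sum(n_in * n_out + n_out for n_in, n_out in zip(layers, layers[1:]))

shallow = [5, 10, 5]              # 3-layer perceptron: 5 inputs -> 10 hidden -> 5 outputs
deep    = [5, 50, 50, 50, 50, 5]  # deep network with four 50-neuron hidden layers

print(mlp_params(shallow))  # 115 free parameters
print(mlp_params(deep))     # 8205 free parameters
```

More free parameters generally demand more training samples, which is precisely the cost that matters when every training sample is a TCAD run.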

When the authors describe equation (8) in Section III.B, they refer to that mapping as "space
mapping" and cite references [24], [25] and [41]-[43]. The first two references are not directly
related to space mapping, so they should not be cited there. Additionally, to be more precise,
equation (8) corresponds to the classical input space mapping, but presented with a much more
complicated notation; i.e., equation (8) is equivalent to Xc = P(Xf), where Xc and Xf are the
coarse and fine model parameters, respectively, and P is the corresponding mapping (the standard
formulation is written out after the references below). Instead of citing references [24] and [25]
there, the authors could cite more comprehensive and recent references on input space mapping,
such as the following:

J. E. Rayas-Sánchez, "Power in simplicity with ASM: tracing the aggressive space
mapping algorithm over two decades of development and engineering applications,"
IEEE Microwave Magazine, vol. 17, no. 4, pp. 64-76, Apr. 2016.

S. Koziel, Q. S. Cheng, and J. W. Bandler, “Space mapping,” IEEE Microw. Mag., vol. 9,
no. 6, pp. 105-122, Dec. 2008.
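For reference, the classical input space mapping and the criterion used to train the neural mapping are usually written as follows (standard SM-based neuromodeling notation; here R_f and R_c denote the fine- and coarse-model responses, w the ANN weights, and N the number of fine-model training points):

```latex
% Classical input space mapping: the neural mapping P (weights w) is trained
% so that the coarse model at the mapped parameters reproduces the fine model.
\begin{align}
  \mathbf{x}_c &= \mathbf{P}(\mathbf{x}_f, \mathbf{w}), \\
  \mathbf{w}^\ast &= \arg\min_{\mathbf{w}} \sum_{i=1}^{N}
    \big\lVert \mathbf{R}_f\big(\mathbf{x}_f^{(i)}\big)
      - \mathbf{R}_c\big(\mathbf{P}(\mathbf{x}_f^{(i)}, \mathbf{w})\big) \big\rVert^2
\end{align}
```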

The formulation presented in Section III.B, and illustrated in Fig. 4 for three different domains,
is referred to by the authors as deep learning space mapping augmented compact modeling (or
DSMAC models). In my opinion, it corresponds exactly to the neural input space mapping
approach described in reference [43] for modeling, and in the first reference I proposed in this
review, where it is used for statistical analysis and yield optimization. The only difference I can
see is the use of deep ANNs to train the mapping, instead of conventional ANNs (either multilayer
feedforward ANNs for the frequency domain and DC, or recurrent ANNs for the transient
domain), which is questionable from the computational efficiency point of view, as I explained
above.

The numerical results presented in Section IV make sense to me. The conventional or black-box
neural modeling approach is of course more computationally expensive than the neural input
space mapping approach, and, once developed, both models are much faster than the original
TCAD models. However, there are two points about the numerical results that are unclear to me:
1) The first one is that the authors do not present a statistical analysis of the resultant neural
SM-based model versus the statistical analysis using the TCAD simulations. I would
expect such an analysis and comparison, since the authors consider that the model
parameters are subject to random or statistical fluctuations within some tolerance region,
assuming independent uniform probability distribution functions, as presented in equation
(1). The authors should clarify this aspect (a sketch of the comparison I have in mind
follows this list).
2) Based on the results presented in Table III, it seems to me that the authors are actually using
conventional 3-layer perceptrons, that is, conventional or "shallow" ANNs, to implement
the mapping between the coarse and fine models, since I cannot see any indication of
actual deep ANNs being used (no complex ANN topology, no ReLU activation functions,
no convolutional-type ANNs, etc.). This should also be clarified by the authors.
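Regarding point 1), the comparison I have in mind is, for instance, a simple Monte Carlo sweep over the tolerance region. A minimal sketch in Python (the nominal values, tolerances, and surrogate function are hypothetical placeholders, not the authors' model):

```python
# Sketch of the statistical comparison requested in point 1): sample the model
# parameters uniformly within their tolerance region (cf. equation (1)) and
# compare the response statistics of the surrogate against the TCAD reference.
import numpy as np

rng = np.random.default_rng(1)
nominal = np.array([1.0, 0.5, 2.0])  # nominal trap-related parameters (placeholders)
tol = 0.1 * nominal                  # +/-10 % tolerance region (my assumption)

# Independent uniform distributions within the tolerance region, as in equation (1).
samples = rng.uniform(nominal - tol, nominal + tol, size=(1000, nominal.size))

def surrogate(x):
    # Stand-in for the trained neural input SM-based model.
    return np.sin(x).sum()

responses = np.array([surrogate(x) for x in samples])
print("mean:", responses.mean(), "std:", responses.std())
# These statistics (or the empirical distributions) would then be compared
# against the same quantities obtained from a set of TCAD runs.
```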

Based on all the above, I believe that the authors should consider modifying the paper title to
make it more consistent with its contents.

I consider that the proposed manuscript could be significantly abbreviated. For instance, Section
II.C could be almost entirely eliminated, since it describes the classical or black-box neural
modeling formulation, for which many references are available. Section III.C could also be
significantly abbreviated.
