Modeling and Simulation in Science,
Engineering and Technology
Mihail Nedjalkov
Ivan Dimov
Siegfried Selberherr
Stochastic Approaches
to Electron Transport
in Micro- and
Nanostructures
Modeling and Simulation in Science,
Engineering and Technology
Series Editors
Nicola Bellomo, Department of Mathematical Sciences, Politecnico di Torino, Torino, Italy
Tayfun E. Tezduyar, Department of Mechanical Engineering, Rice University, Houston, TX, USA
Stochastic Approaches
to Electron Transport
in Micro- and Nanostructures
Mihail Nedjalkov
Institute for Microelectronics, Faculty of Electrical Engineering and Information Technology, Technische Universität Wien, Wien, Austria
Institute for Information and Communication Technologies, Bulgarian Academy of Sciences, Sofia, Bulgaria

Ivan Dimov
Institute for Information and Communication Technologies, Bulgarian Academy of Sciences, Sofia, Bulgaria

Siegfried Selberherr
Institute for Microelectronics, Faculty of Electrical Engineering and Information Technology, Technische Universität Wien, Wien, Austria
Mathematics Subject Classification: 45B05, 45D05, 37M05, 81-08, 60-08, 60J85, 60J35, 65Z05, 65C05, 65C35, 65C40
This book is published under the imprint Birkhäuser, www.birkhauser-science.com, by the registered
company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
afterward. The inverse perspective, deriving the algorithms from the transport model, developed
during the last 15 years of the twentieth century, gave rise to a universal approach
based on the formal application of the numerical Monte Carlo theory to the integral
form of the transport model. This is the Iteration Approach, which allows the
unification of the existing algorithms as particular cases as well as the derivation of novel
algorithms with refined properties. These are high precision algorithms based on
backward evolution in time and algorithms with improved statistics based on event
biasing, which stimulates the generation of rare events. As applied to the problem
of self-consistent coupling with the Poisson equation, the approach gave rise to
self-consistent event biasing and the concept of time-dependent particle weights.
An important feature of the Iteration Approach is that the original model can be
reformulated within the context of a Monte Carlo analysis in a way that allows for
a novel, improved model of the underlying physics.
The era of nanoelectronics brings novel quantum phenomena, involving
quantities with phases and amplitudes that give rise to resonance and interference
effects. This dramatically changes the computational picture.
• Quantum phenomena cannot be described as a cumulative sum of probabilities
and thus by phenomenological particle models.
• They demand an enormous increase of the computational requirements and need
efficient algorithms: (1) a small difference in the input settings can lead to very
different solutions, and (2) the so-called sign problem of quantum computations
causes the evaluated values to often be results of the cancellation of large numbers
with different signs (see the sketch after this list).
• The involved physical phenomena resulting from a complicated interplay
between quantum coherence and processes of decoherence due to interaction
with the environment are not well understood. As a consequence, the
mathematical models are still in the process of development, and so are the
corresponding algorithms. A synergistic relationship between model and method
is therefore of crucial importance.
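A minimal numerical illustration of the sign problem named in the second item above (a toy example, not one of the book's algorithms): a mean value of order one is estimated as the difference of nearly balanced contributions of magnitude one thousand, so the statistical error is set by the large magnitudes rather than by the small result.

```python
import random

random.seed(1)
N = 100_000
total, total_abs = 0.0, 0.0
for _ in range(N):
    # each sample is large in magnitude, with nearly balanced signs
    sample = 1000.0 if random.random() < 0.5005 else -1000.0
    total += sample
    total_abs += abs(sample)

# true mean: 1000 * (0.5005 - 0.4995) = 1.0,
# while the statistical error is of order 1000 / sqrt(N), roughly 3
print("estimated mean:", total / N)
print("mean magnitude:", total_abs / N)  # 1000.0, three orders larger
```

The relative error of the estimate is thus enormous compared to an estimator whose samples were of the order of the result itself.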
A promising strategy is the application of the Iteration Approach in conjunction
with the Wigner formulation of quantum mechanics. This is the formalism of
choice, since many concepts and notions of classical mechanics, like phase space
and distribution function, are retained. A seamless transition between quantum and
classical descriptions is provided, which allows an easy identification of coherence
effects. The considered system of electrons interacting with the lattice vibrations
(phonons) can be formally described by an infinite set of linked equations. A
hierarchy of assumptions and approximations is necessary to close the system and to
make it numerically accessible. Depending on these assumptions, different quantum
effects are retained in the derived hierarchy of mathematical models. The homo-
geneous Levinson and Barker-Ferry equations have been generalized to account
for the spatial electron evolution in quantum wires. A Monte Carlo Backward
algorithm has been derived and implemented to reveal a variety of quantum
effects, like the lack of energy conservation during electron-phonon interaction, the
intra-collisional field effect, and the ultra-fast spatial transfer. Numerical analysis
shows an exponential increase of the variance of the method with the evolution
time, associated with the non-Markovian character of the physical evolution. Further
approximations give rise to the Wigner-Boltzmann equation, where the spatial
evolution is entirely quantum mechanical, while the electron-phonon interaction is
classical. The application of the Iteration Approach to the adjoint equation leads to
a fundamental particle picture. Quantum particles retain their classical attributes
like position and velocity but are associated with novel attributes like weight
that carries the quantum information. The weight changes its magnitude and sign
during the evolution according to certain rules, and two particles meeting in the
phase space can merge into one by combining their weights. Two algorithms, the Wigner
Weighted and the Wigner Generation algorithms, are derived for the case
of stationary problems determined by the boundary conditions. The latter algorithm
refines the particle attributes by replacing the weight with a particle sign, so that
now particles are generated according to certain rules and can also annihilate each
other. These concepts have been generalized during the last decade for the transient
Wigner-Boltzmann equation, where also a discrete momentum is introduced. The
corresponding Signed-Particle algorithm allowed the computation of multi-
dimensional problems. It has been shown that the particle generation or annihilation
process in the discrete momentum space is an alternative to Newtonian acceleration.
Furthermore, the concepts of signed particles allow for an equivalent formulation of
the Wigner quantum mechanics and thus make it possible to interpret and understand
the involved quantum processes. Recently, the signed-particle approach has been adapted to
study entangled systems, many body effects in atomic systems, neural networks, and
problems involving the density functional theory. However, such application aspects
are beyond the scope of this book. Instead, we focus on important computational
aspects such as convergence of the Neumann series expansion of the Wigner
equation, the existence and uniqueness of the solution, numerical efficiency, and
scalability.
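As a minimal sketch of the merging and annihilation rules just described (the data layout is hypothetical, not the book's implementation): particles are sorted into phase-space cells, the signs are summed per cell, and only the net signed surplus survives.

```python
from collections import defaultdict

def annihilate(particles, dx=1.0, dp=1.0):
    """particles: list of (x, p, sign) tuples with sign = +1 or -1."""
    cells = defaultdict(int)
    for x, p, s in particles:
        # accumulate the net sign of all particles falling in one cell
        cells[(int(x // dx), int(p // dp))] += s
    survivors = []
    for (ix, ip), net in cells.items():
        # re-emit |net| particles of the surviving sign at the cell center
        for _ in range(abs(net)):
            survivors.append(((ix + 0.5) * dx, (ip + 0.5) * dp,
                              1 if net > 0 else -1))
    return survivors

particles = [(0.2, 0.3, +1), (0.7, 0.1, -1), (1.4, 0.2, +1)]
print(annihilate(particles))  # the first two cancel; one particle survives
```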
A key message of the book is that the enormous success of quantum particle
algorithms is based on computational experience accumulated in the field for more
than 50 years and rooted in the classical Monte Carlo algorithms.
This book is divided into three parts.
The introductory Part I is intended to establish the concepts from statistical
mechanics, solid-state physics, and quantum mechanics in phase space that are
necessary for the formulation of mathematical models. It discusses the role and
problems of modeling semiconductor devices, introduces the semiconductor proper-
ties, phase space and trajectories, and the Boltzmann and Wigner equations. Finally,
it presents the foundations of Monte Carlo methods for the evaluation of integrals,
solving integral equations, and reformulating the problem with the use of the adjoint
equation.
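As a minimal sketch of the first of these topics (a standard illustration, assuming nothing beyond the text): the integral of f over [0, 1] equals the expectation of f(U) for U uniform on [0, 1], estimated by a sample mean.

```python
import random

def mc_integral(f, n=100_000, seed=0):
    # sample mean of f over uniform points in [0, 1]
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

print(mc_integral(lambda x: x * x))  # ~1/3, with O(1/sqrt(n)) error
```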
Part II considers the development of Monte Carlo algorithms for classical
transport. It follows the historical layout, starting with the Monte Carlo
Single-Particle and Ensemble algorithms. Their generalization under
the Iteration Approach, its application to weak signal analysis, and the derivation of
a general self-consistent Monte Carlo algorithm with weights for the mixed problem
with initial and boundary conditions are discussed in detail. The development and
application of the classical Monte Carlo algorithms is formulated with the help of
seven asserts, ten theorems, and thirteen algorithms.
Part III is dedicated to quantum transport modeling. The derivation of a hierarchy
of models ranging from the generalized Wigner equation for the coupled electron-
phonon system to the classical Boltzmann equation is presented. The development
of respective algorithms on the basis of the Iteration Approach gives rise to
an interpretation of quantum mechanics in terms of particles that carry a sign.
Stationary and transient algorithms are presented, which are unified by the concepts
of the signed-particle approach. Particularly interesting is the Signed-Particle
algorithm suitable for transient transport simulation. This part is based on five
theorems and three algorithms, which shows that stochastic modeling
of quantum transport is still at an early stage of development as compared to the
classical counterpart. Auxiliary material and details concerning the three parts are
given in the Appendix.
The targeted readers are from the full range of professionals and students with
pertinent interest in the field: engineers, computer scientists, and mathematicians.
The needed concepts and notions of solid-state physics and probability theory
are carefully introduced with the aim of a self-contained presentation. To ensure
a didactic perspective, the complexity of the algorithms rises consecutively.
However, certain parts of specific interest to experts are prepared to provide a stand-
alone read.
Introduction to the Parts
The introductory Part I aims to present the engineering, physical, and mathematical
concepts and notions needed for the modeling of classical and quantum transport
of current in semiconductor devices. It contains four chapters: “Concepts of Device
Modeling” considers the role of modeling, the basic modules of device modeling,
and the hierarchy of transport models which describe at different levels of physical
complexity the electron evolution, or equivalently, the electron transport process.
This process is based on the fundamental characteristics of an electron, imposed by
the periodic crystal lattice where it exists. The lattice affects the electron evolution
also by different violations of the periodicity, such as impurity atoms and atom
vibrations (phonons). Basic features of crystal lattice electrons and their interaction
with the phonons are presented in the next chapter “The Semiconductor Model:
Fundamentals”. The third chapter “Transport Theories in Phase Space” is focused
on the classical and quantum transport theory, deriving the corresponding equations
of motion that determine the physical observables. In Chapter 4, we introduce the
basic notions of the numerical Monte Carlo theory that utilizes random variables
and processes for evaluation of sums, integrals, and integral equations. The needed
concepts of the probability theory are presented in the Appendix. We stick to a
top-level description in order to introduce nonexperts to the field and to underline the
mutual interdependence of the physical and mathematical aspects. Aiming at a
self-contained presentation, we sometimes need to refer to text placed further in
the sequel. In our opinion, this is a better alternative to numerous references to the
specialized literature in the involved fields.
Part II considers the Monte Carlo algorithms for classical carrier transport
described by the Boltzmann equation, beginning with the first approaches for mod-
eling of stationary or time-dependent problems in homogeneous semiconductors.
The homogeneous phase space is represented by the components of the momentum
variable, which makes the transport process physically transparent.
The first algorithms, the Monte Carlo Single-Particle and the Ensemble
algorithms, are derived by phenomenological considerations and thus are perceived
as emulation of the natural processes of drift and scattering determining the carrier
distribution in the momentum space. Later, these algorithms are generalized for the
inhomogeneous task, where the phase space is extended by the spatial variable.
In parallel, certain works evolve the initially intuitive link between mathematical
and physical aspects of these stochastic algorithms by proving that they provide
solutions of the corresponding Boltzmann equations.
The next generation algorithms are already devised with the help of alternative
formulations of the transport equation. These are algorithms for statistical enhancement
based on trajectory splitting, on a backward (back in time) evolution, and on
a trajectory integral. It appears that these algorithms have a common foundation:
They are generalized by the Iteration Approach, where a formal application of
the numerical Monte Carlo theory to integral forms of the Boltzmann equation
allows any of these particular algorithms to be devised. This approach gives rise to
a novel class of algorithms based on event biasing and can be applied to novel
transport problems, such as the small signal analysis discussed in Chap. 7. The
computation of important operational device characteristics imposes the problem
of stationary inhomogeneous transport with boundary conditions. The Monte Carlo
Single-Particle algorithm used for this problem, initially devised by phe-
nomenological considerations and by an assumption of ergodicity of the system, is
now obtained by the Iteration Approach, and it is shown that the ergodicity follows
from the stationary physical conditions: namely, the average over
an ensemble of nonequilibrium, but macroscopically stationary carriers
can be replaced by a time average over a single carrier trajectory. From a physical
point of view, the place of the boundaries is well defined by the device geometry.
From a numerical point of view, an iteration algorithm (but not its performance)
should be independent of the choice of place and shape of the boundaries. An
analysis is presented, showing that the simulation in a given domain provides such
boundary conditions in an arbitrary subdomain and that if the latter are used in a
novel simulation in the subdomain, the obtained results will coincide with those
obtained from the primary simulation.
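The following toy sketch illustrates the core of the Iteration Approach on a deliberately simple integral equation f(x) = g(x) + \int_0^1 K(x, y) f(y) dy (the kernel, data, and parameters are ours, chosen so that the exact solution is known, not a transport kernel): the Neumann series is sampled by random trajectories whose transitions carry the weight K / (p_c q), with p_c the continuation probability and q the density used to select the next point.

```python
import random

def g(x):
    return 1.0

def K(x, y):
    return 0.5  # contractive toy kernel; exact solution is f = 1/(1-0.5) = 2

def estimate_f(x, n_walks=200_000, p_cont=0.5, seed=3):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        score, weight, cur = g(x), 1.0, x
        while rng.random() < p_cont:        # Russian-roulette termination
            nxt = rng.random()              # y ~ uniform(0, 1), density q = 1
            weight *= K(cur, nxt) / p_cont  # importance weight of the transition
            score += weight * g(nxt)        # contribution of the next series term
            cur = nxt
        total += score
    return total / n_walks

print(estimate_f(0.3))  # ~2.0, the sum of the Neumann series
```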
The most general transport problem, determined by initial and boundary con-
ditions, is considered at the end of Part II. The corresponding random variable is
analyzed and the variance evaluated. Iteration algorithms for statistical enhancement
based on event biasing are derived. Finally, the self-consistent coupling scheme of
these algorithms with the Poisson equation is derived.
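A minimal sketch of the event biasing idea (the probabilities are toy values of ours): a rare event with natural probability p_nat is generated with an enhanced probability p_bias, and the introduced statistical bias is compensated by multiplying the particle weight by p_nat / p_bias.

```python
import random

def estimate_rare(n=100_000, p_nat=1e-4, p_bias=0.1, seed=7):
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        if rng.random() < p_bias:   # biased generation of the rare event
            acc += p_nat / p_bias   # the weight restores the correct mean
    return acc / n

print(estimate_rare())  # ~1e-4, but with far more contributing samples
```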
Part III is devoted to the development of stochastic algorithms for quantum
transport, which is facilitated by the experience accumulated from the classical
counterpart. The same formal rules for application of the Monte Carlo theory hold
for the quantum case; in particular, an integral form of the transport equation is
needed. Furthermore, a maximal part of the kernel components must be included
in the transition probabilities needed for the construction of the trajectories. The
peculiarity now is that there is no probabilistic interpretation of the kernel components,
as is inherent to classical transport models. Quantum kernels are in principle oscillatory.
The way to treat them is to decompose the kernel into a linear combination of
positive functions and then to associate the corresponding signs to the random
variable. This is facilitated by the analogy between the classical and the quantum
(Wigner) theories and the fact that most of the concepts and notions remain valid
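A minimal numerical sketch of this decomposition (the oscillatory kernel is a toy function of ours): points are sampled from a positive density, the kernel is split as K = K+ - K- with both parts nonnegative, and the sign is attached to the sampled value of the random variable.

```python
import math
import random

def signed_estimate(n=200_000, seed=5):
    rng = random.Random(seed)
    a, b = 0.0, 2.0 * math.pi
    total = 0.0
    for _ in range(n):
        x = rng.uniform(a, b)               # positive sampling density q = 1/(b-a)
        k = math.exp(-x) * math.cos(x)      # oscillatory kernel value at x
        kp, km = max(k, 0.0), max(-k, 0.0)  # decomposition K = K+ - K-
        total += (kp - km) * (b - a)        # signed contribution divided by q
    return total / n

# exact value of the toy integral: (1 - exp(-2*pi)) / 2, about 0.499
print(signed_estimate())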
Appendix A 191
A.1 Correspondence Relations 191
A.2 Physical Averages and the Wigner Function 192
A.3 Concepts of Probability Theory 193
A.4 Generating Random Variables 200
A.5 Classical Limit of the Phonon Interaction 201
A.6 Phonon Modes 202
A.7 Forward Semi-Discrete Evolution 204
References 207
Part I
Aspects of Electron Transport Modeling
Chapter 1
Concepts of Device Modeling
silicon. The MOSFET (Metal Oxide Semiconductor Field Effect Transistor) has
been established as a basic circuit element and the field effect as fundamental
principle of operation of electronic structures [7]. Since the very beginning, circuit
manufacturers have been able to pack more and more transistors onto a single
silicon chip. The following facts illustrate the revolutionary onset of the process of
integration: (1) a chip developed in 1962 has 8 transistors; (2) the chips developed
in each consecutive year from 1963 to 1965 have 16, 32, and 64 transistors, respectively [8].
MOS memories appeared after 1968, the first microprocessor in 1971, and 10 years
later there are already computers based on large scale integration (LSI) circuits.
Gordon Moore [9], one of the founders of Intel, observed in accordance with (1)
and (2) that the number of transistors was doubling every year and anticipated in
1965 that this rate of growth would continue for at least a decade. Ten years later, looking
at the next decade, he revised his prediction to a doubling every two years. This
tendency continued for more than 40 years, so that today the most complex silicon
chips have 10 billion transistors, a billion-fold increase of the transistor count
for the period.
The dimensions of the transistors shrink accordingly. Lithography,
that is to say the accuracy of patterning, advances at steps called technology
nodes. The latter refer to the size of the transistors in a chip or to half of the typical
distance between two identical circuit elements. Technology nodes of 100 nm
and below became accessible to the industry around 2005. The Intel Pentium
D processor, one of the early workhorses for desktop computers, is based on a
90 nm technology, while during the last decade lithographic processes at 45, 22,
and 11 nm became current [10]. The widely accepted definition of nanotechnology
involves critical dimensions below 100 nm, so that we entered the nanoera of
semiconductor technology during the first decade of the twenty-first century.
New architectures are needed as the dimensions of transistors are scaled down
to keep the same conventional functions and compensate for new phenomena
appearing as the thickness of the gate decreases. Accordingly, the structure of
the IC elements gets more complicated aiming to maintain the performance and
functionality. The planar complementary metal–oxide–semiconductor
(CMOS) device was suggested in 1963 as an ultimate solution for ICs [11].
By the early 1970s, CMOS transistors gradually became the dominant device type
for many integrated electronic applications [8]. The third dimension of the device
design was added to the planar technology in 1979 with the three-dimensional (3D)
CMOS transistor pair, also called the CMOS hamburger [11].
Shrinking the sizes raises the influence of certain physical phenomena on the
operational characteristics of the transistors. Among them are the so-called 'short
channel' effects, which negatively impact the electrical control. Novel 3D structures,
FinFETs, also known as Tri-gate transistors, have been developed at the end of the
century to reduce short channel effects. The active region of current flow, called
channel, is a thin semiconductor (Si, or a high mobility material like Ge or a III-
V compound) “fin” surrounded by the gate electrode. This allows for both a better
electrostatic control and for a higher drive current. The FinFET architecture became
the primary transistor design at the 22 nm technology node. Here, however,
on simulation and modeling, which shows their importance for global physical
analysis.
In particular, their growing significance for the semiconductor industry can be
traced back in the roadmap published since 1992 [15], which traditionally
contains a chapter on modeling and simulation. Further details are given in the next
section.
We summarize the factors which drive the need for development and
refinement of modeling approaches for semiconductor electronics.
• The rising importance for economy and society gives rise to a rapid development.
Currently, MOSFET-based VLSI circuits contribute about 90% to the
market of semiconductor electronics. The general tendency of reducing the
price/functionality ratio has been attained mainly by reducing the dimensions
of the circuit elements.
• The implementation of a smaller technology node requires novel equipment
which is related to considerable investments. Typically, the manufacturing plant
cost doubles every 3 years or so [10]. Currently, the costs exceed 10 billion
dollars, which gives one dollar per transistor, a ratio characterizing the novel
technologies.
• The cycles of production become shorter with every new IC generation. The
transition from design to mass production (time-to-market) must be shortened
under the condition that the time for fabrication of a wafer composed of
integrated circuits increases with the increase of the complexity of the involved
processes imposed by the miniaturization. Thus the device specifications must be
close to the optimal ones in order to reduce the laboratory phase of preparation.
Here modeling is needed to provide an initial optimization.
• The cost of the involved resources increases with the complexity of the ICs,
which makes the application of standard trial-and-error experiments impossible
or at least very difficult.
Modeling in microelectronics is divided into topics which reflect the steps in
the organization of a given IC: (1) simulation of the technological processes for
formation and linking of the circuit elements; (2) simulation of the operation
of the individual devices; (3) simulation of the operation of the whole circuit
which typically comprises a huge number of devices. Process simulation (1) treats
different physical processes such as material deposition, oxidation, diffusion, ion
implantation, and epitaxy to provide information about the dimensions, material
composition, and other physical characteristics, including the deviation and variabil-
ity from the ideal (targeted) device parameters. Device simulation (2) analyzes the
electrical behavior of a given device determined by the desired operating conditions
and the transport of the current through it. This information is needed at level
(3) for the development of compact models which can describe the behavior of a
complete circuit within CAD packages such as [16] for IC design. Device modeling
plays a central role in this hierarchy by focusing on the physical processes and
phenomena, which in accordance with the material and structural characteristics
determine the current transport and thus the electrical behavior of a given device.
Device modeling relies on mathematical, physical, and engineering approaches to
describe this behavior, which determines the electrical properties of the final circuit.
Nowadays semiconductor companies reduce cost and time-to-market by using a
methodology called Design Technology Co-Optimization (DTCO), which considers
the whole line from the design of the individual device and the involved material,
process, and technology specifications to the functionality of the corresponding
circuits, systems, and even products. DTCO establishes the link between circuit
performance and the transistor characteristics by circuit to technology computer-
aided design simulations necessary to evaluate the device impact on specific
circuits and systems, which are the target of the technology optimization. In
particular modeling of new materials, new patterning techniques, new transistor
architectures, and the corresponding compact models are used to consistently link
device characteristics with circuit performance. In this way the variability sources
which characterize the production flow of advanced technology nodes and impact
the final product functionality are consistently taken into account [17].
On the device level variability originates from lithography, other process imper-
fections (global variability), and from short range effects such as discreteness of
charges and granularity of matter (local or statistical variability). Device simulations
which take into account variability are already above four-dimensional: The degrees
of freedom, introduced by the statistical distribution of certain device parameters,
add to the three-dimensional device design. The simulation of a single, nominal
device must be replaced by a set of simulations of a number of devices featured by
e.g. geometry variations (line edge roughness) and granularity (e.g. random dopant
distributions). This adds a considerable additional simulation burden and raises
the importance of refined simulation strategies. Often the choice of the simulation
approach is determined by a compromise between physical comprehension and
computational efficiency.
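As a minimal sketch of such a statistical set of simulations (the device model below is entirely hypothetical, a placeholder for a real device simulator): an ensemble of devices with randomized dopant counts replaces the single nominal device, and the spread of a figure of merit is collected over the ensemble.

```python
import random
import statistics

def simulate_device(n_dopants):
    # stand-in for a full device simulation: the figure of merit
    # (e.g. a threshold voltage in volts) shifts with the dopant count
    return 0.30 + 0.001 * (n_dopants - 100)

rng = random.Random(11)
ensemble = []
for _ in range(2000):
    # the dopant number fluctuates; normal approximation to Poisson(100)
    n_dopants = max(0, round(rng.gauss(100.0, 10.0)))
    ensemble.append(simulate_device(n_dopants))

print("mean:", statistics.mean(ensemble))
print("spread:", statistics.stdev(ensemble))  # statistical variability
```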
the carriers n and their flux J. The latter are input quantities for the electromagnetic
module, which, in accordance with the Maxwell equations, determines the electric
field E and the magnetic field H. The corresponding forces accelerate the carriers and thus
govern their dynamics. Output quantities are the current-voltage (IV) characteristics
of the device, the densities of the carriers and of the current, and their mean energy and
velocity.
For a wide range of physical conditions, especially valid for semiconductors, the
Maxwell equations can be considered in the electrostatic limit. In what follows we
assume that there is no applied magnetic field so that, in a scalar potential gauge, the
electromagnetic module is reduced to the Poisson equation for the electric potential
V . The equation is considered in most textbooks [2], so that we omit any details.
For the purposes of this book it is sufficient to consider the module a black box
with input parameters being the carrier concentration and the boundary conditions
given by the terminal potentials and/or their derivatives, and the spatial distribution
of the electric potential as an output quantity. We focus on the transport module and
in particular on the transport models, unified in the hierarchical structure shown in
Fig. 1.1.
The hierarchy of transport models reflects the evolution of the field so that their
presentation is in accordance with the evolution of microelectronics. These models
are relevant for particular physical conditions. In general the spatial and temporal
scales of carrier transport decrease in vertical direction. At the bottom are the
analytical models valid for large dimensions and low frequencies characterizing
the infancy of microelectronics.¹ For example, the Intel 4004 processor from 1971
comprises 2300 transistors based on a 10 µm silicon technology with a frequency
of 400 kHz. The increased physical complexity of the next generations of devices
becomes too high for an analytical description. Models
based on the drift-diffusion equation, which requires a numerical treatment, become relevant. The
increase of the operating frequency imposed a system of differential equations,
known as the hydrodynamic model. These models can be derived from phenomeno-
logical considerations, or from the leading transport model, the Boltzmann equation,
under the assumption of a local equilibrium of the carriers. Further scaled devices
operate at the sub-micrometer scale, where the physical conditions challenge the
assumption of locality. Nonequilibrium effects brought by ballistic and hot electrons
impose the search for ways of solving the Boltzmann equation itself. The equation
represents the most comprehensive model, describing carrier transport in terms
of the classical mechanics. In contrast to the previous models, which utilize
macroscopic parameters, the equation describes the carriers on a microscopic level
by involving the concept of a distribution function in a phase space. The current
carriers are point-like particles, accelerated over Newtonian trajectories by the
electromagnetic forces, a process called drift. The drift is interrupted by scattering
processes, local in space and time, which change the particle momentum and thus
the trajectory. The scattering, caused by lattice imperfections such as vibrations,
vacancies, and impurities, is described by functions which, despite being calculated by
means of quantum mechanical considerations, have a probabilistic meaning. As a
result the equation is very intuitive and physically transparent so that it can be
derived by using phenomenological considerations similarly to the hydrodynamic
and drift-diffusion counterparts.
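A minimal sketch of this drift-and-scatter picture in one dimension (illustrative parameter values, a constant scattering rate, and a toy scattering model of ours, not material data): the particle drifts on a Newtonian trajectory for an exponentially distributed free-flight time and is then scattered into a new momentum state.

```python
import math
import random

def single_particle_mc(t_end=1e-12, force=1.0e-15, rate=1.0e13, seed=2):
    """Follow one particle for t_end seconds; return its final momentum."""
    rng = random.Random(seed)
    t, p = 0.0, 0.0
    while t < t_end:
        # free-flight duration: exponential with the constant scattering rate;
        # 1 - random() lies in (0, 1], avoiding log(0)
        dt = -math.log(1.0 - rng.random()) / rate
        dt = min(dt, t_end - t)
        p += force * dt                  # Newtonian drift: dp/dt = F
        t += dt
        if t < t_end:                    # a scattering event interrupts the drift
            p = rng.gauss(0.0, 1.0e-25)  # toy randomization of the momentum
    return p

print(single_particle_mc())
```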
The simulation approaches developed to solve the Boltzmann equation distin-
guish the golden era of device modeling. A stable further reduction of device
dimensions has been achieved with their help. For example, the Intel Core2 brand
introduced in 2006 comprises single-, dual-, and quad-core processors, which are based
on a 45 nm technology and strained silicon, contain on the order of 10⁹ transistors, and
reach an operating frequency of 3 GHz.
Today's devices have active regions on the nanometer scale and reach
electromagnetic radiation on the terahertz scale, which marks the beginning of the
nanoera of semiconductor electronics. The physical limits of the materials, key concepts,
and technologies associated with the revolutionary development of the field are
now being approached. Novel effects and phenomena arise due to the granularity of matter,
the finite size of the carriers, the finite duration of scattering, the ultra-fast evolution, and other
processes beyond the Boltzmann model of classical transport.
Quantum processes begin to dominate the carrier kinetics and need a corresponding
¹ It should be noted that analytic models are widely used even now, however for the simulation
of complete circuits comprising a huge number of transistors. These are the so-called compact
models, partially developed by using transistor characteristics provided by device simulations and
measurements.
² Deterministic approaches to the problem have been developed recently [19, 20]. They rely on the
power of modern computational platforms to compute the distribution function in the cases in
which a high precision is needed.
the field. This enables a powerful tool for the analysis of different processes and
phenomena, the development of novel concepts and structures, and the design of novel
devices. In parallel, already 40 years have been devoted to the refinement and development
of novel algorithms to meet the challenges posed by the development of the
physical models. One of our goals is to follow the thread from the first intuitive
algorithms, devised by phenomenological considerations, to the formal iteration
approach, which unifies the existing algorithms and provides a platform for devising
novel ones.
The conformity between the physical and numerical aspects characterizing the
classical models is lacking for the models in the upper, quantum part of Fig. 1.1.
The reasons for this are associated with the fact that this area of device modeling
is relatively young and still under development, the variety of representations of
quantum mechanics, the physical and numerical complexity, and the lack of physical
transparency, which impedes the development of intuitive approaches. Thus, a
universal quantum transport model is missing, along with a corresponding numerical
method for solving it.
The NonEquilibrium Green’s Functions (NEGF) formalism provides the most
comprehensive physical description and thus is widely used in almost all fields of
physics, from many body theory to seismology. Introduced by methods of many
body perturbation theory [21], the formalism has been re-derived starting from the
one-electron Schrödinger equation [22] in terms convenient for device modeling,
so that it soon became the approach of choice for a vast number of researchers in the
nanoelectronic community. The formalism accounts for both spatial and temporal
correlations as well as decoherence processes such as interaction with lattice vibra-
tions (phonons). In general there are two position and two time coordinates, which
indicates a non-Markovian evolution. A deterministic approach to the model, based
on a recursive algorithm, allows the simulation of two-dimensional problems (four
spatial variables) in the case of ballistic transport, where processes of decoherence
are neglected [23].
The numerical burden characterizing two-dimensional problems can be further
reduced by a variable decomposition which singles out a transport direction,
assuming a potential that is homogeneous in the direction normal to the transport. The
inclusion of processes of decoherence, like the interaction with phonons at different
levels of approximation [24, 25], seriously impedes the computational process. The
same holds true for self-consistent coupling with the Poisson equation. In general
the NEGF formalism provides a feasible modeling approach for near-equilibrium
transport in near ballistic regimes, when the coupling with the sources of scattering
is weak.
The next level of description simplifies the physical picture at the expense of the
correlations in time. The density matrix and Wigner function formalisms, unitary
equivalents linked by a Fourier transform, maintain a Markovian evolution in time,
which makes them convenient for the treatment of time-dependent problems. The
former is characterized by two spatial coordinates, while in the latter one of the
spatial coordinates is replaced by a momentum, so that it is known as 'quantum
mechanics in phase space'.
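A minimal sketch of this link (hbar = 1 and a toy Gaussian pure state, chosen so the result is known): the Wigner function is obtained from the density matrix by a Fourier transform in the off-diagonal coordinate, W(x, p) = (1/2π) ∫ ρ(x + s/2, x − s/2) e^{−ips} ds.

```python
import cmath
import math

def rho(x1, x2, sigma=1.0):
    # pure Gaussian state psi(x) ~ exp(-x^2 / (4 sigma^2)), normalized
    psi = lambda x: math.exp(-x * x / (4.0 * sigma * sigma))
    norm2 = 1.0 / (sigma * math.sqrt(2.0 * math.pi))  # |normalization|^2
    return norm2 * psi(x1) * psi(x2)

def wigner(x, p, s_max=10.0, n=2000):
    # midpoint-rule Fourier transform over the off-diagonal coordinate s
    ds = 2.0 * s_max / n
    acc = 0.0 + 0.0j
    for i in range(n):
        s = -s_max + (i + 0.5) * ds
        acc += rho(x + 0.5 * s, x - 0.5 * s) * cmath.exp(-1j * p * s) * ds
    return acc.real / (2.0 * math.pi)

print(wigner(0.0, 0.0))  # ~1/pi = 0.318 at the phase-space origin
```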