Modeling and Simulation in Science,
Engineering and Technology

Mihail Nedjalkov
Ivan Dimov
Siegfried Selberherr

Stochastic Approaches
to Electron Transport
in Micro- and
Nanostructures
Modeling and Simulation in Science,
Engineering and Technology

Series Editors
Nicola Bellomo, Department of Mathematical Sciences, Politecnico di Torino, Torino, Italy
Tayfun E. Tezduyar, Department of Mechanical Engineering, Rice University, Houston, TX, USA

Editorial Board Members
Kazuo Aoki, National Taiwan University, Taipei, Taiwan
Yuri Bazilevs, School of Engineering, Brown University, Providence, RI, USA
Mark Chaplain, School of Mathematics and Statistics, University of St. Andrews, St. Andrews, UK
Pierre Degond, Department of Mathematics, Imperial College London, London, UK
Andreas Deutsch, Center for Information Services and High-Performance Computing, Technische Universität Dresden, Dresden, Sachsen, Germany
Livio Gibelli, Institute for Multiscale Thermofluids, University of Edinburgh, Edinburgh, UK
Miguel Ángel Herrero, Departamento de Matemática Aplicada, Universidad Complutense de Madrid, Madrid, Spain
Thomas J. R. Hughes, Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, TX, USA
Petros Koumoutsakos, Computational Science and Engineering Laboratory, ETH Zürich, Zürich, Switzerland
Andrea Prosperetti, Cullen School of Engineering, University of Houston, Houston, TX, USA
K. R. Rajagopal, Department of Mechanical Engineering, Texas A&M University, College Station, TX, USA
Kenji Takizawa, Department of Modern Mechanical Engineering, Waseda University, Tokyo, Japan
Youshan Tao, Department of Applied Mathematics, Donghua University, Shanghai, China
Harald van Brummelen, Department of Mechanical Engineering, Eindhoven University of Technology, Eindhoven, Noord-Brabant, The Netherlands

More information about this series at http://www.springer.com/series/4960


Mihail Nedjalkov • Ivan Dimov •
Siegfried Selberherr

Stochastic Approaches
to Electron Transport
in Micro- and Nanostructures
Mihail Nedjalkov
Institute for Microelectronics, Faculty of Electrical Engineering and Information Technology, Technische Universität Wien, Wien, Austria
Institute for Information and Communication Technologies, Bulgarian Academy of Sciences, Sofia, Bulgaria

Ivan Dimov
Institute for Information and Communication Technologies, Bulgarian Academy of Sciences, Sofia, Bulgaria

Siegfried Selberherr
Institute for Microelectronics, Faculty of Electrical Engineering and Information Technology, Technische Universität Wien, Wien, Austria

ISSN 2164-3679 ISSN 2164-3725 (electronic)


Modeling and Simulation in Science, Engineering and Technology
ISBN 978-3-030-67916-3 ISBN 978-3-030-67917-0 (eBook)
https://doi.org/10.1007/978-3-030-67917-0

Mathematics Subject Classification: 45B05, 45D05, 37M05, 81-08, 60-08, 60J85, 60J35, 65Z05, 65C05, 65C35, 65C40

© Springer Nature Switzerland AG 2021


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.

This book is published under the imprint Birkhäuser, www.birkhauser-science.com, by the registered
company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

Computational modeling is an important subject in microelectronics, as it is the only alternative to expensive experiments for the design and characterization of the basic integrated circuit elements, the semiconductor devices. Modeling comprises the mathematical, physical, and electrical engineering approaches needed to compute the electrical behavior of these structures. The role of the computational approach is twofold: (1) to derive models describing the current transport processes in a given structure in terms of governing equations, initial and boundary conditions, and relevant physical quantities, and (2) to derive efficient numerical approaches for their evaluation. These are the two sides of the same problem, since physical comprehension is achieved at the expense of increased numerical complexity. Stochastic approaches are the most widely used in the field, in particular because they reduce memory requirements at the expense of longer simulation times and avoid approximation procedures that require additional regularity.
The primary motivation of our book is to present the synergistic link between the
development of mathematical models of current transport in semiconductor devices
and the emergence of stochastic methods for their simulation. The book fills the
gap between other monographs, which focus on the development of the theory and
the physical aspects of the models [1, 2], their application [3, 4], and the purely
theoretical Monte Carlo approaches for solving Fredholm integral equations [5].
Specific details about this book are given in the following.
The golden era of classical microelectronics is characterized by models based on
the Boltzmann transport equation. Their physical transparency in terms of particles fostered the widespread development of Monte Carlo methods in the field. At
the beginning, almost 50 years ago, a variety of phenomenological algorithms,
such as the Monte Carlo Single-Particle algorithm for stationary transport
in the presence of boundary conditions, the Monte Carlo Ensemble algorithm
relevant for transient processes with initial or boundary conditions, and Monte
Carlo algorithms for small signal analysis, were derived using the probabilistic
interpretation of the processes described. Thus, the stochastic method is viewed
as a simulated experiment, which emulates the elementary processes in the electron
evolution. The fact that these algorithms solve the transport equation was proved

v
vi Preface

afterward. The inverse perspective, algorithms from the transport model, developed
during the last 15 years of the twentieth century, gave rise to a universal approach
based on the formal application of the numerical Monte Carlo theory to the integral form of the transport model. This is the Iteration Approach, which allows one to unify the existing algorithms as particular cases as well as to derive novel
algorithms with refined properties. These are the high precision algorithms based on
backward evolution in time and algorithms with improved statistics based on event
biasing, which stimulates the generation of rare events. As applied to the problem
of self-consistent coupling with the Poisson equation, the approach gave rise to
self-consistent event biasing and the concept for time-dependent particle weights.
An important feature of the Iteration Approach is that the original model can be
reformulated within the context of a Monte Carlo analysis in a way that allows for
a novel improved model of the underlying physics.
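The essence of this formal application can be illustrated on a generic Fredholm integral equation of the second kind, f = f0 + Kf, whose Neumann series is sampled by a random walk: each trajectory accumulates a weight from the kernel and scores the free term at every visited point. The following minimal Python sketch is our own illustration with an arbitrarily chosen kernel and free term; it is not an algorithm from this book, only the standard estimator on which the Iteration Approach builds.

```python
import math
import random

def f0(x):
    """Free term of the toy Fredholm equation (chosen arbitrarily)."""
    return 1.0

def K(x, y):
    """Toy kernel; its norm is below 1, so the Neumann series converges."""
    return 0.4 * math.exp(-x * y)

def estimate_f(x0, n_walks=200_000, survive=0.9):
    """Estimate f(x0) for f = f0 + Kf by sampling the Neumann series.

    Every random walk scores w * f0(x) at each visited point; the weight w
    accumulates kernel values divided by the transition density (uniform on
    [0, 1]) and by the survival probability, which keeps the estimator unbiased.
    """
    total = 0.0
    for _ in range(n_walks):
        x, w = x0, 1.0
        total += w * f0(x)              # zeroth term of the Neumann series
        while random.random() < survive:
            y = random.random()          # next point, density p(y) = 1 on [0, 1]
            w *= K(x, y) / survive       # weight update: kernel / (density * survival)
            total += w * f0(y)           # contribution of the next Neumann term
            x = y
    return total / n_walks

if __name__ == "__main__":
    print(estimate_f(0.5))               # stochastic estimate of f(0.5)
```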
The era of nanoelectronics involves novel quantum phenomena, involving
quantities with phases and amplitudes that give rise to resonance and interference
effects. This dramatically changes the computational picture.
• Quantum phenomena cannot be described as a cumulative sum of probabilities
and thus by phenomenological particle models.
• They demand an enormous increase of the computational requirements and need efficient algorithms: (1) a small difference in the input settings can lead to very different solutions, and (2) the so-called sign problem of quantum computations means that the evaluated values are often the result of a cancellation of large numbers with different signs, as illustrated by the short sketch after this list.
• The involved physical phenomena resulting from a complicated interplay
between quantum coherence and processes of decoherence due to interaction
with the environment are not well understood. As a consequence, the mathematical models, and thus the corresponding algorithms, are still in the process of development. A synergistic relationship between model and method
is therefore of crucial importance.
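To make the sign problem concrete, the following toy fragment (our illustration, not taken from the book) estimates a quantity whose exact value is 1 from samples that are individually large and of opposite sign; the statistical error grows with the magnitude of the cancelling contributions, so ever more samples are needed for the same accuracy.

```python
import random
import statistics

def signed_mean(scale, n=100_000):
    """Samples take the values +scale or -scale; the probabilities are tuned so
    that the exact mean is always 1, yet the standard deviation grows like scale."""
    p_plus = 0.5 * (1.0 + 1.0 / scale)
    samples = [scale if random.random() < p_plus else -scale for _ in range(n)]
    mean = statistics.fmean(samples)
    stat_error = statistics.stdev(samples) / n ** 0.5
    return mean, stat_error

for scale in (1.0, 10.0, 100.0):
    mean, err = signed_mean(scale)
    print(f"scale={scale:7.1f}  estimated mean={mean:7.3f}  statistical error={err:6.3f}")
```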
A promising strategy is the application of the Iteration Approach in conjunction
with the Wigner formulation of quantum mechanics. This is the formalism of
choice, since many concepts and notions of classical mechanics, like phase space
and distribution function, are retained. A seamless transition between quantum and
classical descriptions is provided, which allows an easy identification of coherence
effects. The considered system of electrons interacting with the lattice vibrations
(phonons) can be formally described by an infinite set of linked equations. A
hierarchy of assumptions and approximations is necessary to close the system and to
make it numerically accessible. Depending on these assumptions, different quantum
effects are retained in the derived hierarchy of mathematical models. The homo-
geneous Levinson and Barker-Ferry equations have been generalized to account
for the spatial electron evolution in quantum wires. A Monte Carlo Backward
algorithm has been derived and implemented to reveal a variety of quantum
effects, like the lack of energy conservation during electron-phonon interaction, the
intra-collisional field effect, and the ultra-fast spatial transfer. Numerical analysis
shows an exponential increase of the variance of the method with the evolution
time, associated with the non-Markovian character of the physical evolution. Further
approximations give rise to the Wigner-Boltzmann equation, where the spatial
evolution is entirely quantum mechanical, while the electron-phonon interaction is
classical. The application of the Iteration Approach to the adjoint equation leads to
a fundamental particle picture. Quantum particles retain their classical attributes
like position and velocity but are associated with novel attributes like weight
that carries the quantum information. The weight changes its magnitude and sign
during the evolution according to certain rules, and two particles meeting in the phase space can merge into one particle that combines their weights. Two algorithms, the Wigner
Weighted and the Wigner Generation algorithms, are derived for the case
of stationary problems determined by the boundary conditions. The latter algorithm
refines the particle attributes by replacing the weight with a particle sign, so that
now particles are generated according to certain rules and can also annihilate each
other. These concepts have been generalized during the last decade for the transient
Wigner-Boltzmann equation, where also a discrete momentum is introduced. The
corresponding Signed-Particle algorithm allowed the computation of multi-
dimensional problems. It has been shown that the particle generation or annihilation
process in the discrete momentum space is an alternative to Newtonian acceleration.
Furthermore, the concept of signed particles allows for an equivalent formulation of Wigner quantum mechanics and thus makes it possible to interpret and understand the involved quantum processes. Recently, the signed-particle approach has been adapted to study entangled systems, many-body effects in atomic systems, neural networks, and
problems involving the density functional theory. However, such application aspects
are beyond the scope of this book. Instead, we focus on important computational
aspects such as convergence of the Neumann series expansion of the Wigner
equation, the existence and uniqueness of the solution, numerical efficiency, and
scalability.
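As a minimal illustration of the sign and annihilation concepts summarized above, the following fragment cancels pairs of particles of opposite sign that meet in the same phase-space cell. It is a hypothetical sketch with an invented data layout and cell indexing; the actual rules and algorithms are derived in Part III.

```python
from collections import defaultdict

def annihilate(particles, dx, dk):
    """Cancel +/- particle pairs that fall into the same (x, k) cell.

    particles: list of (x, k, sign) tuples with sign = +1 or -1.
    Returns the surviving net particles, re-emitted at the cell centres.
    """
    net = defaultdict(int)
    for x, k, sign in particles:
        cell = (int(x // dx), int(k // dk))
        net[cell] += sign                       # opposite signs cancel here
    survivors = []
    for (ix, ik), count in net.items():
        if count == 0:
            continue                            # full annihilation in this cell
        sign = 1 if count > 0 else -1
        survivors.extend(((ix + 0.5) * dx, (ik + 0.5) * dk, sign)
                         for _ in range(abs(count)))
    return survivors

# Two particles of opposite sign in the same cell annihilate; the third survives.
print(annihilate([(0.10, 0.20, +1), (0.12, 0.21, -1), (0.90, 0.90, +1)], 0.5, 0.5))
```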
A key message of the book is that the enormous success of quantum particle
algorithms is based on computational experience accumulated in the field for more
than 50 years and rooted in the classical Monte Carlo algorithms.
This book is divided into three parts.
The introductory Part I is intended to establish the concepts from statistical
mechanics, solid-state physics, and quantum mechanics in phase space that are
necessary for the formulation of mathematical models. It discusses the role and
problems of modeling semiconductor devices, introduces the semiconductor proper-
ties, phase space and trajectories, and the Boltzmann and Wigner equations. Finally,
it presents the foundations of Monte Carlo methods for the evaluation of integrals,
solving integral equations, and reformulating the problem with the use of the adjoint
equation.
Part II considers the development of Monte Carlo algorithms for classical
transport. It follows the historical layout, starting with the Monte Carlo
Single-Particle and Ensemble algorithms. Their generalization under
the Iteration Approach, its application to small signal analysis, and the derivation of
a general self-consistent Monte Carlo algorithm with weights for the mixed problem
with initial and boundary conditions are discussed in detail. The development and
application of the classical Monte Carlo algorithms is formulated with the help of
seven assertions, ten theorems, and thirteen algorithms.
Part III is dedicated to quantum transport modeling. The derivation of a hierarchy
of models ranging from the generalized Wigner equation for the coupled electron-
phonon system to the classical Boltzmann equation is presented. The development
of respective algorithms on the basis of the Iteration Approach gives rise to
an interpretation of quantum mechanics in terms of particles that carry a sign.
Stationary and transient algorithms are presented, which are unified by the concepts
of the signed-particle approach. Particularly interesting is the Signed-Particle
algorithm suitable for transient transport simulation. This part is based on five
theorems and three algorithms, which shows in particular that stochastic modeling of quantum transport is still at an early stage of development compared to the
classical counterpart. Auxiliary material and details concerning the three parts are
given in the Appendix.
The targeted readers are from the full range of professionals and students with
pertinent interest in the field: engineers, computer scientists, and mathematicians.
The needed concepts and notions of solid-state physics and probability theory
are carefully introduced with the aim of a self-contained presentation. To ensure a didactic perspective, the complexity of the algorithms increases progressively.
However, certain parts of specific interest to experts are prepared to provide a stand-
alone read.

Vienna, Austria Mihail Nedjalkov


Sofia, Bulgaria Ivan Dimov
Vienna, Austria Siegfried Selberherr
April 2020
Introduction to the Parts

The introductory Part I aims to present the engineering, physical, and mathematical
concepts and notions needed for the modeling of classical and quantum transport
of current in semiconductor devices. It contains four chapters: “Concepts of Device
Modeling” considers the role of modeling, the basic modules of device modeling,
and the hierarchy of transport models which describe at different levels of physical
complexity the electron evolution, or equivalently, the electron transport process.
This process is based on the fundamental characteristics of an electron, imposed by
the periodic crystal lattice where it exists. The lattice affects the electron evolution
also by different violations of the periodicity, such as impurity atoms and atom
vibrations (phonons). Basic features of crystal lattice electrons and their interaction
with the phonons are presented in the next chapter “The Semiconductor Model:
Fundamentals”. The third chapter “Transport Theories in Phase Space” is focused
on the classical and quantum transport theory, deriving the corresponding equations
of motion that determine the physical observables. In Chapter 4, we introduce the
basic notions of the numerical Monte Carlo theory that utilizes random variables
and processes for evaluation of sums, integrals, and integral equations. The needed
concepts of probability theory are presented in the Appendix. We stick to a top-level description in order to introduce nonexperts to the field and to underline the mutual interdependence of the physical and mathematical aspects. Aiming for a self-contained presentation, we sometimes need to refer to text placed further in the sequel. In our opinion this is a better alternative to numerous references to the
specialized literature in the involved fields.
Part II considers the Monte Carlo algorithms for classical carrier transport
described by the Boltzmann equation, beginning with the first approaches for mod-
eling of stationary or time-dependent problems in homogeneous semiconductors.
The homogeneous phase space is represented by the components of the momentum variable, which enhances the physical transparency of the transport process.
The first algorithms, the Monte Carlo Single-Particle and the Ensemble
algorithms, are derived by phenomenological considerations and thus are perceived
as emulation of the natural processes of drift and scattering determining the carrier
distribution in the momentum space. Later, these algorithms are generalized for the
inhomogeneous task, where the phase space is extended by the spatial variable.
In parallel, certain works evolve the initially intuitive link between mathematical
and physical aspects of these stochastic algorithms by proving that they provide
solutions of the corresponding Boltzmann equations.
The next generation algorithms are already devised with the help of alternative
formulations of the transport equation. These are algorithms for statistical enhance-
ment based on trajectory splitting, of a backward (back in time) evolution and of
a trajectory integral. It appears that these algorithms have a common foundation:
They are generalized by the Iteration Approach, where a formal application of
the numerical Monte Carlo theory to integral forms of the Boltzmann equation
allows one to devise any of these particular algorithms. This approach gives rise to a novel class of algorithms based on event biasing and can be applied to novel transport problems, such as the small signal analysis discussed in Chap. 7. The
computation of important operational device characteristics imposes the problem
of stationary inhomogeneous transport with boundary conditions. The Monte Carlo
Single-Particle algorithm used for this problem, initially devised by phe-
nomenological considerations and by an assumption of ergodicity of the system, is
now obtained by the Iteration Approach, and it is shown that the ergodicity follows
from the stationary physical conditions: Namely, it is shown that the average over an ensemble of nonequilibrium, but macroscopically stationary, carriers can be replaced by a time average over a single carrier trajectory. From a physical
point of view, the place of the boundaries is well defined by the device geometry.
From a numerical point of view, an iteration algorithm (but not its performance)
should be independent of the choice of place and shape of the boundaries. An
analysis is presented, showing that the simulation in a given domain provides such
boundary conditions in an arbitrary subdomain and that if the latter are used in a
novel simulation in the subdomain, the obtained results will coincide with those
obtained from the primary simulation.
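The ergodic replacement invoked in this paragraph can be stated schematically as follows, where A is a generic physical quantity, f_s the stationary distribution function, and (r(t), k(t)) a single simulated carrier trajectory; the notation is ours and only anticipates the precise formulation given in Part II:

\[
\langle A \rangle \;=\; \frac{\int A(\mathbf{r},\mathbf{k})\, f_s(\mathbf{r},\mathbf{k})\, d\mathbf{r}\, d\mathbf{k}}{\int f_s(\mathbf{r},\mathbf{k})\, d\mathbf{r}\, d\mathbf{k}}
\;=\; \lim_{T\to\infty} \frac{1}{T}\int_0^T A\bigl(\mathbf{r}(t),\mathbf{k}(t)\bigr)\, dt .
\]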
The most general transport problem, determined by initial and boundary con-
ditions, is considered at the end of Part II. The corresponding random variable is
analyzed and the variance evaluated. Iteration algorithms for statistical enhancement
based on event biasing are derived. Finally, the self-consistent coupling scheme of
these algorithms with the Poisson equation is derived.
Part III is devoted to the development of stochastic algorithms for quantum
transport, which is facilitated by the experience accumulated from the classical
counterpart. The same formal rules for application of the Monte Carlo theory hold
for the quantum case; in particular, an integral form of the transport equation is
needed. Furthermore, a maximal part of the kernel components must be included
in the transition probabilities needed for construction of the trajectories. The pecu-
liarity now is that there is no probabilistic interpretation of the kernel components, which is inherent to classical transport models. Quantum kernels are in principle oscillatory.
The way to treat them is to decompose the kernel into a linear combination of
positive functions and then to associate the corresponding signs to the random
variable. This is facilitated by the analogy between the classical and the quantum
(Wigner) theories and the fact that most of the concepts and notions remain valid
in both formalisms. The level of abstraction of the forward in time algorithms


corresponds to that of the classical Weighted Ensemble algorithm, while
the backward algorithms are formal enough not to discriminate classical from
quantum tasks. In this respect, Part III focuses on the derivation of quantum
transport models that account, at different levels of approximation, for the important physical phenomena composing the transport process. Backward algorithms used
for the quantum carrier-phonon models are not discussed in favor of the forward
approach to the Wigner-Boltzmann equation. The forward approach allows for
alternative kernel decompositions and their stochastic interpretation. The existence
of trajectories is associated with particles, whose properties are dictated by the
particular decomposition. Quantum particle concepts and notions are consecutively
introduced and unified into particle models, composing the Wigner signed-particle
approach. Synergistically, Wigner signed particles provide a heuristic interpretation
and explanation of the physical processes behind complicated mathematical expres-
sions. In contrast to the classical algorithms, which give mathematical abstraction
of the involved physics, here we observe the inverse situation: Quantum algorithms
give insight into the highly counterintuitive quantum physics. The signed-particle
approach has been used with alternative computational models successfully applied
to analyze transport in modern multi-dimensional nanostructures.
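The kernel decomposition mentioned above can be sketched schematically as follows (our generic notation, anticipating the concrete constructions of Part III): the transition density of the walk is built from the modulus of the kernel, while the sign is transferred to the random variable,

\[
K(x,x') = K_+(x,x') - K_-(x,x'), \qquad K_\pm \ge 0,
\]
\[
p(x \to x') = \frac{|K(x,x')|}{\int |K(x,y)|\, dy}, \qquad s(x,x') = \operatorname{sgn} K(x,x'),
\]

with the sign s(x, x') multiplying the weight of the random variable at every transition of the walk.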
Contents

Part I Aspects of Electron Transport Modeling


1 Concepts of Device Modeling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1 About Microelectronics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 The Role of Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Modeling of Semiconductor Devices. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.1 Basic Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.2 Transport Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.3 Device Modeling: Aspects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2 The Semiconductor Model: Fundamentals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1 Crystal Lattice Electrons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1.1 Band Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1.2 Carrier Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.1.3 Charge Transport . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2 Lattice Imperfections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2.1 Phonons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2.2 Phonon Scattering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3 Transport Theories in Phase Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.1 Classical Transport: Boltzmann Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.1.1 Phenomenological Derivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.1.2 Parametrization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.1.3 Classical Distribution Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.2 Quantum Transport: Wigner Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.2.1 Operator Mechanics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2.2 Quantum Mechanics in Phase Space . . . . . . . . . . . . . . . . . . . . . . 33
3.2.3 Derivation of the Wigner Equation . . . . . . . . . . . . . . . . . . . . . . . . 34
3.2.4 Properties of the Wigner Equation . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.2.5 Classical Limit of the Wigner Equation . . . . . . . . . . . . . . . . . . . 37


4 Monte Carlo Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39


4.1 The Monte Carlo Method for Solving Integrals . . . . . . . . . . . . . . . . . . . . 39
4.2 The Monte Carlo Method for Solving Integral Equations . . . . . . . . . . 40
4.3 Monte Carlo Integration and Variance Analysis . . . . . . . . . . . . . . . . . . . . 42

Part II Stochastic Algorithms for Boltzmann Transport


5 Homogeneous Transport: Empirical Approach . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.1 Single-Particle Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.1.1 Single-Particle Trajectory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5.1.2 Mean Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.1.3 Concept of Self-Scattering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
5.1.4 Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
5.2 Ensemble Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.3 Algorithms for Statistical Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
6 Homogeneous Transport: Stochastic Approach . . . . . . . . . . . . . . . . . . . . . . . . . 55
6.1 Trajectory Integral Algorithm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
6.2 Backward Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
6.3 Iteration Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
6.3.1 Derivation of the Backward Algorithm . . . . . . . . . . . . . . . . . . 58
6.3.2 Derivation of Empirical Algorithms . . . . . . . . . . . . . . . . . . . . . . . 59
6.3.3 Featured Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
7 Small Signal Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
7.1 Empirical Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
7.1.1 Stationary Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
7.1.2 Time Dependent Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
7.2 Iteration Approach: Stochastic Model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
7.3 Iteration Approach: Generalizing the Empirical Algorithms. . . . . . . 69
7.3.1 Derivation of Finite Difference Algorithms . . . . . . . . . . . . . . . 69
7.3.2 Derivation of Collinear Perturbation Algorithms . . . . . . . . . 70
8 Inhomogeneous Stationary Transport. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
8.1 Stationary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
8.2 Iteration Approach: Forward Stochastic Model. . . . . . . . . . . . . . . . . . . . . 76
8.2.1 Adjoint Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
8.2.2 Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
8.3 Iteration Approach: Single-Particle Algorithm
and Ergodicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
8.3.1 Averaging on Before-Scattering States . . . . . . . . . . . . . . . . . . . . 82
8.3.2 Averaging in Time: Ergodicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
8.3.3 The Choice of Boundary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
8.4 Iteration Approach: Trajectory Splitting Algorithm . . . . 89
8.5 Iteration Approach: Modified Backward Algorithm . . . . . . . . . 89
8.6 A Comparison of Forward and Backward Approaches. . . . . . . . . . . . . 91
9 General Transport: Self-Consistent Mixed Problem . . . . . . . . . . . . . . . . . . . 93


9.1 Formulation of the Problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
9.2 The Adjoint Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
9.3 Initial and Boundary Conditions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
9.3.1 Initial Condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
9.3.2 Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
9.3.3 Carrier Number Fluctuations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
9.4 Stochastic Device Modeling: Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
10 Event Biasing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
10.1 Biasing of Initial and Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . 108
10.1.1 Initial Condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
10.1.2 Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
10.2 Biasing of the Natural Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
10.2.1 Free Flight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
10.2.2 Phonon Scattering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
10.3 Self-Consistent Event Biasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

Part III Stochastic Algorithms for Quantum Transport


11 Wigner Function Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
12 Evolution in a Quantum Wire . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
12.1 Formulation of the Problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
12.2 Generalized Wigner Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
12.3 Equation of Motion of the Diagonal Elements. . . . . . . . . . . . . . . . . . . . . . 127
12.4 Closure at First-Off-Diagonal Level. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
12.5 Closure at Second-Off-Diagonal Level. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
12.5.1 Approximation of the fF+OD Equation . . . . . . . . . . . . . . . . . . . . 132
12.5.2 Approximation of the fF−OD Equation . . . . . . . . . . . . . . . . . . . . 141
12.5.3 Closure of the Equation System . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
12.6 Physical Aspects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
12.6.1 Heuristic Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
12.6.2 Phonon Subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
13 Hierarchy of Kinetic Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
13.1 Reduced Wigner Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
13.2 Evolution Equation of the Reduced Wigner Function . . . . . . . . . . . . . . 150
13.3 Classical Limit: The Wigner-Boltzmann Equation . . . . . . . . . . . . . . . . . 151
14 Stationary Quantum Particle Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
14.1 Formulation of the Stationary Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
14.1.1 The Stationary Wigner-Boltzmann Equation . . . . . . . . . . . . . . 156
14.1.2 Integral Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
14.2 Adjoint Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
14.3 Iterative Presentation of the Mean Quantities . . . . . . . . . . . . . . . . . . . . . . . 159
14.4 Monte Carlo Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160


14.4.1 Injection at Boundaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
14.4.2 Probability Interpretation of K̃ . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
14.4.3 Analysis of à . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
14.5 Stochastic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
14.5.1 Stationary Wigner Weighted Algorithm . . . . . . . . . . . . . 163
14.5.2 Stationary Wigner Generation Algorithm . . . . . . . . . . 165
14.5.3 Asymptotic Accumulation Algorithm . . . . . . . . . . . . 167
14.5.4 Physical and Numerical Aspects . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
15 Transient Quantum Particle Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
15.1 Bounded Domain and Discrete Wigner Formulation . . . . . . . . . . . . . . . 176
15.1.1 Semi-Discrete Wigner Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
15.1.2 Semi-Discrete Wigner Potential . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
15.1.3 Semi-Discrete Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
15.1.4 Signed-Particle Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 183
15.2 Simulation of the Evolution Duality. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
15.3 Iteration Approach: Signed Particles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188

Appendix A . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
A.1 Correspondence Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
A.2 Physical Averages and the Wigner Function . . . . . . . . . . . . . . . . . . . . . . . . 192
A.3 Concepts of Probability Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
A.4 Generating Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
A.5 Classical Limit of the Phonon Interaction. . . . . . . . . . . . . . . . . . . . . . . . . . . 201
A.6 Phonon Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
A.7 Forward Semi-Discrete Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Part I
Aspects of Electron Transport Modeling
Chapter 1
Concepts of Device Modeling

The historical development of microelectronics is characterized by the growing importance of modeling, which gradually embraced all stages of preparation and operation of an integrated circuit (IC). Device modeling focuses on the processes determining the electrical properties and behavior of the individual IC elements. The trend towards their miniaturization imposes the development of refined physical models, which account for a growing variety of processes occurring on spatial scales ranging from microscopic to nanoscopic and on time scales from picoseconds to femtoseconds. Device modeling comprises two fundamental modules of coupled electromagnetic and transport equations. The former provides the governing forces, which arise primarily due to the differences of the potentials applied to the contacts. The latter links the material properties of the semiconductor with the carrier dynamics caused by these forces depending on the transport description [2]. The material properties, such as band structure and effective mass, characteristic vibrations of the lattice, dielectric and other characteristic coefficients, are specified by the semiconductor model. A good understanding of semiconductor physics is needed to synthesize the semiconductor model, which is then further used to formulate a hierarchy of transport models. This hierarchy unifies fundamental concepts and notions of solid-state physics, statistical mechanics, and quantum mechanics in phase space, introduced in this section.

1.1 About Microelectronics

The development of the semiconductor electronic industry can be traced back to 1947, to the invention of the transistor by a Bell Telephone Labs group of
scientists jointly awarded the 1956 Nobel Prize in Physics for their achievement [6].
Subject of initial interest were bipolar devices based on poly-crystalline germanium,
but soon the focus migrated towards mono-crystalline materials and especially
silicon. The MOSFET (Metal Oxide Semiconductor Field Effect Transistor) has
been established as a basic circuit element and the field effect as the fundamental principle of operation of electronic structures [7]. Since the very beginning, circuit
manufacturers have been able to pack more and more transistors onto a single
silicon chip. The following facts illustrate the revolutionary onset of the process of
integration: (1) a chip developed in 1962 has 8 transistors; (2) the chips developed in each following year from 1963 to 1965 have 16, 32, and 64 transistors, respectively [8]. MOS memories appeared after 1968, the first microprocessor in 1971, and 10 years later computers based on large scale integration (LSI) circuits were already available.
Gordon Moore [9], one of the founders of Intel, observed in accordance with (1)
and (2) that the number of transistors was doubling every year and anticipated in
1965 that this rate of growth would continue for at least a decade. Ten years later, looking at the next decade, he revised his prediction to a doubling every 2 years. This
tendency continued for more than 40 years, so that today the most complex silicon
chips have 10 billion transistors, a billion-fold increase of the transistor density over this period.
The dimensions of the transistors shrink accordingly. Advances in lithography, that is to say in the accuracy of patterning, evolve in steps called technology nodes. The latter refers to the size of the transistors in a chip or to half of the typical distance between two identical circuit elements. Technology nodes of 100 nm and below became accessible to the industry around 2005. The Intel Pentium D processor, one of the early workhorses for desktop computers, is based on a 90 nm technology, while during the last decade lithographic processes at 45, 22 and 11 nm came into use [10]. The widely accepted definition of nanotechnology involves critical dimensions below 100 nm, so that semiconductor technology entered the nanoera during the first decade of the twenty-first century.
New architectures are needed as the dimensions of transistors are scaled down
to keep the same conventional functions and compensate for new phenomena
appearing as the thickness of the gate decreases. Accordingly, the structure of the IC elements becomes more complicated, aiming to maintain performance and functionality. The planar complementary metal–oxide–semiconductor (CMOS) device was suggested in 1963 as an ultimate solution for ICs [11]. By the early 1970s CMOS transistors gradually became the dominant device type for many integrated electronic applications [8]. The third dimension of the device design was added to the planar technology in 1979 with the three-dimensional (3D) CMOS transistor pair, also called the CMOS hamburger [11].
Shrinking the sizes raises the influence of certain physical phenomena on the
operational characteristics of the transistors. Among them are the so-called 'short channel' effects, which negatively impact the electrical control. Novel 3D structures,
FinFETs also known as Tri-gate transistors, have been developed at the end of the
century to reduce short channel effects. The active region of current flow, called
channel, is a thin semiconductor (Si, or a high mobility material like Ge or a III-
V compound) “fin” surrounded by the gate electrode. This allows for both a better
electrostatic control and for a higher drive current. The FinFET architecture became the primary transistor design at the 22 nm technology node and below. Here, however,
quantum processes begin to modify fundamental classical transport notions. This is related to the transition from delocalized carriers, which characterize the transport in a bulk semiconductor, to carriers confined in one or more directions. Energy quantization of the confined carrier states gives rise to complicated energy selection relations. Furthermore, the carrier is localized in the directions of confinement, which leads to a momentum uncertainty and thus to a lack of momentum conservation [12]. The
latter characterizes the next promising design of quantum electronic devices, with
prototypical implementations based on nanosheets, nanowires, and quantum dots
[13].
An important negative effect which appears on the nanometer scale is related to
heat generation, which poses a further limitation to packing more transistors onto a
chip. While transistors get smaller, their power must remain constant, which poses
a barrier to the clock speed. The frequency has been limited to around 4 GHz since 2005 because of power density and heat issues. Furthermore, the granularity of matter begins to influence the transistor functionality. The radius of a silicon atom is around 0.2 nm, so that the active regions of current devices are composed of a few tens of atomic layers. The statistical and lithography-induced imperfections of the crystal lattice at this scale cause a significant variability of the operational characteristics of the transistors [12]. It is thus not surprising that GlobalFoundries, a company which, like Intel, AMD, Qualcomm, and others, produces very advanced silicon chips, is reluctant to invest further effort in shrinking the transistors below the 7 nm node [14].
The combination of these physical and technological limitations and the related
economic aspects, such as exploding fabrication costs, indicates the end of computer power growth by means of miniaturization. Other types of innovation are needed to replace the 60-year tradition of growing the number and speed of transistors. The number of the latter is only 10 times smaller than the number of neurons that has characterized the human brain for ‘at least 35,000 years’ [14]. Yet the evolution of civilization suggests that innovations should be expected first of all from new applications and new features due to software and interfaces in terms of displays and sensors. Emerg-
ing high performance, cloud computing, recognition, and navigation technologies
are the first examples. Furthermore, new chip architectures designed for special
purposes and new ways for packaging and interconnecting the chips will be pursued
to increase the efficiency. Finally new types of devices and memories based on novel
operational principles will be developed. All these ways of innovation rely on the
detailed knowledge of the processes which govern the nanometer and femtosecond
physics. Furthermore, these processes do not depend on the agglomeration of the individual phenomena, as is the case with the agglomeration of probabilities. Important now is their particular interrelation: quantum processes are ruled by the superposition of
amplitudes. A small change of a characteristic value of one of them can give rise to
a very different physical picture. A good example is the superposition of the waves
originating from two sources. A small change of the frequency of one of them gives
rise to a macroscopically different interference pattern. The experimental approach
to the analysis of such processes by repetition of many similar measurements is a
challenging task. On the contrary, such knowledge is attainable by approaches based
on simulation and modeling, which shows their importance for global physical
analysis.
In particular, their growing significance for the semiconductor industry can be traced back in the roadmap published since 1992 [15], which traditionally contains a chapter on modeling and simulation. Further details are given in the next
section.

1.2 The Role of Modeling

We summarize the factors which drive the need for development and refinement of modeling approaches for semiconductor electronics.
• The rising importance for economy and society gives rise to a rapid development. Currently, MOSFET-based VLSI circuits contribute about 90% to the market of semiconductor electronics. The general tendency of reducing the price/functionality ratio has been achieved mainly by reducing the dimensions of the circuit elements.
• The implementation of a smaller technology node requires novel equipment, which entails considerable investments. Typically, the manufacturing plant cost doubles every 3 years or so [10]. Currently the costs exceed 10 billion dollars, which gives one dollar per transistor, a ratio characterizing the novel technologies.
• The cycles of production become shorter with every new IC generation. The
transition from design to mass production (time-to-market) must be shortened, even though the time for fabrication of a wafer composed of integrated circuits increases with the complexity of the involved
processes imposed by the miniaturization. Thus the device specifications must be
close to the optimal ones in order to reduce the laboratory phase of preparation.
Here modeling is needed to provide an initial optimization.
• The cost of the involved resources increases with the complexity of the ICs, which makes the application of any standard trial-and-error experiments impossible or at least very difficult.
Modeling in microelectronics is divided into topics which reflect the steps in the organization of a given IC: (1) simulation of the technological processes for
formation and linking of the circuit elements; (2) simulation of the operation
of the individual devices; (3) simulation of the operation of the whole circuit
which typically comprises a huge number of devices. Process simulation (1) treats
different physical processes such as material deposition, oxidation, diffusion, ion
implantation, and epitaxy to provide information about the dimensions, material
composition, and other physical characteristics, including the deviation from and the variability of the ideal (targeted) device parameters. Device simulation (2) analyzes the
electrical behavior of a given device determined by the desired operating conditions
and the transport of the current through it. This information is needed at level
(3) for the development of compact models which can describe the behavior of a
complete circuit within CAD packages such as [16] for IC design. Device modeling
plays a central role in this hierarchy by focusing on the physical processes and
phenomena, which in accordance with the material and structural characteristics
determine the current transport and thus the electrical behavior of a given device.
Device modeling relies on mathematical, physical, and engineering approaches to
describe this behavior, which determines the electrical properties of the final circuit.
Nowadays semiconductor companies reduce cost and time-to-market by using a
methodology called Design Technology Co-Optimization (DTCO), which considers
the whole line from the design of the individual device and the involved material,
process, and technology specifications to the functionality of the corresponding
circuits, systems, and even products. DTCO establishes the link between circuit
performance and the transistor characteristics by circuit to technology computer-
aided design simulations necessary to evaluate the device impact on specific
circuits and systems, which are the target of the technology optimization. In
particular modeling of new materials, new patterning techniques, new transistor
architectures, and the corresponding compact models are used to consistently link
device characteristics with circuit performance. In this way the variability sources
which characterize the production flow of advanced technology nodes and impact
the final product functionality are consistently taken into account [17].
On the device level variability originates from lithography, other process imper-
fections (global variability), and from short range effects such as discreteness of
charges and granularity of matter (local or statistical variability). Device simulations
which take into account variability are already more than four-dimensional: The degrees
of freedom, introduced by the statistical distribution of certain device parameters,
add to the three-dimensional device design. The simulation of a single, nominal
device must be replaced by a set of simulations of a number of devices characterized by, e.g., geometry variations (line edge roughness) and granularity (e.g., random dopant
distributions). This adds a considerable additional simulation burden and raises
the importance of refined simulation strategies. Often the choice of the simulation
approach is determined by a compromise between physical comprehension and
computational efficiency.

1.3 Modeling of Semiconductor Devices

1.3.1 Basic Modules

Device modeling comprises two interdependent modules, which must be considered in a self-consistent way. An electromagnetic module comprises equations for the fields, which govern the flux of the current carriers, which in turn is governed by a module with the transport equations.

Fig. 1.1 Transport models

The fields are due to external factors characterizing the environment, such as the potentials applied to the contacts, the density of the carriers n, and their flux J. The latter are input quantities for the electromagnetic
module, which, in accordance with the Maxwell equations, determines the electric field E and the magnetic field H. The corresponding forces accelerate the carriers and thus
govern their dynamics. Output quantities are the current-voltage (IV) characteristics of the device, the carrier and current densities, and the carrier mean energy and velocity.
For a wide range of physical conditions, especially valid for semiconductors, the
Maxwell equations can be considered in the electrostatic limit. In what follows we
assume that there is no applied magnetic field so that, in a scalar potential gauge, the
electromagnetic module is reduced to the Poisson equation for the electric potential
V. The equation is treated in most textbooks [2], so we omit the details here.
For the purposes of this book it is sufficient to consider this module a black box
whose input parameters are the carrier concentration and the boundary conditions,
given by the terminal potentials and/or their derivatives, and whose output quantity
is the spatial distribution of the electric potential. We focus on the transport module,
and in particular on the transport models, unified in the hierarchical structure shown
in Fig. 1.1.
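Treating both modules as black boxes, the self-consistent coupling can be outlined as a
simple fixed-point loop. The sketch below is only an illustration of the structure of such
a loop; the functions solve_poisson and solve_transport are hypothetical placeholders
for the two modules and are not routines defined in this book.

```python
# Sketch of the self-consistent coupling between the electromagnetic (Poisson)
# module and the transport module.  Both solvers are treated as black boxes;
# their names and signatures are illustrative placeholders.
import numpy as np

def self_consistent_loop(solve_poisson, solve_transport, n_initial,
                         boundary_conditions, tol=1e-6, max_iter=100):
    """Iterate the two modules until the electric potential converges."""
    n = n_initial            # carrier concentration: input of the Poisson module
    V_old = None
    for _ in range(max_iter):
        # Electromagnetic module: carrier density + boundary conditions -> potential.
        V = solve_poisson(n, boundary_conditions)
        # Transport module: potential (forces) -> carrier density and flux.
        n, J = solve_transport(V)
        if V_old is not None and np.max(np.abs(V - V_old)) < tol:
            return V, n, J   # converged: fields and carriers are consistent
        V_old = V
    raise RuntimeError("self-consistent loop did not converge")
```

In practice the update of the potential is usually damped (a Gummel-type scheme) to
stabilize the iteration, but the overall structure of the loop remains the same.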

1.3.2 Transport Models

The hierarchy of transport models reflects the evolution of the field, so that their
presentation follows the evolution of microelectronics. Each of these models is
relevant for particular physical conditions. In general, the spatial and temporal
scales of carrier transport decrease from the bottom to the top of the hierarchy. At the
bottom are the analytical models, valid for large dimensions and low frequencies,
characterizing
the infancy of microelectronics.¹ For example, the Intel 4004 processor of 1971
comprises 2300 transistors, is based on a 10 µm silicon technology, and operates at a
frequency of 400 kHz. The increased physical complexity of the next device generations
becomes too complicated for an analytical description. Models based on the drift-
diffusion equation, which require a numerical treatment, become relevant. The increase
of the operating frequency imposed a system of differential equations known as the
hydrodynamic model. These models can be derived from phenomenological
considerations, or from the leading transport model, the Boltzmann equation, under the
assumption of a local equilibrium of the carriers. Further scaled devices operate at the
sub-micrometer scale, where the physical conditions challenge the assumption of
locality. Nonequilibrium effects brought by ballistic and hot electrons impose the search
for ways of solving the Boltzmann equation itself. The equation represents the most
comprehensive model, describing carrier transport in terms of classical mechanics. In
contrast to the previous models, which utilize macroscopic parameters, the equation
describes the carriers on a microscopic level by involving the concept of a distribution
function in a phase space. The current carriers are point-like particles, accelerated over
Newtonian trajectories by the electromagnetic forces, a process called drift. The drift is
interrupted by scattering processes, local in space and time, which change the particle
momentum and thus the trajectory. The scattering, caused by lattice imperfections such
as vibrations, vacancies, and impurities, is described by functions which, despite being
calculated by means of quantum mechanical considerations, have a probabilistic
meaning. As a result the equation is very intuitive and physically transparent, so that it
can be derived by using phenomenological considerations, similarly to its
hydrodynamic and drift-diffusion counterparts.
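The drift-and-scatter picture translates almost literally into a single-particle simulation
loop. The sketch below assumes, purely for illustration, a parabolic band with a constant
effective mass, a constant electric field, and a single scattering mechanism with a
constant rate, so that the free-flight duration is exponentially distributed; a realistic
simulator replaces these assumptions by full band structure and scattering models.

```python
# Sketch of the drift/scattering cycle underlying Boltzmann Monte Carlo methods.
# Parabolic band, constant field, and constant scattering rate are simplifying
# assumptions made only to keep the example short.
import numpy as np

Q = 1.602e-19            # elementary charge [C]
M = 0.26 * 9.109e-31     # illustrative effective electron mass [kg]

def simulate_particle(E_field=1e5, rate=1e13, t_total=1e-12, seed=0):
    """Follow one particle: free flights interrupted by isotropic scattering."""
    rng = np.random.default_rng(seed)
    p = np.zeros(3)                              # momentum [kg m/s]
    x = np.zeros(3)                              # position [m]
    force = np.array([-Q * E_field, 0.0, 0.0])   # force exerted by the field [N]
    t = 0.0
    while t < t_total:
        # Free-flight duration for a constant total scattering rate.
        dt = min(rng.exponential(1.0 / rate), t_total - t)
        # Drift: Newtonian trajectory under the accelerating force.
        x += (p / M) * dt + 0.5 * (force / M) * dt**2
        p += force * dt
        t += dt
        if t < t_total:
            # Scattering: randomize the momentum direction (elastic, isotropic).
            direction = rng.normal(size=3)
            p = np.linalg.norm(p) * direction / np.linalg.norm(direction)
    return x, p

x_final, p_final = simulate_particle()
print(x_final, p_final)
```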
The simulation approaches developed to solve the Boltzmann equation distinguish
the golden era of device modeling. A stable further reduction of device dimensions has
been achieved with their help. For example, the Intel Core2 brand, introduced in 2006,
contains single-, dual-, and quad-core processors which are based on a 45 nm
technology and strained silicon, contain of the order of 10⁹ transistors, and reach an
operating frequency of 3 GHz.
The nanometer scale of the active regions of today's devices and the terahertz scale
of the electromagnetic radiation reached manifest the beginning of the nano-era of
semiconductor electronics. The physical limits of the materials, key concepts, and
technologies associated with the revolutionary development of the field are now
being approached. Novel effects due to the granularity of matter, the finite size of the
carriers, the finite duration of scattering, ultra-fast evolution, and other phenomena
which are beyond the Boltzmann model of classical transport arise. Quantum processes
begin to dominate the carrier kinetics and need a corresponding

¹ It should be noted that analytical models are widely used even now, however, for the simulation
of complete circuits comprised of a huge number of transistors. These are the so-called compact
models, partially developed by using transistor characteristics provided by device simulations and
measurements.
description. This is provided at different levels of complexity by the models in the
upper half of Fig. 1.1.
Quantum transport models are traditionally apprehended as ‘difficult challenges’
[15]. Difficulties arise already in the choice between the many formulations of
quantum mechanics, where the concepts and notions vary from operators in a
Hilbert space to functions in a phase space. There is a considerable lack of physical
transparency regarding how to link the formalism with the concrete simulation task.
Once the formalism for a given model is chosen, a challenge becomes the level of
physical comprehension, which is usually inversely related to the numerical
convenience. Here the choice is between comprehensive physics and efficient
numerical behavior. Finally, all quantum-mechanical computations face the so-called
‘sign problem’, where large numbers with opposite signs cancel to give a small mean
value. A typical example is the effort to evaluate the exponential of a large negative
number using the Taylor expansion.
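This example is easily reproduced. For a large positive x the Taylor series of exp(-x)
consists of huge terms with alternating signs which must cancel almost completely; in
double-precision arithmetic the cancellation destroys the result, while the
mathematically equivalent reformulation 1/exp(x) is harmless. The value x = 30 below
is an arbitrary illustration.

```python
# Illustration of the 'sign problem': catastrophic cancellation when summing
# the alternating Taylor series of exp(-x) for a large argument x.
import math

def exp_taylor(x, terms=200):
    """Sum the Taylor series of exp(x) term by term."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)
    return total

x = 30.0
naive = exp_taylor(-x)        # huge alternating terms, tiny true result
stable = 1.0 / exp_taylor(x)  # same series, but free of cancellation
print(naive, stable, math.exp(-x))
# The naive sum is dominated by round-off noise, whereas the reformulated
# computation reproduces math.exp(-x) to full precision.
```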

1.3.3 Device Modeling: Aspects

The importance of the mathematical methods in the field of device modeling
is twofold. The level of complexity of a given model relies on a consistent set
of assumptions and approximations to reduce the general physical description
to a computationally affordable formulation of the task. Furthermore, efficient
computational approaches are needed. In particular, the models in Fig. 1.1 can
be evaluated only numerically. Deterministic methods based on finite differences
and finite elements have been successfully developed for the drift-diffusion and
hydrodynamic models [18]. The development of numerical approaches reflects the
increase of the level of physical comprehension, which in particular imposes a
transition from a one-dimensional to a three-dimensional description. In addition, the
progress of the computational resources offers novel ways to improve the efficiency,
based on high-performance computing approaches.
Classical transport at the sub-micrometer scale is characterized by a local non-
equilibrium and needs a microscopic description via the distribution function and
the phase space of particle coordinates and momenta. Thus, when considering
the time variable for transient processes, the Boltzmann equation becomes seven-
dimensional, which poses an enormous computational burden for deterministic
methods.² This, together with the probabilistic character of the equation itself,
stimulated the development of stochastic approaches for finding the distribution
function and the physical averages of interest. The ability of the Monte Carlo
methods to solve the Boltzmann equation efficiently boosted the development of

² Deterministic approaches to the problem have been developed recently [19, 20]. They rely on the
power of modern computational platforms to compute the distribution function in the cases in
which a high precision is needed.
the field. This provides a powerful tool for the analysis of different processes and
phenomena, the development of novel concepts and structures, and the design of novel
devices. In parallel, already 40 years have been devoted to the refinement and
development of novel algorithms which meet the challenges posed by the evolution
of the physical models. One of our goals is to follow the thread from the first intuitive
algorithms, devised by phenomenological considerations, to the formal iteration
approach, which unifies the existing algorithms and provides a platform for devising
novel ones.
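A rough count makes the dimensionality argument concrete; the resolution of 100
points per phase space dimension used below is an arbitrary, illustrative choice.

```python
# Back-of-the-envelope cost of a deterministic, grid-based solution of the
# transient Boltzmann equation: 3 position + 3 momentum dimensions per time step.
points_per_dim = 100                    # illustrative resolution
phase_space_cells = points_per_dim ** 6
bytes_per_value = 8                     # double precision
memory_gib = phase_space_cells * bytes_per_value / 1024**3
print(f"{phase_space_cells:.1e} cells, about {memory_gib:.0f} GiB per time step")
# Around 1e12 cells and several thousand GiB: storing the full distribution
# function on such a grid is prohibitive, while a stochastic method only
# samples it where needed.
```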
The conformity between the physical and numerical aspects characterizing the
classical models is lacking for the models in the upper, quantum part of Fig. 1.1.
The reasons are that this area of device modeling is relatively young and still under
development, that quantum mechanics exists in a variety of representations, that the
physical and numerical complexity is high, and that the lack of physical transparency
does not allow the development of intuitive approaches. Thus, a universal quantum
transport model is missing, along with a corresponding numerical method for solving
it.
The NonEquilibrium Green's Functions (NEGF) formalism provides the most
comprehensive physical description and thus is widely used in almost all fields of
physics, from many-body theory to seismology. Introduced by methods of many-body
perturbation theory [21], the theory has been re-introduced starting from the
one-electron Schrödinger equation [22] in terms convenient for device modeling,
so that it soon became the approach of choice for a vast number of researchers in the
nanoelectronics community. The formalism accounts for both spatial and temporal
correlations as well as for decoherence processes such as the interaction with lattice
vibrations (phonons). In general there are two position and two time coordinates, which
indicates a non-Markovian evolution. A deterministic approach to the model, based
on a recursive algorithm, allows the simulation of two-dimensional problems (four
spatial variables) in the case of ballistic transport, where processes of decoherence
are neglected [23].
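As a minimal illustration of the coherent (ballistic) NEGF machinery, and not of the
recursive algorithm of [23] itself, the following sketch computes the transmission
through a short one-dimensional tight-binding chain attached to two semi-infinite
leads; the chain length, hopping energy, and barrier profile are invented for the example.

```python
# Sketch: ballistic transmission through a 1D tight-binding chain with NEGF.
# The chain length, hopping energy, and barrier profile are illustrative only.
import numpy as np

def surface_green(E, eps, t, eta=1e-9):
    """Retarded surface Green's function of a semi-infinite 1D lead."""
    z = E + 1j * eta - eps
    root = np.sqrt(z**2 - 4 * t**2 + 0j)
    g1 = (z + root) / (2 * t**2)
    g2 = (z - root) / (2 * t**2)
    return g1 if g1.imag < g2.imag else g2   # retarded branch: negative Im part

def transmission(E, onsite, t=-1.0, eta=1e-9):
    N = len(onsite)
    H = np.diag(onsite) + np.diag([t] * (N - 1), 1) + np.diag([t] * (N - 1), -1)
    g = surface_green(E, 0.0, t, eta)
    Sigma_L = np.zeros((N, N), complex); Sigma_L[0, 0] = t**2 * g
    Sigma_R = np.zeros((N, N), complex); Sigma_R[-1, -1] = t**2 * g
    G = np.linalg.inv((E + 1j * eta) * np.eye(N) - H - Sigma_L - Sigma_R)
    Gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)
    Gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)
    return float(np.trace(Gamma_L @ G @ Gamma_R @ G.conj().T).real)

# Example: a 20-site chain with a small potential barrier in the middle.
onsite = np.zeros(20)
onsite[8:12] = 0.5
print(transmission(E=0.3, onsite=onsite))
```

Extending such a calculation to two dimensions, to self-consistency with the Poisson
equation, or to scattering self-energies is where the numerical burden discussed below
arises.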
The numerical burden characterizing two-dimensional problems can be further
reduced by a variable decomposition which singles out a transport direction, by
assuming that the potential is homogeneous in the direction normal to the transport
direction. The inclusion of decoherence processes, such as the interaction with phonons
at different levels of approximation [24, 25], seriously impedes the computational
process. The same holds true for the self-consistent coupling with the Poisson equation.
In general, the NEGF formalism provides a feasible modeling approach for near-
equilibrium transport in near-ballistic regimes, when the coupling with the sources of
scattering is weak.
The next level of description simplifies the physical picture at the expense of the
correlations in time. The density matrix and the Wigner function formalisms, which are
unitarily equivalent and linked by a Fourier transform, maintain a Markovian evolution
in time, which makes them convenient for the treatment of time-dependent problems.
The former is characterized by two spatial coordinates, while in the latter one of the
spatial coordinates is replaced by a momentum; the latter is thus known as ‘quantum
mechanics in a phase space’.
The density matrix describes mixed states, consisting of statistical ensembles of
solutions of the Schrödinger equation, known as pure states, and thus introduces
statistics into quantum mechanics. In this way the interaction of a system A with
another system B can be associated with a statistical process in A, described in
terms of A-states. The density matrix is the preferred approach for many problems,
such as the analysis of decoherence, and is widely used by the quantum information
and quantum optics communities. In particular, the semiconductor Bloch equations
describe the evolution of the density matrices corresponding to interacting systems of
electrons, holes (introduced in the next section), and the polarization induced by
electromagnetic radiation (a laser pulse), under the decoherence processes of interaction
with an equilibrium phonon system. The equation set provides the basis for the
development of terahertz quantum cascade lasers and for the analysis of ultra-fast
coherent and decoherent carrier dynamics in photo-excited semiconductors [26–28].
Historically, the Wigner function has been introduced via the density matrix
[29]. Actually, Wigner's development [30] was not aimed at a novel formulation
of quantum mechanics, but rather at the analysis of quantum corrections to classical
statistical mechanics. The goal was to link the Schrödinger state to a probability
distribution in phase space. The theoretical notions of the early Wigner theory were
derived on top of the operator mechanics [31] and thus were often accepted as an exotic
extension of the latter. It took more than two decades to prove the independence
of the theory. Due to the works of Moyal and Groenewold, the formalism has been
established as a self-contained formulation of quantum mechanics in terms of the
Moyal bracket and the star-product [32–34]. In this way the Wigner formalism is
fully equivalent to the operator mechanics, which can be derived from the phase
space quantum notions [31].
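For later reference, the standard textbook convention (in one-dimensional notation, and
with a normalization that may differ from the one adopted later in this book) expresses
the Wigner function as a Fourier transform of the density matrix with respect to the
relative coordinate:

\[
f_W(x,p) \;=\; \frac{1}{2\pi\hbar}\int_{-\infty}^{\infty}
\rho\!\left(x+\frac{s}{2},\,x-\frac{s}{2}\right) e^{-ips/\hbar}\,\mathrm{d}s ,
\]

so that integrating f_W over the momentum recovers the diagonal of the density matrix,
i.e., the particle density, while integrating over the position gives the momentum
distribution.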
We follow the more intuitive historical way to comment on the basic notions of the
phase space theory. In the Wigner picture both observables and states are functions
of the position and the momentum, which define the phase space. The main difficulty
is to establish a map between the set of the wave mechanical observables, which
are functions of the position and momentum operators, and the set of the real
functions of the position and momentum variables. Since the former operators do not
commute, algebraically equivalent functions of these variables can give rise to different
operators. The correspondence is established by the Weyl transform [35], but it should
be noted that alternative ways of mapping can be postulated, giving rise to different
phase space theories. The interest in the Wigner function (and also in alternative
phase space formulations) stems from the rising importance of problems and phenomena
which require a nonequilibrium statistical description. Furthermore, many classical
concepts and notions remain in the quantum counterpart. In particular, physical
observables are represented in both cases by the same phase space functions, which
gives a very useful way to outline and analyze quantum effects.
It is then surprising that the Wigner formalism has been applied in the field of
device modeling relatively late, only in the last decade of the twentieth century
[36], especially having in mind that this quantum theory is the logical counterpart
of the Boltzmann transport theory. The correspondence is so natural that the Wigner
function is often called a quasi-distribution function. Especially convenient is the
ability of the function to analyze collision dynamics in a variety of physical
systems [37]. This has been used to associate decoherence with the coherent Wigner
equation. A decoherence term is introduced, which, moreover, can be represented by
the same operator used to describe the classical scattering dynamics. Initially it
has been introduced by virtue of phenomenological considerations [36], while the
theoretical derivation of the generalized equation, called the Wigner-Boltzmann
equation, has been accomplished later as a result of a hierarchy of assumptions and
approximations [3, 38]. The corresponding hierarchy of models is presented in
Fig. 1.2. These models are ordered according to the complexity of the description of
the processes of decoherence, while the operators describing the coherent part of the
dynamics remain on a rigorous quantum level. A classical limit of these operators
recovers the Boltzmann equation, posed at the bottom of the hierarchy to outline the
origin of the Monte Carlo methods for Wigner transport.
The first applications of the Wigner formalism for device modeling are charac-
terized by a deterministic approach to coherent transport problems. This approach
was successfully applied for the simulation of a typically quantum device, the resonant-
tunneling diode [36, 39]. Certain problems related to the boundary conditions and
to the stability of the discretization scheme were also resolved. The interest in
stochastic approaches to Wigner transport was announced only several years
after that [40]. The advantage of such approaches is that all classical approaches and
algorithms for scattering from the lattice can be directly adopted in the quantum
case. Thus the basic challenge for the stochastic approach is posed by the coherent
problem. The work on their development was initiated in collaboration between the
Bulgarian Academy of Sciences and the University of Modena [40–42]. Groups
from the Technische Universität Wien and Arizona State University also joined
this activity [43]. A significant contribution to the development of efficient
stochastic Wigner algorithms and their applications came from the University of
Paris-Sud [3].
Two Monte Carlo approaches, based on the concepts of affinity and of signed
particles, have been developed at the beginning of the twenty-first century and have
proved their ability to solve actual problems of quantum electronics; see [44] and
the references therein. Here we focus on the second concept, which will be derived
by a formal application of the Monte Carlo theory to an integral equation adjoint to
the integral form of the Wigner equation. Very important for the development of this
derivation is the experience with the classical algorithms, which suggests following
the historical path of development of those approaches.
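To indicate the direction taken in the following chapters, without anticipating the
specific Wigner algorithms, the sketch below shows the generic Monte Carlo estimation
of a functional of the solution of a Fredholm integral equation of the second kind,
f = f0 + Kf, by sampling the terms of its Neumann series with random walks. The
kernel, free term, and weight function are artificial and serve only to make the fragment
runnable.

```python
# Generic Monte Carlo estimator for a functional (h, f) of the solution of
# f(x) = f0(x) + \int_0^1 k(x, y) f(y) dy, based on its Neumann series.
# Kernel, free term, and weight function are artificial examples.
import numpy as np

def f0(x):            # free term
    return 1.0 + 0.5 * x

def k(x, y):          # kernel, chosen so that the Neumann series converges
    return 0.4 * np.exp(-(x - y) ** 2)

def h(x):             # weight defining the functional of interest (h, f)
    return np.sin(np.pi * x)

def mc_functional(n_walks=100_000, p_stop=0.5, seed=2):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_walks):
        x = rng.random()                         # initial point, uniform on [0, 1]
        weight = h(x)                            # initial weight: h(x) / pdf(x)
        score = weight * f0(x)                   # contribution of the zeroth term
        while rng.random() > p_stop:             # continue with probability 1 - p_stop
            y = rng.random()                     # next point, uniform on [0, 1]
            weight *= k(x, y) / (1.0 - p_stop)   # kernel / (transition pdf * survival)
            score += weight * f0(y)              # contribution of the next term
            x = y
        total += score
    return total / n_walks

print(f"estimated (h, f) = {mc_functional():.4f}")
```

Loosely speaking, the stochastic Wigner algorithms discussed later follow the same
pattern, with the kernel of the (adjoint) Wigner equation generating the particle weights
and, in particular, their signs.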