
Available online at www.sciencedirect.com

ScienceDirect

Procedia Computer Science 207 (2022) 754–768

www.elsevier.com/locate/procedia

26th International Conference on Knowledge-Based and Intelligent Information & Engineering Systems (KES 2022)
Probabilistic Properties of Deterministic and Randomized Quantizers
Elżbieta Kawecka a,*, Jerzy Podhajecki a

a The Jacob of Paradies University, Faculty of Technology, Teatralna 25, 66-400 Gorzów Wielkopolski, Poland

Abstract

The characteristics of the quantization process are one of the major factors determining the precision of a measurement in a channel with an A/D converter. The properties of this process are changed by the modification of the probabilistic characteristics of the quantizer with the aim of obtaining the desired properties of the error of the quantization process. The present paper presents the state-of-the-art in the area as well as some original contributions by the authors, including the analysis of the quantizer randomized by Gauss and Simpson distributed random signals.
© 2022 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0)
Peer-review under responsibility of the scientific committee of the 26th International Conference on Knowledge-Based and Intelligent Information & Engineering Systems (KES 2022)

1. Introduction
The dynamic progress in measurement technology has been accompanied by the development of new signal quantization methods and the improvement of established ones, which improves the measurement precision. One of them is randomized quantization, which may be thought of as a programming method of modifying the quantization process. This type of quantization is based on introducing a specific random signal into the quantizer which changes its probabilistic properties [4].

Zamir and Feder [1] presented a simple scheme for universal coding with distortion of a source that may take continuously many values. The rate of this universal coding scheme was tested, and a general expression was derived for it. Berndt et al. [2] proposed a differentially randomized quantization scheme that is optimal in the case of quantization errors becoming uncorrelated to the input signal and having a constant variance that is entirely determined by the quantizer resolution. Chiorboli [3] presented the recovery of the mean value and the variance of an input signal from the output data of a granular quantizer. Known results were adapted to random Gaussian and uniformly distributed signals and to deterministic sinusoidal signals. Akyol and Rose [5] introduced theoretical results of a direct
and simple connection between the optimal constrained quantizers and their unconstrained counterparts. Numerical
results for the Gaussian source showed that the proposed constrained randomized quantizer outperforms the
conventional dithered quantizer, as well as the constrained deterministic quantizer. Choi et al. [19] presented lossy compression of deep neural networks (DNNs) by weight quantization and lossless source coding for memory-efficient deployment and introduced universal DNN compression by universal vector quantization and universal source coding. In particular, they showed universal randomized lattice quantization of DNNs, which randomizes DNN weights by uniform random dithering before lattice quantization and can perform near-optimally on any source without relying on knowledge of its probability distribution. Bäckström et al. [20] proposed a hybrid coding approach in which low-energy samples are quantized using dithering, instead of the conventional uniform quantizer; for dithering, they apply 1-bit quantization in a randomized sub-space. Further, it was shown that the output energy can be adjusted to the desired level using a scaling parameter. Objective measurements and listening tests demonstrated the advantages of the proposed methods.
Another technique, dither quantization, is a separate method of modifying the measurement reliability which belongs neither to the programming methods nor to the system methods [8, 10]. The dithering technique is based on quantizing the measured signal in the presence of a random signal added to the signal being quantized before the A/D conversion. This quantization approach was actively developed in the 1990s because of the low resolution of available quantizers. Gray and Stockham [6] published new proofs that are based on elementary Fourier series and Rice's characteristic function method and do not require generalized functions or sampling theorem arguments, and provided a unified derivation and presentation of the two forms of dithered quantizer noise based on elementary Fourier techniques. Wagdy [7] published an investigation of the quantization error of a quantizer (ideal A/D converter). Correlation between quantization error and quantizer input was considered. Spectra of the average quantization error, corresponding to an arbitrary input signal, were analyzed. Different dither forms (Gaussian, uniform, and discrete) were compared. Wagdy also introduced an investigation into the effects of additive dither on nonideal ADCs [9], i.e., ADCs with nonlinearity errors. Effects of dither for different error levels and bit orders were tested, the added resolution was calculated, and the effects of dither on ADCs with errors in more than one bit were explored. Molev-Shteiman et al. [18] proposed an original equivalent model for a quantizer with noisy input (the desired signal corrupted by measurement noise) and introduced the quantizer output as a sum of the desired signal after it passes through a nonlinear element with a known equivalent transfer function and an equivalent additive white noise. The equivalent transfer function adopts the form of a conditional expectation of the quantizer output given the desired signal portion of its input.
These days the A/D converters with dither do not constitute a major proportion of the quantizer market but are still
available from manufacturers [11-13].
In the studied literature, issues concerning quantizer randomization are not analyzed as extensively as in the approach proposed by the authors. A new approach to the analysis of the quantization method has been introduced.
In this paper we use randomized quantization to reduce the negative effect of quantization. Randomization of a
quantizer results in a new distribution of quantization error. Using a correctly adjusted quantization error distribution
may lead to the reduction or even elimination of quantization error. We present a discussion on quantizer
randomization using signals with various probability distributions. In particular we discuss Gauss and Simpson
distributed random signals. The densities and distribution parameters of these randomizations have been established
and discussed. We have also demonstrated how randomized quantization can be related to dithered signal quantization
and compared the effects of both approaches. The rationale for taking up this subject was the observation that the
estimation of the density and the quantization error distribution parameters with the use of quantized data may lead to
ambiguous results. This happens when the size of the dataset is small, the datasets are incomplete, there is no
possibility to acquire data in reproducible measurement conditions, the quantizer loses the output codes or has a low
resolution.

2. Deterministic and randomized quantizers

Quantization is an operation which makes it possible to transform an original variable x into a quantized variable
xq which is discretely valued, and its levels correspond to the quantization intervals. Most often the quantization

intervals have equal length q, which is also called the quantization step, and the quantization operation consists in
assigning to x the closest xq value. This corresponds to the mathematical operation of rounding.
Quantization may be deterministic [14] or randomized [4]. In a deterministic rounding quantizer the quantization
levels are set. If x is the variable which is being quantized, then the result is xq usually determined by:

$x_q = q \cdot \left[ \frac{x}{q} + \frac{1}{2} \right]$,  (1)

where [] is the round function.

Quantization entails an inevitable error:

$e_q = x_q - x$,  (2)

called the quantization error. For the deterministic quantizer, eq is equivalent to the quantization noise εq. The εq noise is the difference between the quantized and original value of the variable. This type of quantization is the subject of the Widrow quantization theorem [14].
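For illustration, a minimal Python sketch of the rounding quantizer (1) and its error (2). It assumes that the bracket [·] in (1) acts as the floor, so that x is mapped to the nearest multiple of the quantization step q; this is a sketch of the operation described above, not code from the paper.

```python
import numpy as np

def quantize_det(x, q):
    """Deterministic rounding quantizer, Eq. (1): xq = q * [x/q + 1/2].
    The bracket is interpreted as the floor, which rounds x to the
    nearest multiple of the quantization step q."""
    return q * np.floor(x / q + 0.5)

q = 1.0
x = np.array([0.20, 0.49, 0.51, 1.26])
xq = quantize_det(x, q)     # [0. 0. 1. 1.]
eq = xq - x                 # quantization error, Eq. (2), |eq| <= q/2
print(xq, eq)
```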
In the randomized quantizer the set of levels with which the input signal is being compared is random. The
randomization is obtained by introducing a random signal with suitable characteristics (Fig. 1). The nomenclature
used for this technology follows from this fact [4]. The quantizer outputs the following variable:

$x_q = q \cdot \left[ \frac{x}{q} + 1 - \Theta \right]$,  (3)

where Θ is the parameter assuming the values of the random signal introduced to the quantizer. For the randomized quantizer the eq error is calculated with the use of (2) and is equivalent to the quantization noise εq.
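A corresponding sketch of the randomized quantizer (3), again under the assumption that the bracket denotes the floor; Θ is drawn independently for every sample, here from the uniform distribution on [0, 1]:

```python
import numpy as np

def quantize_rand(x, q, theta):
    """Randomized quantizer, Eq. (3): xq = q * [x/q + 1 - Theta],
    with the bracket interpreted as the floor."""
    return q * np.floor(x / q + 1.0 - theta)

rng = np.random.default_rng(0)
x = np.full(5, 0.3)                     # the same input value quantized five times
theta = rng.uniform(0.0, 1.0, x.size)   # randomizing signal
print(quantize_rand(x, 1.0, theta))     # outputs 0 or 1, depending on Theta
```

With Θ uniform on [0, 1], the expected value of the output equals x itself, which is the mechanism behind the bias reduction analysed later in the paper.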

Fig. 1. Randomized quantizer.

This method of quantization may be considered as quantization with dither with the use of the following dither signal:

$d = q \cdot \left( \frac{1}{2} - \Theta \right)$,  (4)

which is added to the input variable x and quantized together with this value (Fig. 2).

Fig. 2. Quantization with added dither.

Moreover, the properties of a randomized quantizer are then identical to those of a deterministic quantizer applied to the sum of the input signal and an appropriately defined dither signal. This can be shown in the following way:

$y_q = q \cdot \left[ \frac{y}{q} + \frac{1}{2} \right] = q \cdot \left[ \frac{x+d}{q} + \frac{1}{2} \right] = q \cdot \left[ \frac{x}{q} + \frac{1}{q}\left(q\left(\frac{1}{2} - \Theta\right)\right) + \frac{1}{2} \right] = q \cdot \left[ \frac{x}{q} + 1 - \Theta \right]$.  (5)

In the case of quantization with a dither signal, the error eq is determined by the relation [8]:

$e_q = y_q - y = y_q - (x + d)$.  (6)

Since εq = yq − x, eq is not equivalent to the noise εq, but rather εq = eq + d. Similarly to randomized quantization, dithering requires the application of averaging of the quantized data, and the best results are usually obtained by employing a Gaussian signal. It should be stressed that so far papers dealing with A/D conversion have mostly focused on the description
of the application of this technique to the reduction of error eq, whose magnitude is significant in the case of low-
resolution quantizers. However, the authors of this paper aim at changing the probabilistic properties of the quantizer
in such a way as to be able to obtain new forms of the eq error distribution. The application of a multi-bit quantizer, in
which the signal being quantized passes through many quantization levels, guarantees obtaining the correct form of
the eq error distribution.
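The equivalence expressed by (4)-(5) can also be verified numerically. The sketch below is an added illustration that reuses the floor-based interpretation of the bracket from the earlier snippets: quantizing x with the randomized quantizer gives the same result, sample by sample, as deterministically quantizing y = x + d with d = q(1/2 − Θ).

```python
import numpy as np

q = 0.5
rng = np.random.default_rng(1)
x = rng.uniform(-3.0, 3.0, 10_000)
theta = rng.uniform(0.0, 1.0, x.size)

# Randomized quantizer, Eq. (3)
xq_rand = q * np.floor(x / q + 1.0 - theta)

# Dithered deterministic quantizer, Eqs. (1) and (4): d = q * (1/2 - Theta)
d = q * (0.5 - theta)
yq = q * np.floor((x + d) / q + 0.5)

print(np.allclose(xq_rand, yq))   # True: Eq. (5) holds for every sample
```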
Figs. 3 and 4 illustrate the processes of deterministic and randomized quantization.
Fig. 3. Deterministic quantization.

Fig. 4. Randomized quantization.

It should be noticed that in the case of the deterministic quantizer, the eq error assumes values from the interval [-q/2,
q/2]. In the case of the randomized quantizer it is assumed that the eq error takes values from the interval [-q, q]. The
randomization of a quantizer results not only in an extension of the interval of the eq error, but also in an alteration of
the form of the probability density function of this quantity.

3. The analysis of the properties of the deterministic and randomized quantizers

3.1. Deterministic quantization

Since x and xq are random variables, eq is also a random variable. Snyder and Sripad proved that if the random variables x and eq are mutually dependent, the probability density function $f_{e_q}(x)$ of the random variable eq can be expressed by the following formula [15]:

$f_{e_q}(x) = g(x)\left(1 + \sum_{i=-\infty,\, i \ne 0}^{\infty} \Phi_x\!\left(\frac{2\pi}{q} i\right) e^{\,j\left(\frac{2\pi}{q} i\right)x}\right)$,  (7)

where $j = \sqrt{-1}$ and:

$g(x) = \begin{cases} \frac{1}{q} & \text{for } |x| \le \frac{q}{2}, \\ 0 & \text{for } |x| > \frac{q}{2}, \end{cases}$  (8)

is the probability density of the uniform distribution in the interval [-q/2, q/2] (Fig. 5), while $\Phi_x(v)$ is the characteristic function of the random variable x [16].

Fig. 5. The probability density function of the uniform distribution in the interval [-q/2, q/2].

Using (7), the characteristic function of the random variable eq can be determined [14]:

$\Phi_{e_q}(v) = \int_{-\infty}^{+\infty} f_{e_q}(x)\, e^{jvx}\, dx = \sum_{i=-\infty}^{\infty} \Phi_x\!\left(\frac{2\pi}{q} i\right) \operatorname{sinc}\!\left(\frac{qv}{2} + \pi i\right)$,  (9)

where:

$\operatorname{sinc}(z) = \begin{cases} \frac{\sin(z)}{z} & \text{for } z \ne 0, \\ 1 & \text{for } z = 0, \end{cases}$  (10)

is the interpolating function [17].


Calculating the first and second derivatives of (9), we obtain the moments of the random variable eq [14]:

$\mu_{e_q} = E[e_q] = -j \left.\frac{d}{dv}\Phi_{e_q}(v)\right|_{v=0} = \frac{q}{j2\pi} \sum_{i=-\infty,\, i \ne 0}^{\infty} \Phi_x\!\left(\frac{2\pi}{q} i\right)\frac{(-1)^i}{i}$,  (11)

$\overline{e_q^2} = E[e_q^2] = -\left.\frac{d^2}{dv^2}\Phi_{e_q}(v)\right|_{v=0} = \frac{q^2}{12} + \frac{q^2}{2\pi^2} \sum_{i=-\infty,\, i \ne 0}^{\infty} \Phi_x\!\left(\frac{2\pi}{q} i\right)\frac{(-1)^i}{i^2}$.  (12)

Quantities (11) and (12) are, respectively, the mean and mean square of the random variable eq.
The variance of the random variable eq is calculated from the formula:

$\mathrm{Var}[e_q] = E[e_q^2] - E^2[e_q] = \frac{q^2}{12} + \frac{q^2}{2\pi^2} \sum_{i=-\infty,\, i \ne 0}^{\infty} \Phi_x\!\left(\frac{2\pi}{q} i\right)\frac{(-1)^i}{i^2} + \frac{q^2}{4\pi^2} \sum_{i=-\infty,\, i \ne 0}^{\infty} \sum_{k=-\infty,\, k \ne 0}^{\infty} \Phi_x\!\left(\frac{2\pi}{q} i\right)\Phi_x\!\left(\frac{2\pi}{q} k\right)\frac{(-1)^{i+k}}{ik}$.  (13)

If the random variables x and eq are uncorrelated, which corresponds to $\Phi_x(v) = 0$ for $v = 2\pi i/q$, $i = \pm 1, \pm 2, \ldots$, the function $f_{e_q}(x)$ assumes the form (8). Furthermore [14]:

$\Phi_{e_q}(v) = \operatorname{sinc}\!\left(\frac{qv}{2}\right)$,  (14)

and:

$\mu_{e_q} = 0$,  (15)

$\mathrm{Var}[e_q] = \overline{e_q^2} = \frac{q^2}{12}$.  (16)

Using (14) the k-th moment of the random variable eq can be calculated:

$E[e_q^k] = (-j)^k \left.\frac{d^k}{dv^k}\Phi_{e_q}(v)\right|_{v=0} = q^k\, \frac{1}{2^{k+1}} \cdot \frac{1+(-1)^k}{k+1}, \quad k \in \mathbb{N}\setminus\{0\}$.  (17)
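The closed-form moments (15)-(17) can be checked with a quick Monte Carlo sketch (an added illustration, using the floor-based quantizer from the earlier snippets). A zero-mean Gaussian input with a standard deviation much larger than q makes $\Phi_x(2\pi i/q)$ practically vanish, so the empirical moments of eq should approach 0, q²/12 and, for k = 4, q⁴/80:

```python
import numpy as np

rng = np.random.default_rng(2)
q = 1.0
x = rng.normal(0.0, 10.0 * q, 1_000_000)   # sigma >> q, so Phi_x(2*pi*i/q) ~ 0
xq = q * np.floor(x / q + 0.5)             # deterministic quantizer, Eq. (1)
eq = xq - x

print(eq.mean())        # ~ 0,         Eq. (15)
print(eq.var())         # ~ q**2/12,   Eq. (16)
print(np.mean(eq**4))   # ~ q**4/80,   Eq. (17) with k = 4
```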

3.2. Quantization randomized with uniformly distributed random signal

Let Θ be a random variable with the uniform distribution over the interval [0, 1]. Let us consider a new random
variable d with the density:

$f_d(x) = \begin{cases} \frac{1}{q} & \text{for } |x| \le \frac{q}{2}, \\ 0 & \text{for } |x| > \frac{q}{2}, \end{cases}$  (18)

obtained from the transformation of the random variable Θ. The random variable d has the uniform distribution
over the interval [-q/2, q/2].
As a result of the convolution operation:
$\int_{-q}^{q} g(u)\, f_d(x-u)\, du$,  (19)

of the functions (8) and (18) we get a new form of the function (Fig. 6) [3]:

$g(x) = \begin{cases} \frac{1}{q} - \frac{|x|}{q^2} & \text{for } |x| \le q, \\ 0 & \text{for } |x| > q, \end{cases}$  (20)

which, according to (7), we use to determine the probability density function $f_{e_q}(x)$ of the random variable eq. Function (20) is the convolution of two uniform distributions over the same interval [-q/2, q/2]. It is also the Simpson distribution over the interval [-q, q] [16]. The random variable eq assumes values from the interval [-q, q].

Fig. 6. The probability density function of the Simpson distribution in the interval [-q, q].

The characteristic function of the random variable eq can be defined by the following formula [3]:

$\Phi_{e_q}(v) = \sum_{i=-\infty}^{\infty} \Phi_x\!\left(\frac{2\pi}{q} i\right) \operatorname{sinc}^2\!\left(\frac{qv}{2} + \pi i\right)$.  (21)

The mean and mean square of the random variable eq are:

$\mu_{e_q} = 0$,  (22)

$\overline{e_q^2} = \frac{q^2}{6} - \frac{q^2}{2\pi^2} \sum_{i=-\infty,\, i \ne 0}^{\infty} \Phi_x\!\left(\frac{2\pi}{q} i\right)\frac{1}{i^2}$.  (23)

The variance of the random variable eq is equal to the mean square of this quantity:

$\mathrm{Var}[e_q] = \overline{e_q^2}$.  (24)

If $\Phi_x(v) = 0$ for $v = 2\pi i/q$, $i = \pm 1, \pm 2, \ldots$, the function $f_{e_q}(x)$ assumes the form (20). Furthermore:

$\Phi_{e_q}(v) = \operatorname{sinc}^2\!\left(\frac{qv}{2}\right)$,  (25)

$\mathrm{Var}[e_q] = \overline{e_q^2} = \frac{q^2}{6}$.  (26)

Using (25), the k-th moment of the random variable eq can be determined:

$E[e_q^k] = q^k\, \frac{1+(-1)^k}{(k+1)(k+2)}$.  (27)

In this case, the effects of randomized quantization correspond to quantization with a uniformly distributed dither
signal with the amplitude q/2. Dither with this amplitude is used to reduce the nonlinear distortions in A/D converters
[8].
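A minimal simulation (an added illustration, using the same floor-based sketch of (3) as before) of the quantizer randomized with Θ uniform on [0, 1]: for a smooth input the error is confined to [-q, q] and its variance approaches q²/6, in line with (20) and (26).

```python
import numpy as np

rng = np.random.default_rng(3)
q = 1.0
x = rng.normal(0.0, 10.0 * q, 1_000_000)
theta = rng.uniform(0.0, 1.0, x.size)       # uniformly distributed randomizing signal
eq = q * np.floor(x / q + 1.0 - theta) - x  # Eqs. (3) and (2)

print(eq.min(), eq.max())   # error confined to [-q, q]
print(eq.var())             # ~ q**2/6, Eq. (26)
```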

3.3. Quantization randomized with the Simpson-distributed random signal

Let Θ be a random variable with the Simpson distribution over the interval [0, 1]. Let us consider a new random
variable d with the density:

$f_d(x) = \begin{cases} \frac{2}{q} - \frac{4|x|}{q^2} & \text{for } |x| \le \frac{q}{2}, \\ 0 & \text{for } |x| > \frac{q}{2}, \end{cases}$  (28)

resulting from the transformation of the random variable Θ. The random variable d has the Simpson distribution
over the interval [-q/2, q/2].
As a result of the convolution operation of the functions (8) and (28), we obtain a new form of the function (Fig. 7):

$g(x) = \begin{cases} \frac{q^2 - 2x^2}{q^3} & \text{for } |x| < \frac{q}{2}, \\ \frac{2(q-|x|)^2}{q^3} & \text{for } \frac{q}{2} \le |x| \le q, \\ 0 & \text{for } |x| > q, \end{cases}$  (29)

which, according to (7), is applied to determine the probability density function $f_{e_q}(x)$ of the random variable eq. Function (29) is the convolution of the uniform distribution over the interval [-q/2, q/2] and the Simpson distribution over the interval [-q/2, q/2]. The random variable eq assumes values from the interval [-q, q].

Fig. 7. Convolution of the uniform distribution in the interval [-q/2, q/2] and the Simpson distribution in the interval [-q/2, q/2].

The characteristic function of the random variable eq is given by:

$\Phi_{e_q}(v) = \sum_{i=-\infty}^{\infty} \Phi_x\!\left(\frac{2\pi}{q} i\right) \operatorname{sinc}\!\left(\frac{qv}{2} + \pi i\right) \operatorname{sinc}^2\!\left(\frac{qv}{4} + \frac{\pi}{2} i\right)$.  (30)

The mean and mean square of the random variable eq are:

$\mu_{e_q} = \frac{q}{j\pi^3} \sum_{i=-\infty,\, i \ne 0}^{\infty} \Phi_x\!\left(\frac{2\pi}{q} i\right)\frac{(-1)^i - 1}{i^3}$,  (31)

$\overline{e_q^2} = \frac{q^2}{8} + \frac{3q^2}{\pi^4} \sum_{i=-\infty,\, i \ne 0}^{\infty} \Phi_x\!\left(\frac{2\pi}{q} i\right)\frac{(-1)^i - 1}{i^4}$.  (32)

The variance of the random variable eq is calculated from the formula:

$\mathrm{Var}[e_q] = \frac{q^2}{8} + \frac{3q^2}{\pi^4} \sum_{i=-\infty,\, i \ne 0}^{\infty} \Phi_x\!\left(\frac{2\pi}{q} i\right)\frac{(-1)^i - 1}{i^4} + \frac{q^2}{\pi^6} \sum_{i=-\infty,\, i \ne 0}^{\infty} \sum_{k=-\infty,\, k \ne 0}^{\infty} \Phi_x\!\left(\frac{2\pi}{q} i\right)\Phi_x\!\left(\frac{2\pi}{q} k\right)\frac{((-1)^i - 1)((-1)^k - 1)}{i^3 k^3}$.  (33)

For $\Phi_x(v) = 0$, $v = 2\pi i/q$, $i = \pm 1, \pm 2, \ldots$, the function $f_{e_q}(x)$ assumes the form (29). Additionally:

$\Phi_{e_q}(v) = \operatorname{sinc}\!\left(\frac{qv}{2}\right) \operatorname{sinc}^2\!\left(\frac{qv}{4}\right)$,  (34)

$\mu_{e_q} = 0$,  (35)

$\mathrm{Var}[e_q] = \overline{e_q^2} = \frac{q^2}{8}$.  (36)

On the basis of (34) the k-th moment of the random variable eq can be determined:

$E[e_q^k] = q^k \left(4 - 2^{-k}\right) \frac{1+(-1)^k}{(k+1)(k+2)(k+3)}$.  (37)

In this case, the effects of randomized quantization correspond to quantization with a Simpson distributed dither
signal with the amplitude q/2.
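The same experiment can be repeated for the Simpson-distributed randomizing signal; a Θ with the Simpson (triangular) distribution on [0, 1] can be generated as the average of two independent uniform variables, and the empirical error variance should approach q²/8, as in (36). This sketch is an added illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
q = 1.0
x = rng.normal(0.0, 10.0 * q, 1_000_000)
# Simpson (triangular) distribution on [0, 1]: average of two uniform variables
theta = 0.5 * (rng.uniform(0, 1, x.size) + rng.uniform(0, 1, x.size))
eq = q * np.floor(x / q + 1.0 - theta) - x   # randomized quantizer error

print(eq.var())   # ~ q**2/8, Eq. (36)
```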

3.4. Quantization randomized with the Gauss-distributed random signal


Let Θ be a Gaussian random variable with the parameters σ = 1/6 and μ = 1/2. Let us assume that d is a new random variable with the density:

$f_d(x) = \frac{6}{q\sqrt{2\pi}}\, e^{-\frac{18x^2}{q^2}}$,  (38)

resulting from the transformation of the random variable Θ. The random variable d has the Gaussian distribution with the parameters σd = q/6 and μd = 0. The form of the distribution follows from inputting a Gaussian signal with the parameters σ and μ into the quantizer. According to the 3-sigma rule, such a signal will assume values from the interval [0, 1] with probability 99.7% [16].
As a result of the convolution operation of the functions (8) and (38) we obtain a new form of the function (Fig. 8):

$g(x) = \begin{cases} \frac{1}{2q}\left(\operatorname{erf}\!\left(\frac{3(q-2x)}{\sqrt{2}\,q}\right) + \operatorname{erf}\!\left(\frac{3(q+2x)}{\sqrt{2}\,q}\right)\right) & \text{for } |x| \le q, \\ 0 & \text{for } |x| > q, \end{cases}$  (39)

which, according to (7), is used to determine the probability density function $f_{e_q}(x)$ of the random variable eq. Function (39) is the convolution of the uniform distribution over the interval [-q/2, q/2] and the Gaussian distribution with the parameters σd and μd. The function erf(·) is the Gaussian error function [17]. The random variable eq assumes values from the interval [-q, q].

Fig. 8. Convolution of the uniform distribution in the interval [-q/2, q/2] and the Gaussian distribution with parameters σd and μd.

The characteristic function of the random variable eq is given by the formula:


$\Phi_{e_q}(v) = \sum_{i=-\infty}^{\infty} \Phi_x\!\left(\frac{2\pi}{q} i\right) \operatorname{sinc}\!\left(\frac{qv}{2} + \pi i\right) e^{-\frac{1}{72}(qv + 2\pi i)^2}$.  (40)

The mean and mean square of the random variable eq are:


$\mu_{e_q} = \frac{q}{j2\pi} \sum_{i=-\infty,\, i \ne 0}^{\infty} \Phi_x\!\left(\frac{2\pi}{q} i\right)\frac{(-1)^i}{i}\, e^{-\frac{1}{18}\pi^2 i^2}$,  (41)

$\overline{e_q^2} = \frac{q^2}{9} + \frac{q^2}{2} \sum_{i=-\infty,\, i \ne 0}^{\infty} \Phi_x\!\left(\frac{2\pi}{q} i\right)\left(\frac{1}{\pi^2 i^2} + \frac{1}{9}\right) e^{-\frac{1}{18}\pi^2 i^2} (-1)^i$.  (42)

The variance of the random variable eq is calculated from the formula:


$\mathrm{Var}[e_q] = \frac{q^2}{9} + \frac{q^2}{2} \sum_{i=-\infty,\, i \ne 0}^{\infty} \Phi_x\!\left(\frac{2\pi}{q} i\right)\left(\frac{1}{\pi^2 i^2} + \frac{1}{9}\right) e^{-\frac{1}{18}\pi^2 i^2} (-1)^i + \frac{q^2}{4\pi^2} \sum_{i=-\infty,\, i \ne 0}^{\infty} \sum_{k=-\infty,\, k \ne 0}^{\infty} \Phi_x\!\left(\frac{2\pi}{q} i\right)\Phi_x\!\left(\frac{2\pi}{q} k\right) e^{-\frac{1}{18}\pi^2 (i^2 + k^2)}\, \frac{(-1)^{i+k}}{ik}$.  (43)

For $\Phi_x(v) = 0$, $v = 2\pi i/q$, $i = \pm 1, \pm 2, \ldots$, the function $f_{e_q}(x)$ assumes the form (39). Additionally:

$\Phi_{e_q}(v) = \operatorname{sinc}\!\left(\frac{qv}{2}\right) e^{-\frac{1}{72} q^2 v^2}$,  (44)

$\mu_{e_q} = 0$,  (45)

$\mathrm{Var}[e_q] = \overline{e_q^2} = \frac{q^2}{9}$.  (46)

Using (44) the k-th moment of the random variable eq can be determined:

$E[e_q^k] = \frac{1}{4\sqrt{\pi}} \sum_{i=0}^{k} \binom{k}{i}\, 3^{i-k}\, 2^{-\frac{k+i}{2}}\, \Gamma\!\left(\tfrac{1}{2}(k-i+1)\right) \frac{(1+(-1)^i)(1+(-1)^k)}{i+1}\; q^k$,  (47)

where () is the Gamma function [17].


In this case, the effects of randomized quantization correspond to quantization with a Gaussian distributed dither
signal with the standard deviation q/6.
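Analogously, for the Gaussian randomizing signal with μ = 1/2 and σ = 1/6 the empirical error variance should approach q²/9, as in (46). An added illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
q = 1.0
x = rng.normal(0.0, 10.0 * q, 1_000_000)
theta = rng.normal(0.5, 1.0 / 6.0, x.size)   # Gaussian randomizing signal, mu = 1/2, sigma = 1/6
eq = q * np.floor(x / q + 1.0 - theta) - x   # randomized quantizer error

print(eq.var())   # ~ q**2/9, Eq. (46)
```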

4. Research

The results from Section 3 can be used in practice to investigate the properties of the signal parameters and
characteristics. To this end, the bias and variance of the estimator of a specific signal are determined. The bias and
variance, respectively, are the systematic and random components of the estimator mean-square error [16].
Let us assume that the samples x[i] of a certain signal have undergone quantization with the step q. In order to limit
the quantization effects, the process has been repeated applying a quantizer randomization technique and dithering
corresponding to this technique. This means that the samples x[i] of the original signal have been quantized in the
presence of the samples of the randomizing signal, or, respectively, the samples y[i] being a sum of the original signal
samples x[i] and the dither samples d[i] have been quantized.
We shall assume that property (4) is satisfied. It is shown in Section 2 that randomized quantization techniques
and those with dither are then equivalent.
Let us use this fact to investigate the properties of the mean value estimator.
Let us examine the estimator of the mean value of the random variable x calculated on the basis of M quantized
samples yq[i]:

$\bar{y}_q = \frac{1}{M} \sum_{i=0}^{M-1} y_q[i]$.  (48)

It can be shown that estimator (48) is biased, i.e., the error:


$b_{\bar{y}_q} = E[\bar{y}_q] - E[x] = E[y_q] - E[x]$  (49)

is different from zero. The estimator variance is calculated on the basis of the formula:

$\mathrm{Var}[\bar{y}_q] = E\left[\left(\bar{y}_q\right)^2\right] - E^2[\bar{y}_q] = \frac{1}{M}\left(E[y_q^2] - E^2[y_q]\right)$.  (50)

The estimator mean-square error is equal to:

$\mathrm{MSE}[\bar{y}_q] = \mathrm{Var}[\bar{y}_q] + b_{\bar{y}_q}^2 = \frac{1}{M}\left(E[y_q^2] - E^2[y_q]\right) + \left(E[y_q] - E[x]\right)^2$.  (51)

To determine the values of quantities (49)-(51), we apply the ordinary first- and second-order moments of the random variables x and yq [14]:

$E[x] = -j\left.\frac{d}{dv}\Phi_x(v)\right|_{v=0}, \qquad E[y_q] = -j\left.\frac{d}{dv}\Phi_{y_q}(v)\right|_{v=0}, \qquad E[y_q^2] = -\left.\frac{d^2}{dv^2}\Phi_{y_q}(v)\right|_{v=0}$,  (52)

where:
$\Phi_{y_q}(v) = \sum_{i=-\infty}^{\infty} \Phi_y\!\left(v + \frac{2\pi}{q} i\right) \operatorname{sinc}\!\left(\frac{qv}{2} + \pi i\right)$.  (53)

Since y = x + d is a sum of the independent random variables x and d, then Φy(v) = Φx(v)Φd(v). Hence:

$\Phi_{y_q}(v) = \sum_{i=-\infty}^{\infty} \Phi_x\!\left(v + \frac{2\pi}{q} i\right) \Phi_d\!\left(v + \frac{2\pi}{q} i\right) \operatorname{sinc}\!\left(\frac{qv}{2} + \pi i\right)$.  (54)

It follows from (54) that in order to determine the values of quantities (49) – (51) it is necessary to establish the
form of the characteristic functions of the random variables x and d.
If a sinusoidal signal is being quantized, then [14]:

$\Phi_x(v) = e^{jA_0 v} J_0(Av)$,  (55)

where A ∈ R+\{0} is the signal amplitude, A0 ∈ R is the signal constant component, and J0(·) is the Bessel function of zero order [17].
Let us assume that a technique of quantizer randomization with a uniformly distributed random signal over the
interval [0, 1] has been applied. This corresponds to quantization with dither with the amplitude Ad=q/2. It means that
[14]:

$\Phi_d(v) = \operatorname{sinc}(A_d v) = \operatorname{sinc}\!\left(\frac{q}{2} v\right)$.  (56)

If a quantizer is randomized with a Simpson distribution random signal over the interval [0, 1], then the dither has the amplitude Ad = q/2, and [14]:

$\Phi_d(v) = \operatorname{sinc}^2\!\left(\frac{A_d}{2} v\right) = \operatorname{sinc}^2\!\left(\frac{q}{4} v\right)$.  (57)

In the case when the randomizing signal is a Gaussian random signal with the parameters σ = 1/6 and μ = 1/2, then the dither has the standard deviation σd = q/6. Moreover [14]:

$\Phi_d(v) = e^{-0.5 \sigma_d^2 v^2} = e^{-\frac{1}{72} q^2 v^2}$.  (58)

The research results are illustrated in Figs. 9-11 by example diagrams of quantities (49)-(51). The errors and variances have been determined assuming A0 = 1.234, q = 1 or q = 0.01, and M = 1000. In the diagrams, the results obtained for the deterministic quantizer are denoted by 1. The results obtained for the quantizer randomized with Gaussian, Simpson and uniform random signals are denoted by 2, 3 and 4, respectively.
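The diagrams below can be approximately reproduced by a Monte Carlo sketch of the following kind (an added illustration; the paper itself evaluates the closed-form expressions (49)-(51) with the moments (52) and the characteristic functions (53)-(58)). A random-phase sinusoid x = A0 + A sin(φ) is sampled, quantized deterministically or with a uniformly distributed randomizing signal, and the bias and variance of the mean-value estimator (48) are estimated over repeated runs:

```python
import numpy as np

def mean_estimate(q, A, A0=1.234, M=1000, runs=2000, randomize=False, seed=0):
    """Monte Carlo bias (49) and variance (50) of the mean-value estimator (48)
    for a random-phase sinusoid x = A0 + A*sin(phi), quantized with step q."""
    rng = np.random.default_rng(seed)
    means = np.empty(runs)
    for r in range(runs):
        phi = rng.uniform(0.0, 2.0 * np.pi, M)
        x = A0 + A * np.sin(phi)
        if randomize:
            theta = rng.uniform(0.0, 1.0, M)        # uniform randomizing signal
            xq = q * np.floor(x / q + 1.0 - theta)  # randomized quantizer, Eq. (3)
        else:
            xq = q * np.floor(x / q + 0.5)          # deterministic quantizer, Eq. (1)
        means[r] = xq.mean()                        # estimator (48)
    return means.mean() - A0, means.var()           # bias (49) and variance (50)

for A in (0.5, 1.0, 1.5):
    print(A, mean_estimate(q=1.0, A=A),
             mean_estimate(q=1.0, A=A, randomize=True, seed=1))
```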

Fig. 9. Diagram of quantity (49) for (a) q=1, (b) q=0.01.

Fig. 10. Diagram of quantity (50) for (a) q=1, (b) q=0.01.

Fig. 11. Diagram of quantity (51) for (a) q=1, (b) q=0.01.

Notice that randomized quantization, or the corresponding quantization with dither, limits the influence of quantization. The absolute values of the systematic error of the mean value estimator decrease along with an increase in the amplitude of the original signal. They may even be eliminated if the quantizer is randomized with a uniformly distributed random signal. The obtained results show that the randomization of the quantizer with such a signal yields the best results. Yet in practice, the estimator variance also ought to be kept in mind. Along with an increase in the quantizer resolution, the significance of the bias rapidly decreases and the variance becomes the dominant component of the mean-square error. Notice that regardless of the resolution, quantizer randomization with a Gaussian signal yields the smallest variance values. Thus, such a signal allows better control of the estimator mean-square error.

5. Conclusions

In the present paper we have analyzed the properties of the deterministic and randomized quantizer. It has been
shown that randomization reduces the effects of quantization. We have discussed examples of randomizing signals
and adjusted their parameters so as to introduce an appropriate change in the probabilistic properties of the quantizer.
The predicted alteration of the probabilistic properties of the quantizer will occur in the case of a high bit-resolution quantizer. Low-bit quantizers, quantizers with stuck bits, or quantizers that lose output codes may not yield the required quantization error density. It has been demonstrated that randomized quantization is characterized by a greater quantization error variance compared to deterministic quantization.
Randomized quantization is reducible to dither quantization, which has been well researched in the literature. In
this paper we have discussed the appropriate modification of the mathematical tools which make this operation
possible. In the dither quantization case, it is necessary to take into account the dither signal in the quantization process.
Furthermore, the quantization operation error is not equivalent to quantization noise. By using randomized
quantization, the noise introduced into the quantizer is not taken into account in the effect of the quantization operation.
In this case the quantization error is equivalent to the quantization noise.
The quantizer randomization cases which have already been discussed in the literature usually refer to non-
Gaussian signals. In practice, these solutions seem to be difficult to apply because they require that an appropriate
randomizing signal be introduced into the quantizer, and the results of quantization depend on the fitted parameters
of the input randomizing signal.
In this paper we propose using a Gaussian signal to randomize the quantizer. This signal has better quantization
error suppression qualities even at a small number of quantization levels. The Gaussian signal is also simpler to
generate in realistic conditions. Moreover, such a signal allows better control of the estimator errors determined on
the basis of quantized data. By carrying out mathematical analyses we have shown that for the case where the

quantized signal and the quantization error are uncorrelated, for the same measurement conditions the variance of
quantization error is the smallest for the quantizer randomized with a Gaussian signal and the greatest for the quantizer randomized with a uniformly distributed signal. We believe that when the parameters of randomizing signals are comparable,
the best results are obtained for the Gaussian signal.

References

[1] R. Zamir, M. Feder. (1992) “On Universal Quantization by Randomized Uniform/Lattice Quantizers.” IEEE Trans. on Information Theory.
38 (2): 428–436.
[2] H. Berndt. (2001) “Differentially Randomized Quantization in Sigma-delta Analog-to-Digital Converters.” Electronics, Circuits and Systems.
ICECS 2001, The 8th IEEE International Conference on. 2: 1057–1060.
[3] G. Chiorboli. (2003) “Uncertainty of Mean Value and Variance Obtained From Quantized Data.” IEEE Trans. on Instrumentation and
Measurement. 52 (4): 1273–1278.
[4] I. Bilinskis. (2007) “Digital Alias-Free Signal Processing”. Wiley.
[5] E. Akyol, K. Rose. (2012). “On Constrained Randomized Quantization.” IEEE Trans. on Signal Processing. Cornell University Library.
[Online]. http://arxiv.org/pdf/1206.2974v1.pdf
[6] R. M. Gray, T. G. Stockham Jr. (1993). “Dithered Quantizers.” IEEE Trans. on Information Theory. 39 (3): 805–812.
[7] M. F. Wagdy. (1989). “Effect of Various Dither Forms on Quantization Errors of Ideal A/D Converters.” IEEE Trans. on Instrumentations
and Measurements. 38 (4): 850–855.
[8] A. Domańska (1995). “Influencing the Reliability in Measurement Systems by the Application of A-D Conversion with Dither Signal.”
University of Technology Publisher. Poznań (in Polish).
[9] M. F. Wagdy. (1996). “Effect of Additive Dither on the Resolution of ADC’s with Single-Bit or Multibit Errors.” IEEE Trans. on
Instrumentations and Measurements. 45 (2): 610–615.
[10] J. Janiczek. (1990). “Correction of Static Characteristics of the Measuring Circuits by Shaping the Conversion Functions of A-D Converters.”
Metrology and Measurement Systems, Monograph, 5 (in Polish).
[11] National Instruments. Technical Documentation of Single-Channel High-Speed Digitizer PXI-562x. Available:
http://www.ni.com/pdf/manuals/322949c.pdf (access 20.02.2022).
[12] Texas Instruments. Technical Documentation of Analog-to-Digital Converter, 16-bit, 80/105/135-MSPS. Available:
http://www.ti.com/lit/ds/slas610c/slas610c.pdf (access 20.02.2022).
[13] Analog Devices. Technical Documentation of Analog-to-Digital Converter, 16-bit, 80 MSPS/105 MSPS/125 MSPS, 1.8 V Dual. Available:
https://www.analog.com/media/en/technical-documentation/data-sheets/AD9268.pdf (access 20.02.2022).
[14] B. Widrow, I. Kollar. (2008) “Quantization Noise, Roundoff Error in Digital Computation, Signal Processing”. Control and Communications.
Cambridge University Press.
[15] A. B. Sripad, D. L. Snyder. (1977). “A Necessary and Sufficient Condition for Quantization Errors to be Uniform and White.” IEEE Trans.
on Acoustics Speech and Signal Processing. 25 (5): 442–448.
[16] A. Papoulis, S. U. Pillai (2002). “Probability. Random Variables, and Stochastic Processes.” McGraw-Hill. New York.
[17] G. T. Korn (2002). “Mathematical Handbook for Scientists and Engineers.” McGraw-Hill. New York.
[18] Arkady Molev-Shteiman, Xiao-Feng Qi, Laurence Mailander, Narayan Prasad, Bertrand M. Hochwald (2020). “New Equivalent Model of a
Quantizer With Noisy Input and Its Applications for MIMO System Analysis and Design.” IEEE Access, 8. DOI:
10.1109/ACCESS.2020.3021666
[19] Yoojin Choi, Mostafa El-Khamy, Jungwon Lee (2019). “Universal Deep Neural Network Compression.” SoC R&D, Samsung Semiconductor
Inc. San Diego, CA 92121, USA.
[20] Tom Bäckström, Johannes Fischer, Sneha Das. (2018). “Dithered Quantization for Frequency-Domain Speech and Audio Coding.” In
Proceedings of Interspeech. International Speech Communication Association. 3533-3537. https://doi.org/10.21437/Interspeech.2018-46
[21] Krajewski, M., Sienkowski, S., Kawecka, E., (2018) “Properties of selected frequency estimation algorithms in accurate sinusoidal voltage
measurements.” Przegląd Elektrotechniczny, 94 (12): 52-55, https://doi.org/10.15199/48.2018.12.12 (in Polish).
[22] Lal-Jadziak J., Sienkowski, S., (2009). “Variance of random signal mean square value digital estimator.” Metrology and Measurement
Systems, ISSN: 0860-8229, 16 (2) 267–277.
