
Chapter 2

Random Variables

1. Random-number generation
2. Random-variable generation

© Kleber Barcia V.

1. Objectives. Random numbers

 Discuss how random numbers are generated.
 Introduce the subsequent testing for randomness:
  Frequency test
  Autocorrelation test


Properties of random numbers

 Two important statistical properties:


 Uniformity
 Independence.
 Each random number Ri must be drawn independently from the uniform distribution on [0,1], with pdf:

f(x) = 1 for 0 ≤ x ≤ 1, and f(x) = 0 otherwise

Figure: pdf for random numbers

 The expected value and variance of each random number are:

E(r) = ∫₀¹ x dx = x²/2 |₀¹ = 1/2

V(r) = ∫₀¹ x² dx − [E(r)]² = x³/3 |₀¹ − (1/2)² = 1/3 − 1/4 = 1/12

 (In general, for a uniform distribution on [a, b]: E(r) = (a + b)/2 and V(r) = (b − a)²/12.)

Generation of pseudo-random numbers

 “Pseudo”, because generating numbers using a known method removes the potential for true randomness.
 Goal: to produce a sequence of numbers in [0,1] that simulates, or imitates, the ideal properties of random numbers (RN).
 Important considerations when selecting a RN generator:
  Fast
  Portable to different computers
  Sufficiently long cycle
  Replicable
  Closely approximates the ideal statistical properties of uniformity and independence


Techniques to generate random numbers

 Middle-square algorithm
 Middle-product algorithm
 Constant multiplier algorithm
 Linear algorithm
 Multiplicative congruential algorithm
 Additive congruential algorithm
 Congruential non-linear algorithms
 Quadratic
 Blum Blum Shub

Middle-square algorithm

1. Select a seed x0 with D digits (D > 3)
2. Square it: (x0)²
3. Take the D center digits: x1
4. Square it: (x1)²
5. Take the D center digits: x2
6. And so on. Then convert each value xi to a number ri between 0 and 1

If it is not possible to obtain the D digits in the center, pad with zeros on the left.

Middle-square algorithm

If x0 = 5735,
then:
Y0 = (5735)² = 32890225   x1 = 8902   r1 = 0.8902
Y1 = (8902)² = 79245604   x2 = 2456   r2 = 0.2456
Y2 = (2456)² = 06031936   x3 = 0319   r3 = 0.0319
Y3 = (0319)² = 101761     x4 = 0176   r4 = 0.0176
Y4 = (0176)² = 030976     x5 = 3097   r5 = 0.3097

Drawbacks:
1. Short cycles
2. The series may die out (e.g., x0 = 0012)
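The procedure above can be sketched in Python (a minimal illustration; the function name and the left-padding rule for odd-length squares are my own):

```python
def middle_square(seed, n, d=4):
    """Generate n numbers in [0,1) by squaring and keeping the d center digits."""
    rs = []
    x = seed
    for _ in range(n):
        y = str(x * x)
        # pad with zeros on the left until the d center digits are well defined
        while len(y) < d or (len(y) - d) % 2:
            y = "0" + y
        mid = (len(y) - d) // 2
        x = int(y[mid:mid + d])
        rs.append(x / 10 ** d)
    return rs
```

With the seed 5735 this reproduces the sequence in the example above.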


Middle-product algorithm

It requires 2 seeds of D digits.

1. Select 2 seeds x0 and x1 with D > 3
2. Y0 = x0·x1, and x2 = the D center digits of Y0
3. Yi = xi·xi+1, and xi+2 = the D center digits of Yi

If it is not possible to obtain the D digits in the center, pad with zeros on the left.


Middle-product algorithm

Example:

Seeds x0 = 5015 and x1 = 5734

Y0 = (5015)(5734) = 28756010   x2 = 7560   r1 = 0.7560
Y1 = (5734)(7560) = 43349040   x3 = 3490   r2 = 0.3490
Y2 = (7560)(3490) = 26384400   x4 = 3844   r3 = 0.3844
Y3 = (3490)(3844) = 13415560   x5 = 4155   r4 = 0.4155
Y4 = (3844)(4155) = 15971820   x6 = 9718   r5 = 0.9718
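A minimal Python sketch of the middle-product method (the function name is my own):

```python
def middle_product(x0, x1, n, d=4):
    """Multiply the last two numbers and keep the d center digits of the product."""
    rs = []
    for _ in range(n):
        y = str(x0 * x1)
        while len(y) < d or (len(y) - d) % 2:
            y = "0" + y          # pad with zeros on the left if needed
        mid = (len(y) - d) // 2
        x0, x1 = x1, int(y[mid:mid + d])
        rs.append(x1 / 10 ** d)
    return rs
```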


Constant multiplier algorithm

Similar to the middle-product algorithm.

1. Select a seed x0 (D > 3)
2. Select a constant “a” (D > 3)
3. Y0 = a·x0, and x1 = the D center digits of Y0
4. Yi = a·xi, and xi+1 = the D center digits of Yi
5. Repeat until the desired quantity of random numbers is obtained


Constant multiplier algorithm

Example:

If: seed x0 = 9803 and constant a = 6965

Y0 = (6965)(9803) = 68277895 x1 = 2778 r1 = 0.2778


Y1 = (6965)(2778) = 19348770 x2 = 3487 r2 = 0.3487
Y2 = (6965)(3487) = 24286955 x3 = 2869 r3 = 0.2869
Y3 = (6965)(2869) = 19982585 x4 = 9825 r4 = 0.9825
Y4 = (6965)(9825) = 68431125 x5 = 4311 r5 = 0.4311
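A minimal Python sketch of the constant-multiplier method (the function name is my own):

```python
def constant_multiplier(a, x0, n, d=4):
    """Multiply the fixed constant a by the current number; keep the d center digits."""
    rs = []
    x = x0
    for _ in range(n):
        y = str(a * x)
        while len(y) < d or (len(y) - d) % 2:
            y = "0" + y          # pad with zeros on the left if needed
        mid = (len(y) - d) // 2
        x = int(y[mid:mid + d])
        rs.append(x / 10 ** d)
    return rs
```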


Linear algorithm

 Produces a sequence of integers X1, X2, … between 0 and m−1 following the recursive relationship:

Xi+1 = (a·Xi + c) mod m,   i = 0, 1, 2, …

where a is the multiplier, c is the increment, and m is the modulus.

 The selection of the values for a, c, m, and X0 affects the statistical properties and the cycle length; all must be > 0
 “mod m” denotes the remainder of dividing (a·Xi + c) by m
 The random numbers in the interval [0,1] are generated by the relation:

Ri = Xi / (m − 1),   i = 1, 2, …

Linear algorithm

Example:

 Use X0 = 27, a = 17, c = 43, and m = 100

 The values of Xi and ri are:

X1 = (17·27 + 43) mod 100 = 502 mod 100 = 2    r1 = 2/99 = 0.0202
X2 = (17·2 + 43) mod 100 = 77 mod 100 = 77     r2 = 77/99 = 0.7778
X3 = (17·77 + 43) mod 100 = 1352 mod 100 = 52  r3 = 52/99 = 0.5253

 The maximum period N is equal to m
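The recursion can be sketched in Python (a minimal illustration; the function name is my own):

```python
def lcg(x0, a, c, m, n):
    """Linear congruential generator: X_{i+1} = (a*X_i + c) mod m, R_i = X_i / (m - 1)."""
    xs = []
    x = x0
    for _ in range(n):
        x = (a * x + c) % m
        xs.append(x)
    return xs, [xi / (m - 1) for xi in xs]
```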


Multiplicative congruential algorithm

It arises from the linear algorithm when c = 0. The recursive equation is:

Xi+1 = a·Xi mod m,   i = 0, 1, 2, …

The values of a, m, and X0 must be positive integers.

ri = Xi / (m − 1),   i = 1, 2, …

 The conditions that must be fulfilled to achieve the maximum period are:
  m = 2^g, with g an integer
  a = 3 + 8k or a = 5 + 8k, where k = 0, 1, 2, 3, …
  X0 must be an odd number
 With these conditions we achieve the period N = m/4 = 2^(g−2)

Multiplicative congruential algorithm

Example: Use x0 = 17, k = 2, g = 5

Then: a = 5+8(2) = 21 and m=25 = 32

 x1 = (21*17) mod 32 = 5 r1 = 5/31 = 0.1612


 x2 = (21*5) mod 32 = 9 r2 = 9/31 = 0.2903
 x3 = (21*9) mod 32 = 29 r3 = 29/31 = 0.9354
 x4 = (21*29) mod 32 = 1 r4 = 1/31 = 0.0322
 x5 = (21*1) mod 32 = 21 r5 = 21/31 = 0.6774
 x6 = (21*21) mod 32 = 25 r6 = 25/31 = 0.8064
 x7 = (21*25) mod 32 = 13 r7 = 13/31 = 0.4193
 x8 = (21*13) mod 32 = 17 r8 = 17/31 = 0.5483
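The example can be sketched in Python, building a = 5 + 8k and m = 2^g from the slide's parameters (function name is my own):

```python
def mcg(x0, k, g, n):
    """Multiplicative congruential generator with a = 5 + 8*k and m = 2**g."""
    a, m = 5 + 8 * k, 2 ** g
    xs = []
    x = x0
    for _ in range(n):
        x = (a * x) % m
        xs.append(x)
    return xs, [xi / (m - 1) for xi in xs]
```

With x0 = 17, k = 2, g = 5 the period is m/4 = 8, as stated above.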

Additive congruential algorithm

 Requires a sequence of n integer numbers x1, x2, x3, …, xn to generate new random numbers starting at xn+1, xn+2, …

 The recursive equation is:

xi = (xi−1 + xi−n) mod m,   i = n+1, n+2, …, N

ri = xi / (m − 1)


Additive congruential algorithm

Example:

x1 =65, x2 = 89, x3 = 98, x4 = 03, x5 = 69, m = 100

x6=(x5+x1)mod100 = (69+65)mod100 = 34 r6= 34/99 = 0.3434


x7=(x6+x2)mod100 = (34+89)mod100 = 23 r7= 23/99 = 0.2323
x8=(x7+x3)mod100 = (23+98)mod100 = 21 r8= 21/99 = 0.2121
x9=(x8+x4)mod100 = (21+03)mod100 = 24 r9= 24/99 = 0.2424
x10=(x9+x5)mod100 = (24+69)mod100 = 93 r10= 93/99 = 0.9393
x11=(x10+x6)mod100 = (93+34)mod100 = 27 r11= 27/99 = 0.2727
x12=(x11+x7)mod100 = (27+23)mod100 = 50 r12= 50/99 = 0.5050
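A minimal Python sketch of the additive congruential method (the function name is my own):

```python
def additive_congruential(seeds, m, count):
    """x_i = (x_{i-1} + x_{i-n}) mod m, where n is the number of seeds."""
    xs = list(seeds)
    n = len(seeds)
    for _ in range(count):
        xs.append((xs[-1] + xs[-n]) % m)
    return xs[n:]
```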


Congruential non-linear algorithms

Quadratic:

xi+1 = (a·xi² + b·xi + c) mod m,   i = 0, 1, 2, 3, …, N

Conditions to achieve the maximum period N = m:
  m = 2^g, with g an integer
  a must be an even number
  c must be an odd number
  (b − a) mod 4 = 1

ri = xi / (m − 1)

Congruential non-linear algorithms (quadratic)

Example:   xi+1 = (a·xi² + b·xi + c) mod m

x0 = 13, m = 8, a = 26, b = 27, c = 27

x1 = (26·13² + 27·13 + 27) mod 8 = 4
x2 = (26·4² + 27·4 + 27) mod 8 = 7
x3 = (26·7² + 27·7 + 27) mod 8 = 2
x4 = (26·2² + 27·2 + 27) mod 8 = 1
x5 = (26·1² + 27·1 + 27) mod 8 = 0
x6 = (26·0² + 27·0 + 27) mod 8 = 3
x7 = (26·3² + 27·3 + 27) mod 8 = 6
x8 = (26·6² + 27·6 + 27) mod 8 = 5
x9 = (26·5² + 27·5 + 27) mod 8 = 4
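A minimal Python sketch of the quadratic congruential method (the function name is my own):

```python
def quadratic_congruential(x0, a, b, c, m, n):
    """x_{i+1} = (a*x_i**2 + b*x_i + c) mod m."""
    xs = []
    x = x0
    for _ in range(n):
        x = (a * x * x + b * x + c) % m
        xs.append(x)
    return xs
```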

Congruential non-linear algorithms

Blum Blum Shub algorithm

It is obtained from the quadratic algorithm when:

a = 1
b = 0
c = 0

xi+1 = xi² mod m,   i = 0, 1, 2, 3, …, n
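A minimal Python sketch of the recursion (the function name and the toy modulus m = 77 = 7·11 used in the test are my own; cryptographic use of Blum Blum Shub takes m = p·q for large primes p, q congruent to 3 mod 4):

```python
def blum_blum_shub(x0, m, n):
    """x_{i+1} = x_i**2 mod m; in practice m = p*q for large primes p, q = 3 (mod 4)."""
    xs = []
    x = x0
    for _ in range(n):
        x = (x * x) % m
        xs.append(x)
    return xs
```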


Characteristics of a good generator

 Maximum Density
 Such that the values assumed by Ri, i = 1,2,…, leave no large
gaps on [0,1]
 Solution: a very large integer for modulus m

 Maximum Period
 To achieve maximum density and avoid cycling.
 Achieve by: proper choice of a, c, m, and X0.
 Most digital computers use a binary representation of numbers
 Speed and efficiency depend on the modulus m; it can be chosen as a large power of two (2^b)


Tests for Random Numbers

 Two categories:
  Testing for uniformity:
   H0: Ri ~ U[0,1]
   H1: Ri ≁ U[0,1]
  Testing for independence:
   H0: the Ri are independent
   H1: the Ri are NOT independent
  Mean test:
   H0: μri = 0.5
   H1: μri ≠ 0.5
   We can use the normal distribution with σ² = 1/12 (σ = 0.2887) and a significance level α = 0.05

Tests for Random Numbers

 When to use these tests:
  If a well-known simulation language or random-number generator is used, it is probably unnecessary to test
  If the generator is not explicitly known or documented (e.g., spreadsheet programs, symbolic/numerical calculators), the tests should be applied to the sampled numbers
 Types of tests:
  Theoretical tests: evaluate the choices of m, a, and c without actually generating any numbers
  Empirical tests: applied to actual sequences of numbers produced. Our emphasis.


Kolmogorov-Smirnov test [Uniformity Test]

 Very useful for small samples.
 Applicable to both discrete and continuous variables.

 Procedure:
1. Arrange the ri from lowest to highest
2. Determine D+, D-
3. Determine D
4. Determine the critical value of Dα, n
5. If D> Dα, n then ri are not uniform


Kolmogorov-Smirnov test [Uniformity Test]

 Example: suppose that 5 generated numbers are 0.44, 0.81, 0.14, 0.05, 0.93.

Step 1: sort the R(i) ascending:

R(i)             0.05   0.14   0.44   0.81   0.93
i/N              0.20   0.40   0.60   0.80   1.00
i/N − R(i)       0.15   0.26   0.16  −0.01   0.07
R(i) − (i−1)/N   0.05  −0.06   0.04   0.21   0.13

Step 2: D+ = max {i/N − R(i)} = 0.26
        D− = max {R(i) − (i−1)/N} = 0.21

Step 3: D = max(D+, D−) = 0.26

Step 4: for α = 0.05 and n = 5, the critical value is D0.05,5 = 0.565

Since D = 0.26 < 0.565, do not reject H0.
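Steps 1-3 can be sketched in Python (the function name is my own; the critical value still comes from a table):

```python
def ks_statistic(data):
    """Kolmogorov-Smirnov statistic D for H0: data ~ U[0,1]."""
    r = sorted(data)                                    # step 1: sort ascending
    n = len(r)
    d_plus = max((i + 1) / n - x for i, x in enumerate(r))
    d_minus = max(x - i / n for i, x in enumerate(r))
    return max(d_plus, d_minus)                         # step 3: D = max(D+, D-)
```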


Chi-square test [Uniformity Test]

 The chi-square test uses the statistic:

X0² = Σ (Oi − Ei)² / Ei,   summed over the n classes

where n is the number of classes, Ei is the expected count in the ith class, and Oi is the observed count in the ith class.

 The chi-square distribution with n−1 degrees of freedom is tabulated in Table A.6
 For a uniform distribution, the expected number in each class is:

Ei = N / n,   where N is the total number of observations

 Valid only for large samples, e.g., N ≥ 50
 If X0² > X²(α; n−1), reject the hypothesis of uniformity

Chi-square test [Uniformity Test]

 Example: use the chi-square test with α = 0.05 to test whether the data shown below are uniformly distributed
0.34 0.90 0.25 0.89 0.87 0.44 0.12 0.21 0.46 0.67
0.83 0.76 0.79 0.64 0.70 0.81 0.94 0.74 0.22 0.74
0.96 0.99 0.77 0.67 0.56 0.41 0.52 0.73 0.99 0.02
0.47 0.30 0.17 0.82 0.56 0.05 0.45 0.31 0.78 0.05
0.79 0.71 0.23 0.19 0.82 0.93 0.65 0.37 0.39 0.42
0.99 0.17 0.99 0.46 0.05 0.66 0.10 0.42 0.18 0.49
0.37 0.51 0.54 0.01 0.81 0.28 0.69 0.34 0.75 0.49
0.72 0.43 0.56 0.97 0.30 0.94 0.96 0.58 0.73 0.05
0.06 0.39 0.84 0.24 0.40 0.64 0.40 0.19 0.79 0.62
0.18 0.26 0.97 0.88 0.64 0.47 0.60 0.11 0.29 0.78

 The test uses n = 10 intervals of equal length:

[0, 0.1), [0.1, 0.2), …, [0.9, 1.0)



Chi-square test [Uniformity Test]

Interval   Oi    Ei    Oi−Ei   (Oi−Ei)²   (Oi−Ei)²/Ei
1           8    10     −2        4          0.4
2           8    10     −2        4          0.4
3          10    10      0        0          0.0
4           9    10     −1        1          0.1
5          12    10      2        4          0.4
6           8    10     −2        4          0.4
7          10    10      0        0          0.0
8          14    10      4       16          1.6
9          10    10      0        0          0.0
10         11    10      1        1          0.1
Total     100   100      0                   3.4

 The X0² value = 3.4
 Compared to the value from Table A.6, X²0.05,9 = 16.9
 Since 3.4 < 16.9, H0 is not rejected.
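The statistic can be computed directly from the class counts (a minimal sketch; the function name is my own, and the critical value still comes from Table A.6):

```python
def chi_square_stat(observed, expected):
    """X0^2 = sum over classes of (Oi - Ei)^2 / Ei."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# class counts Oi from the table; Ei = N/n = 100/10 = 10 for each class
x0_sq = chi_square_stat([8, 8, 10, 9, 12, 8, 10, 14, 10, 11], [10] * 10)
```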

Tests for Autocorrelation [Test of Independence]

 It tests the correlation between every m-th number, starting with the i-th number
 The autocorrelation ρim is computed between the numbers Ri, Ri+m, Ri+2m, …, Ri+(M+1)m
 M is the largest integer such that i + (M+1)m ≤ N, where N is the total number of values in the sequence
 Hypotheses:
   H0: ρim = 0, the numbers are independent
   H1: ρim ≠ 0, the numbers are NOT independent
 If the values are not correlated, then for large values of M the distribution of the estimator ρ̂im is approximately normal.


Tests for Autocorrelation [Test of Independence]

 The test statistic is:

Z0 = ρ̂im / σ̂ρim

 Z0 is distributed approximately normally with mean 0 and variance 1, where:

ρ̂im = [1/(M+1)] · Σ (from k = 0 to M) Ri+km · Ri+(k+1)m − 0.25

σ̂ρim = sqrt(13M + 7) / [12(M + 1)]

 If ρim > 0, the subsequence has positive autocorrelation
  High random numbers tend to be followed by high ones, and vice versa.
 If ρim < 0, the subsequence has negative autocorrelation
  Low random numbers tend to be followed by high ones, and vice versa.

Example [Test for autocorrelation]

 Test whether the 3rd, 8th, 13th, and so on, are auto-correlated. Use α = 0.05

Then i = 3, m = 5, N = 30

0.12 0.01 0.23 0.28 0.89 0.31 0.64 0.28 0.83 0.93
0.99 0.15 0.33 0.35 0.91 0.41 0.60 0.27 0.75 0.88
0.68 0.49 0.05 0.43 0.95 0.58 0.19 0.36 0.69 0.87

i + (M+1)m ≤ N
3 + (4+1)·5 = 28 ≤ 30

Then M = 4

Example [Test for autocorrelation]

1 (0.23)(0.28)  (0.28)(0.33)  (0.33)(0.27)


ρˆ 35     0.25
4  1  (0.27)(0.05)  (0.05)(0.36) 

 0.1945

13(4)  7
σˆ ρ35   0.128
12( 4  1 )

0.1945
Z0    1.516
0.1280

 From Table A.3, z0.025 = 1.96. then the hypothesis is not


rejected.
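The whole computation can be sketched in Python (the function name is my own; M is derived from i, m, and N as above):

```python
import math

def autocorrelation_z(data, i, m):
    """Z0 statistic for the autocorrelation test on R_i, R_{i+m}, R_{i+2m}, ... (1-based i)."""
    n = len(data)
    big_m = (n - i) // m - 1                            # largest M with i + (M+1)*m <= N
    s = sum(data[i - 1 + k * m] * data[i - 1 + (k + 1) * m] for k in range(big_m + 1))
    rho = s / (big_m + 1) - 0.25
    sigma = math.sqrt(13 * big_m + 7) / (12 * (big_m + 1))
    return rho / sigma

data = [0.12, 0.01, 0.23, 0.28, 0.89, 0.31, 0.64, 0.28, 0.83, 0.93,
        0.99, 0.15, 0.33, 0.35, 0.91, 0.41, 0.60, 0.27, 0.75, 0.88,
        0.68, 0.49, 0.05, 0.43, 0.95, 0.58, 0.19, 0.36, 0.69, 0.87]
z0 = autocorrelation_z(data, i=3, m=5)
```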


Other tests of Independence

 Runs up and down test
 Runs above and below the mean test
 Poker test
 Series test
 Gap test

-----HW 4 CHAP 2 -----


Bonus: (3 points toward the partial exam)

Write a computer program that generates random numbers (4 digits) using the multiplicative congruential algorithm. Allow the user to enter the values x0, k, g.

Submit the program file.



2. Objectives. Random Variables

 Understand how to generate random variables from a specific distribution as inputs to a simulation model.

 Illustrate some commonly used techniques for generating random variables:
  Inverse-transform technique
  Acceptance-rejection technique


Inverse-transform technique

 It can be used to generate random variables from the exponential, uniform, Weibull, triangular, and empirical distributions.
 Concept:
  For the cdf: r = F(x)
  Solve for x: x = F⁻¹(r)

Figure: given a value r1 on the vertical axis of the cdf, x1 = F⁻¹(r1) is read off the horizontal axis


Exponential Distribution [inverse-transform]

 The exponential cdf:

r = F(x) = 1 − e^(−λx),   for x ≥ 0

Solving for x:

e^(−λx) = 1 − r
−λx = ln(1 − r)
x = −(1/λ) ln(1 − r)

To generate X1, X2, X3, …:

Xi = F⁻¹(ri) = −(1/λ) ln(1 − ri) = −(1/λ) ln(ri)

(the last step is valid because 1 − ri is also uniform on [0,1])

Figure: Inverse-transform technique for exp (λ = 1)
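The transform can be sketched in Python (the function name is my own):

```python
import math

def exp_inverse(lam, r):
    """Inverse-transform sample: X = -(1/lam) * ln(1 - r), with r ~ U(0,1)."""
    return -math.log(1 - r) / lam
```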


Table of Distributions [inverse-transform]

Distribution   Cumulative F(x)                          Inverse transform
Uniform        F(x) = (x − a)/(b − a),  a ≤ x ≤ b       xi = a + (b − a)·ri
Weibull        F(x) = 1 − e^(−(x/α)^β),  x ≥ 0          xi = α·[−ln(1 − ri)]^(1/β)
Triangular     F(x) = x²/2,  0 ≤ x ≤ 1                  xi = sqrt(2·ri),  0 ≤ ri ≤ 1/2
(on [0, 2])    F(x) = 1 − (2 − x)²/2,  1 < x ≤ 2        xi = 2 − sqrt(2·(1 − ri)),  1/2 < ri ≤ 1
Normal         (no closed-form cdf)                     xi = μ + σ·zi

Exponential Distribution [inverse-transform]

 Example: generate 200 variables Xi with exponential distribution (λ = 1). The first five:

i    1       2       3       4       5
Ri   0.1306  0.0422  0.6597  0.7965  0.7696
Xi   0.1400  0.0431  1.078   1.592   1.468

 The histogram of the ri follows a uniform distribution; the histogram of the xi follows an exponential distribution.

Uniform Distribution [Example]

 The temperature of an oven behaves uniformly within the range 95 to 100 °C. Model the behavior of the temperature using xi = a + (b − a)·ri:

Measurement  ri    Temperature (°C)
1            0.48  95 + 5·0.48 = 97.40
2            0.82  99.10
3            0.69  98.45
4            0.67  98.35
5            0.00  95.00

Exponential Distribution [Example]

 Historical data on the service time of a bank behaves exponentially with a mean of 3 min/customer. Simulate the behavior of this random variable using xi = −3·ln(1 − ri):

Customer  ri    Service time (min)
1         0.36  3.06
2         0.17  5.31
3         0.97  0.09
4         0.50  2.07
5         0.21  0.70

Poisson Distribution [Example]

 The number of pieces entering a production system follows a Poisson distribution with mean 2 pcs/hour. Simulate the arrival behavior of the pieces to the system.

p(x) = e^(−a)·a^x / x!,   x = 0, 1, …        F(x) = Σ (from i = 0 to x) e^(−a)·a^i / i!

x    p(x)     F(x)    Random digits
0    0.1353   0.1353  0-0.1353
1    0.2706   0.4060  0.1353-0.4060
2    0.2706   0.6766  0.4060-0.6766
3    0.1804   0.8571  0.6766-0.8571
4    0.0902   0.9473  0.8571-0.9473
5    0.0369   0.9834  0.9473-0.9834
6    0.0120   0.9954  0.9834-0.9954
7    0.0034   0.9989  0.9954-0.9989
8    0.0008   0.9997  0.9989-0.9997
9    0.0001   0.9999  0.9997-0.9999
10   0.00003  0.9999  0.9999-0.9999

Arrival  ri      Pieces/hour
1        0.6754  2
2        0.0234  0
3        0.7892  3
4        0.5134  2
5        0.3331  1
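The table lookup can be sketched in Python, building the cdf term by term instead of tabulating it in advance (the function name is my own):

```python
import math

def poisson_inverse(mean, r):
    """Smallest x with F(x) >= r for the Poisson(mean) cdf, built term by term."""
    x = 0
    p = math.exp(-mean)        # p(0)
    f = p                      # running cdf F(x)
    while r > f:
        x += 1
        p *= mean / x          # p(x) = p(x-1) * mean / x
        f += p
    return x
```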

Convolution Method

 In some probability distributions, the random variable can be generated by adding other random variables.

Erlang Distribution
 The sum of k exponential variables, each with mean 1/(kλ), so that the total mean is 1/λ:

Y = x1 + x2 + … + xk
Y = (−1/(kλ))·ln(1 − r1) + (−1/(kλ))·ln(1 − r2) + … + (−1/(kλ))·ln(1 − rk)
  = (−1/(kλ))·ln[ Π (1 − ri) ]

Convolution Method

 The processing time of a certain machine follows a 3-Erlang distribution with mean 1/λ = 8 min/piece:

Y = −(8/3)·ln[Π(1 − ri)] = −(8/3)·ln[(1 − r1)(1 − r2)(1 − r3)]

Piece  1−r1  1−r2  1−r3  Process time (min)
1      0.28  0.52  0.64   6.328
2      0.96  0.37  0.83   3.257
3      0.04  0.12  0.03  23.588
4      0.35  0.44  0.50   6.837
5      0.77  0.09  0.21  11.279
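The convolution formula can be sketched in Python (the function name is my own; the argument is the list of (1 − ri) values, as in the table above):

```python
import math

def erlang_time(k, mean, one_minus_r):
    """k-Erlang sample with mean `mean`: Y = -(mean/k) * ln(product of the (1 - r_i))."""
    prod = 1.0
    for u in one_minus_r:
        prod *= u
    return -(mean / k) * math.log(prod)
```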

Empirical continuous distribution [inverse-transform]

 Used when a theoretical distribution is not applicable.
 To exploit the collected empirical data:
  Resample the observed data
  Interpolate between observed data points to fill in the gaps
 For a sample set of size n:
  Arrange the data from smallest to largest:  x(1) ≤ x(2) ≤ … ≤ x(n)
  Assign the probability 1/n to each interval x(i−1) < x ≤ x(i)

X = F̂⁻¹(R) = x(i−1) + ai·(R − (i−1)/n)

where the slope is

ai = (x(i) − x(i−1)) / (i/n − (i−1)/n) = (x(i) − x(i−1)) / (1/n)

Empirical continuous distribution [inverse-transform]

 Example: suppose that the observed data of 100 repair times of a machine are:

i   Interval (Hours)   Frequency   Relative Frequency   Cumulative Frequency, ci   Slope, ai
1   0.25 ≤ x ≤ 0.5     31          0.31                 0.31                       0.81
2   0.5 < x ≤ 1.0      10          0.10                 0.41                       5.0
3   1.0 < x ≤ 1.5      25          0.25                 0.66                       2.0
4   1.5 < x ≤ 2.0      34          0.34                 1.00                       1.47

Consider r1 = 0.83:

c3 = 0.66 < r1 ≤ c4 = 1.00

X1 = x(3) + a4·(r1 − c3)
   = 1.5 + 1.47·(0.83 − 0.66)
   = 1.75
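The lookup-and-interpolate step can be sketched in Python (the function name is my own; the slope is computed as interval width divided by the interval's probability mass, which matches the ai column and the worked example above):

```python
def empirical_inverse(r, endpoints, cum_freq):
    """Piecewise-linear inverse cdf; endpoints has one more entry than cum_freq."""
    c_prev = 0.0
    for i, c in enumerate(cum_freq):
        if r <= c:
            slope = (endpoints[i + 1] - endpoints[i]) / (c - c_prev)
            return endpoints[i] + slope * (r - c_prev)
        c_prev = c
    return endpoints[-1]
```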


Empirical Discrete Distribution

 Example: suppose the number of shipments, x, on the loading dock of IHW company is either 0, 1, or 2
 Data - probability distribution:

x   p(x)   F(x)
0   0.50   0.50
1   0.30   0.80
2   0.20   1.00

 Method - given R, the generation scheme becomes:

x = 0  if        R ≤ 0.5
x = 1  if  0.5 < R ≤ 0.8
x = 2  if  0.8 < R ≤ 1.0

Consider R1 = 0.73:
F(xi−1) < R ≤ F(xi)
F(0) = 0.5 < 0.73 ≤ F(1) = 0.8
Hence, x1 = 1
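The generation scheme is a table lookup, sketched here in Python (the function name is my own):

```python
def discrete_lookup(r, values, cdf):
    """Return the first value whose cumulative probability is >= r (table lookup)."""
    for v, f in zip(values, cdf):
        if r <= f:
            return v
    return values[-1]
```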

Acceptance-Rejection technique

 Particularly useful when the inverse cdf does not exist in closed form.
 To generate random variables X ~ U(1/4, 1):

Step 1. Generate R ~ U[0,1]
Step 2a. If R ≥ 1/4, accept X = R
Step 2b. If R < 1/4, reject R and go back to Step 1

Flowchart: generate R → test the condition R ≥ 1/4 → if yes, output R; if no, generate a new R

 This procedure applies to uniform distributions
 The acceptance-rejection technique is also applied to the Poisson, nonstationary Poisson, and gamma distributions
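The steps above can be sketched in Python (the function name and the injectable `rng` parameter, used to make the test deterministic, are my own):

```python
import random

def accept_reject_uniform(low, rng=random.random):
    """Sample X ~ U(low, 1): draw R ~ U[0,1]; accept if R >= low, otherwise redraw."""
    while True:
        r = rng()
        if r >= low:
            return r
```

For example, feeding the draws 0.10 and 0.62 in order returns 0.62, because 0.10 is rejected at Step 2b.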
----- HW 5 CHAP 2 -----
