
Copyright © 2022 by Grzegorz Baran.

All rights reserved.


Edit: September 2022 Ver: 2

Thank you for your support


Table of Contents

Color calibration – main steps
Color theory
What is the color?
Color perception for different species?
What is the Luminance?
Color calibration in practice
White Balance
How to setup White Balance
How to use Color Checker
Error in my approach for cross-polarisation calibration
Creating DNG Profiles
Color Spectrometer
PBR Color Reference List
Linear vs Gamma color space
PBR Color Reference List use in practice
Say No to Piracy
Color calibration – main steps
To make sure that the color of the captured subject matches the color of its reconstructed digital version, we need to calibrate it. There are a few steps I follow to calibrate the color.

1st – Setting Custom White Balance:

Before I start any capture in a new location with new lighting conditions, I usually set the correct Custom White Balance in my camera. The details depend on the camera model, but usually we need to take an image of a grey card with a neutral grey color value and set the camera to use this image in Custom White Balance mode.

Img. 1. Custom White Balance setup screen for a Canon R camera

The image we use for this purpose shouldn't be overexposed and should be captured in the lighting environment where our capture is going to happen.

2nd – Color Reference Capture:

With the camera fully set for the capture, I take at least one image with the X-Rite Color Checker Passport in the middle of it, so I can use it later as a reliable color and light reference. Usually this is the first or the last image in the entire series of images taken for reconstruction. This image also makes it easier to visually separate different image series within the same folder later.
Img. 2. Folder preview with images from the beginning of the capture, with the X-Rite Color Checker captured as a color reference

3rd – Color photo-editing:

I load all images for the entire series into the photo-editing application (usually DxO PhotoLab) and proceed with some photo-editing tweaks.

Essentially, I start with the White Balance color setup and re-apply it to all images in the series for a given capture. I set it by selecting the neutral grey color chip on the captured color reference with the White Balance color picker.

I apply this White Balance setting to the entire series of images for a given capture. This is usually enough to get correct color values. If for some reason it isn't enough and I get some artifacts, we can consider DNG Color Profiling, but so far that has never been the case for me.

4th – Luminance level balance:

With the proper color setup, we need to set the correct luminance level. This is another case where the X-Rite Color Checker comes in handy. To set the correct luminance level, I use the part of the Color Checker which contains 'steps of neutral grey'. Since we know the exact luminance value of each grading step, I play with 'tone curves', 'selective tone', 'gamma' and everything else which helps to bring the luminance distribution to the expected levels.
Img. 3. Part of the X-Rite Color Checker Passport which contains the set of 'steps of neutral grey' that can be used as a reference for color luminance

At this stage I also apply a few more photo-editing tweaks like black-level adjustments, sharpness tweaks, vignette removal, denoising, chromatic aberration fixes, luminance levelling and dynamic range redistribution. With all that done, I export all tweaked images to use them as the source for the actual photogrammetry reconstruction.
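The grey-step balancing above can be sketched numerically. This is a minimal, hypothetical illustration (not what DxO PhotoLab actually does): given the known luminance of each neutral grey step and the luminance measured from the image, we fit a single gamma correction that pulls the measured distribution towards the expected levels.

```python
import math

def estimate_gamma(measured, expected):
    """Least-squares fit of `expected = measured ** g` in log space.
    Both lists hold grey-step luminances as floats in (0, 1]."""
    num = sum(math.log(m) * math.log(e) for m, e in zip(measured, expected))
    den = sum(math.log(m) ** 2 for m in measured)
    return num / den

def correct(value, g):
    """Apply the fitted gamma to a single luminance value."""
    return value ** g

# Hypothetical grey-step luminances: expected (from the checker's spec)
# vs. measured (from the photo, here simulated with a 2.2 gamma shift).
expected = [0.90, 0.59, 0.36, 0.19, 0.09]
measured = [e ** (1 / 2.2) for e in expected]
g = estimate_gamma(measured, expected)
```

A real tone-curve adjustment is far more flexible than a single gamma, but the principle is the same: anchor the curve on patches whose luminance is known in advance.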

5th – PBR Color Reference List comparison:

After the reconstruction is over, I compare the final albedo with values from the PBR Color Reference List and re-tweak it if the reconstructed values are for any reason out of the acceptable range. At this stage I mostly tweak the luminance value, but I also readjust the color if needed. As no color is ever 100% solid and differs even within the same substance, I consider values within a +/- 10% range of those from the PBR Color Reference List as totally fine. If they are not, I look for the reason, and if there is none, I bring them into the measured range.
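The +/- 10% acceptance check is easy to automate. A small sketch with hypothetical albedo values; colors are RGB floats in the 0-1 range, and the tolerance is taken relative to the reference value:

```python
def within_tolerance(measured, reference, tolerance=0.10):
    """True when every channel of `measured` sits within
    +/- tolerance (relative) of the matching reference channel."""
    return all(abs(m - r) <= tolerance * r
               for m, r in zip(measured, reference))

# Hypothetical reconstructed albedo vs. a PBR Color Reference List entry.
reference = (0.52, 0.41, 0.31)
ok = within_tolerance((0.50, 0.40, 0.30), reference)          # inside the range
too_bright = within_tolerance((0.70, 0.40, 0.30), reference)  # red channel way off
```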

Sometimes, to make sure that the value I captured is correct, I also measure the color of the actual substance with the Color Spectrometer and check the final reconstructed result against the measured values. But the bigger and more comprehensive the PBR Color Reference List I use is, the easier it is to skip this step.
Img. 4. Examples of measurements taken with the color spectrometer for further color calibration

6th – PBR comparison test:

There is one more step worth knowing for color calibration: comparing the HDRI background with the final material. This step can be done in any 3D application or engine of our choice which supports real-time HDRI light preview. In this step we need to apply the material to any 3D object in a scene with the HDRI environment used for scene illumination visible, and simply use our visual judgment to make sure that the material doesn't stand out in any way and matches the scene parameters, and therefore looks natural in the scene.

Img. 5. Marmoset Toolbag 4 used for HDRI comparison calibration step.


For this purpose, we need to be sure that the HDRI map we use has a proper light distribution and that the material we use has a full basic setup. If we are still not sure, we can swap between different HDRI maps. A well-made material should always match and look natural in every HDRI environment and shouldn't be too bright or too dark.

And that’s it!

Unfortunately, in practice it is not that simple, and each case differs slightly. No substance in nature is purely uniform, and its color depends on the exact surface type, as each responds to light in a slightly different way. To get the most out of color calibration, it is really worth understanding the actual concept of color: what it really is and how it behaves depending on the surface it illuminates. Once we do, calibration should be much easier and make more sense.

No substance in nature is purely uniform from the color point of view

Color theory
From my experience I can say that it is simply impossible to measure and mimic the color of any substance which exists in nature with 100% accuracy. It's impossible because nothing in nature is perfectly solid and uniform; even if we used super-sophisticated laser equipment and compared measurements from two spots just 0.01 mm away from each other, the results would still slightly differ. Of course, this doesn't mean that we shouldn't try to get close enough, and I think this is the main purpose of color calibration: to set the color value of the reconstructed data so close to its original that we can consider it close enough.

The color of our surroundings is really very easy to misinterpret without any reference next to it. If we take a sheet of white paper, for example, it will probably look white to us even if it is coloured by light bounced from surrounding elements. How relative our mind's perception really is becomes exposed when we compare what we see with actual color reference samples. I made a special image just to show how relative color can be depending on the context in which it is seen. Please take a look at these two boxes and say which one is brighter and which one is darker.
Img. 6. Relativity of perception

Next, please use your finger to cover the middle line. As you can see, both boxes have exactly the same grey color value. Neither of them is brighter or darker than the other.

This happens because our mind expects a white paper sheet to be white. Snow looks white to us in the same way, even if it is coloured by the blue sky above. The same applies to cameras, as they usually just guess the color. And as long as we don't have any actual color reference we can refer to, it is totally normal.
Img. 7. Common outdoor light pollution.

A photogrammetry reconstruction can only be as alike to its original subject as the captured data is. This is why we need to make sure that the image data we use for reconstruction reflects the original state as accurately as possible. The main elements we capture for photogrammetry are the spatial (height) and color information. During photogrammetry reconstruction, the spatial information is transformed into a digital 3-dimensional group of points. The color information is used to colorize these points and reproduce the color of the original subject's surface.

Any color we see is always more or less polluted by surrounding lighting environment

Unfortunately, due to the camera's constraints and the pollution caused by surrounding light, it's not so easy to capture true color. All surfaces, especially when captured outdoors, are affected by many light-related factors. They are often coloured by a blue tint which comes from the blue sky. They get a yellow shade coming from the sun. Or they simply get coloured by light bounced from other surrounding elements, which adds even more color variety and inconsistency. Not to mention that what we and the camera see are surfaces which contain visual components that are not part of the original color information, like shadows, environmental reflections, specular, glare, emission or transparency.

Here is an example of a rocky beach surface before and after it was stripped of any additional environmental information so we can see its pure color. The top left part of the image shows the rocky surface affected by the usual environmental light factors. The blue tint comes from the blue sky. Since the surface is shiny and more reflective in polished or wet parts, it also reflects some of the sky. Of course, it contains some ambient shadow as well as some direct shadow cast by the sun. In real life, all that environmental information is useful for identifying basic surface features. Shadow allows us to perceive overall surface depth and roughness and makes the silhouette easier to read. Specular, glare and reflections tell us how slick the surface might be if we step on it. All these visual elements are natural for us, but they are also usually very environment-related and, for photogrammetry, considered light pollution. When captured for reconstruction, they usually bring some surface noise, cause glitches and result in reconstruction inaccuracy.

Img. 8. Rocky surface before and after it was stripped of environmental information

To make sure the surface information reflects the original color, we need to strip out all that additional light pollution and focus on the pure albedo if possible.

Img. 9. An example of white bounced light affecting surrounding elements of different color

The level of each factor affecting the original surface information depends on the actual surface type and how exactly it responds to light. There are really many different surface types, and they react to light in totally different ways depending on their physical and chemical structure. Some of them are reflective and reflect most of the environment, some are transparent and some are matte.

Img. 10. Examples of light response for different substance types.

What we perceive as color is the final, averaged state of a complex process of light response after the light interacts with different surface types. In reality things get even more complex, as in nature no substance is ever pure, and they are often formed in structures of varying density. Even surfaces which seem to be made of a single substance type are really made of many micro elements, each of which responds to light in a different way.

Img. 11. Example of fabric color seen from a distance and its actual color complexity when magnified

If we use a fresh tangerine as an example and take a look at the pulp's surface color, we will say that it's orange. The truth is that we don't actually see just the surface, since it's half transparent; we also perceive what is behind it. So, together with the thin skin we also see all the light scattered through the water the tangerine is made of, and even if the actual pulp's skin color is kind of greyish, we clearly see that this tangerine's pulp appears orange to us.

So, what is the actual color of a tangerine's pulp? In reality, since it is significantly more transparent than most dielectrics, its perceivable color is much trickier to measure and define. To reproduce such an object in CG we would need to simulate far more than just simple standard light reflection and engage quite sophisticated rendering and shading systems. Since these types of simulations can be very complex and very time- and resource-consuming, we usually simplify them by faking many of these factors: instead of setting the actual measured greyish skin color and defining the density, scattering, refraction and color of the substance which takes part in the light response, we usually simply set this surface to be orange and add a simpler, half-emissive-like effect which fakes the subsurface scattering for such substances.

Below is an image of a measurement taken with the color spectrometer. The color spectrometer illuminated the surface to measure the bounced value, but as you can see, some of this light passed through the surface deeper into the pulp structure and illuminated it from behind. Due to the huge light loss in the subsurface process, the luminance of the returned scattered light per surface point is much smaller than that of the bounced light, and therefore this color value appears less significant for the measurement.

Img. 12. Tangerine's pulp color measurement with Color Spectrometer

Here is the image of the exact measured values for the tangerine's skin, inner peel and actual pulp. Of course, for the reasons we discussed, the pulp value is totally different from what we perceive with the naked eye, and the tangerine would look incorrect if we used that value without adding an entire light-behaviour simulation to supply the missing, unmeasured color component of the final color.

Img. 13. Albedo values of tangerine measured with color spectrometer

So even if we strip the color information of shadows, specular and reflections, it can sometimes still be pretty tricky to measure; therefore the measured value isn't always 100% correct and can be really far off for tricky (transparent, reflective, shiny, emissive) types of substances.
Color is relative and we need a reliable color reference to measure, compare and calibrate it

This is why we cannot rely purely on captured color information; we need to additionally calibrate it during the PBR reconstruction process.

There are two things we need to calibrate to make sure the color we capture is accurate to reality: the color itself, usually represented by three values (red, green and blue), and the surface brightness. Due to the actual surface complexity and variety, there is no single tool which does the full job for us. Depending on the surface type, we need to consider utilising different measurement methods together and comparing the results from each to get the most reliable outcome. To do it right, we should really understand what color really is and how the camera captures it.

What is the color?


In plain words, color is a frequency of light interpreted by our minds. Color itself isn't a real thing at all and is very abstract. Different species perceive and interpret it in many different ways.

The Sun essentially emits all colors mixed together, including those we can't see because they are out of the human perception range. All the visible colors combined appear to our eyes as white. Here is an image of light split as it passed through a prism. This is possible because each frequency passes through different substances at a different speed.

Img. 14. Sunlight split into rainbow colors when passing through a glass prism

Usually when light passes through different substances, some of it gets absorbed while some gets bounced back. Our mind interprets the frequency of what is left as a certain color.
Color isn't real. It's just an interpretation of wavelength frequency in the electromagnetic field

So basically, when any surface is hit by light, it reflects some of it in the form of specularity, scatters some by randomising its direction, and absorbs some, turning it into heat and decreasing the perceived light intensity. The mix of the remaining reflected wavelengths is what we see as the color of that surface.

To better understand what really happens with light, we need to take a look at the subatomic level and understand what happens when light hits atoms. Everything around us is made of molecules. Molecules are atoms connected together into molecular structures. Atoms contain electrons. Electrons have a tendency to vibrate at specific frequencies, and these frequencies differ per atom and molecule type. When a light wave with that same frequency impinges upon an atom, the electrons of that atom are set into vibrational motion. If a light wave of a given frequency strikes a material with electrons having the same vibrational frequencies, those electrons absorb the energy of the light wave and transform it into vibrational motion. During this vibration, the electrons interact with neighbouring atoms in such a manner as to convert the vibrational energy into thermal energy, or the energy gets re-emitted in the form of a totally new frequency (for example ultraviolet-based re-emission). Here is an example of a fabric material composed of 40% viscose, 22% polyester, 20% cotton and 18% linen. The left part of the image shows how it looks when lit with standard white light: it absorbs some wavelengths and bounces others, so it looks yellowish to the human eye. The right part of the image shows the same part of the same surface lit with a light frequency invisible to the human eye (radiation at 365nm), which causes certain atoms to re-emit some of the energy back in the form of the spectrum visible to us (400-700nm), appearing to the human eye as an emissive glow.

Img. 15. Fabric material composed of viscose, polyester, cotton and linen lit with white 400-700nm light (left) and 365nm light (right)

Any light wavelength that gets absorbed by an object is never again released in the form of its original light frequency.
Img. 16. Light response for non-metal (dielectric) surfaces.

Reflection and transmission of light waves occur because the frequencies of the light waves do not match the natural vibration frequencies of the atoms of the objects they strike. When light waves of these frequencies strike an object, the electrons in the atoms of the object begin vibrating. But instead of vibrating in resonance at a large amplitude, the electrons vibrate for brief periods of time with small amplitudes and then the energy is re-emitted/reflected as a light wave. If the object is transparent, the vibrations of the electrons are passed on to neighbouring atoms through the bulk of the material and re-emitted on the opposite side of the object. Such frequencies of light waves are said to be transmitted. If the object is opaque, the vibrations of the electrons are not passed from atom to atom through the bulk of the material. Rather, the electrons of atoms on the material's surface vibrate for short periods of time and then re-emit the energy as a reflected light wave. Such frequencies of light are said to be reflected. A solid material will appear transparent if there are no processes competing with transmission, either by absorbing the light or by scattering it in other directions.

Dielectric (non-metal) materials are all transparent to some degree. This means they let light pass through them, while some of it is absorbed, diffracted and scattered in the process. The denser and thicker a certain substance is, the more light gets absorbed when passing through, and the less transparent the material appears to human eyes.
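The "denser and thicker means less transparent" relation is commonly modelled with exponential attenuation (the Beer-Lambert law). A rough sketch with a made-up absorption coefficient, just to show the shape of the falloff:

```python
import math

def transmitted_fraction(thickness_mm, absorption_per_mm):
    """Fraction of light remaining after travelling `thickness_mm`
    through a medium that absorbs `absorption_per_mm` per millimetre
    (Beer-Lambert exponential attenuation)."""
    return math.exp(-absorption_per_mm * thickness_mm)

# Doubling the thickness squares the transmitted fraction, which is
# why a thin sheet lets light glow through while a thick slab looks opaque.
thin = transmitted_fraction(1.0, 0.5)
thick = transmitted_fraction(2.0, 0.5)
```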
Img. 17. Light transmission when passing through fibre-structured substance

It is worth understanding that light isn't some kind of particle which travels through space. It's a wave of energy which pushes things and interacts with them in a way similar to water waves. Just like water waves, radiation waves have different amplitudes, lengths and energies. Their vibration is as natural as the vibration of water, and every single object in the universe, even our own bodies, emits some of that energy at a certain level.

Light isn’t a particle, it’s a wave which travels through electromagnetic field
Img. 18. Waves of different frequencies traveling through water medium.

Unfortunately, the human eye can see just a very small slice of it. The light visible to the human eye covers barely the waves between 400nm and 700nm in length; according to nature, that is enough for us as human beings to survive on this planet, and we don't really need to see more.

Img. 19. Wavelength frequency reference chart


There are many different waves intersecting each other in water at the same time. The main difference between water waves and light waves is the medium: while water waves travel through water, light travels through the electromagnetic field. Just as there is no significant water movement when a wave travels through it, there is also no electromagnetic field movement following the wave travelling through it.

Even though most elements in the periodic table are metals, most of our surroundings are made of non-metal (dielectric) elements. Metals and non-metals respond to light in slightly different ways.

Img. 20. Red mohair seen in different magnification levels - field of view 100mm, 5mm and 1mm.

Non-metals (dielectrics) are more or less transparent and transmit light through them, absorbing or re-radiating some in the process; metals, unlike non-metals, don't transmit light but only absorb and reflect part of it instead. In practice, if we take a thin sheet of metal and a thin sheet of non-metal and put a source of light behind them, no light will pass through the metal surface, while some light will pass through the non-metal one.

All non-metals are always more or less transparent for visible light
Img. 21. Light response for metal surfaces

Even rocks and stones are transparent to some degree; if we use a strong light source and a thick stone, we should be able to see some light passing through it.

Img. 22. Example of the transparency of white dolomite gravel. In the middle, lit with ambient light; on the right, illuminated with a strong light from behind.

This transparency on a micro level is even more obvious when we look closely at sand, which is basically a bunch of small rocks of different types mixed together.
Img. 23. Sand seen in different magnification levels - field of view 100mm, 5mm and 1mm.

Img. 24. A piece of standard white sheet of paper seen in different magnification levels - field of view 100mm, 5mm and
1mm

To compare, this is a metal surface captured from a 12-sided 1 Pound coin. It is made of two metal rings, both formed from alloys (mixtures of metals). The outer ring is made of nickel-brass, a combination of nickel (4%), copper (76%) and zinc (20%), while the inner ring is described simply as a 'nickel-plated alloy'.

Img. 25. A 1 Pound coin made of two metal alloys - seen in different magnification levels

Because metals don't transmit light but reflect it, it is hard to measure their actual color. What we really see when looking at a metal surface is a reflection coloured by the absorbed wavelengths. It is a bit like trying to estimate the color of a mirror by looking into it.
Metals aren’t transparent at all for visible light

The human eye is quite simple and has just 3 types of light sensors: one which sees longer wavelengths, a second which reads medium wavelengths and a third which sees short ones. All of these overlap part of their range a bit. These sensors are called cones and correspond to the exact colors our mind perceives. Basically, if the light is perceived by a cone which sees long wavelengths, the mind translates it to RED; green comes from the medium-wavelength cone and blue from the short one. This is where the idea of RGB colors comes from. When the wavelengths overlap and are read by different sensors/cones, our mind interprets them as in-between colors, and this is why we see, for example, yellows. The human eye isn't equipped with dedicated sensors which read these light frequencies.

Img. 26. An example of a red strawberry which looks red because it absorbs all colors except red

Of course, in reality a strawberry has a much more complex light response than the one in this picture, but I hope you get the point.

Img. 27. Red strawberry seen in different magnification levels - field of view 100mm, 5mm and 1mm
This mix-based way of color perception is called the additive RGB model and is commonly used around us. If we zoom in to see the LED screen lights, we won't see any yellow, just red, blue and green. Any LED screen simulates different colors by using this additive RGB function and mixing these 3 basic colors together. The color 'mixing' happens in our vision system's neurological processing.
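The additive model is easy to demonstrate in a few lines of code. A toy sketch: light adds up channel-wise, so red plus green light reads as yellow, and all three primaries at full intensity read as white:

```python
def mix_additive(*colors):
    """Additive RGB mixing: sum each channel and clip at 1.0,
    the way overlapping lights (or LED sub-pixels) combine."""
    return tuple(min(1.0, sum(color[i] for color in colors)) for i in range(3))

RED, GREEN, BLUE = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)

yellow = mix_additive(RED, GREEN)       # red + green light appears yellow
white = mix_additive(RED, GREEN, BLUE)  # all three primaries appear white
```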

Img. 28. Mobile phone’s AMOLED screen seen in different magnification levels.

Because color is just a light frequency, even if our minds can interpret it differently, from the physical point of view it can be measured quite accurately with special devices called color spectrometers.

Img. 29. A color measurement of sand taken with the color spectrometer

The simplified RGB model is usually presented with 256 values per color channel. So we get 256 brightness variations of red, 256 of green and 256 of blue. 0 represents the darkest variation of a given color, and with all channels at 0 we get pure black, while 255 represents the brightest variation, and with all channels at 255 we get pure white. Everything between these values is a transition from black to white. Because 0-255 (8 bits) per channel gives us just 256 options per color channel, in practice this is very limited and inaccurate grading. This is why we often use a much higher color depth instead (16 bits or even 32 bits) and present it as float numbers in the 0 to 1 range. In this approach, 0 means the darkest, black value, while 1 means the brightest, white one. The 8-bit approach helps to keep the numbers down in color management, especially since a typical LCD screen works in 8-bit mode and isn't able to present more values anyway.
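The two representations convert trivially between each other. A quick sketch of the 8-bit to float mapping described above:

```python
def to_float(rgb8):
    """Map 8-bit channel values (0-255) to floats in the 0-1 range."""
    return tuple(channel / 255 for channel in rgb8)

def to_8bit(rgb_float):
    """Map floats in 0-1 back to 8-bit, rounding to the nearest step."""
    return tuple(round(channel * 255) for channel in rgb_float)

mid_grey = to_float((128, 128, 128))  # roughly (0.502, 0.502, 0.502)
```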

It is good to know that there is another color model, called CMYK. This one is called a subtractive model. In this color model we use a different phenomenon, where cyan, magenta, yellow and black remove certain wavelengths of light, reflecting back only the narrow range associated with each. This color model is used mostly for physical paper printing, while the RGB model is used to present images on digital screens.
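A naive RGB-to-CMYK conversion shows the subtractive idea: the shared darkness goes into the K (black) channel, and the remaining cyan, magenta and yellow describe what each ink must subtract from white paper. This is only the textbook formula; real print workflows use ICC color profiles instead:

```python
def rgb_to_cmyk(r, g, b):
    """Textbook RGB (floats 0-1) to CMYK (floats 0-1) conversion."""
    k = 1.0 - max(r, g, b)           # shared darkness -> black ink
    if k == 1.0:
        return (0.0, 0.0, 0.0, 1.0)  # pure black: only the K channel
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return (c, m, y, k)
```

For example, pure red light corresponds to full magenta plus full yellow ink, since those two together subtract everything except red from white paper.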

Color perception for different species?


Perception is mostly neurological in nature and is not absolute. Every species perceives and interprets color in its own way. Also, the wavelengths perceived by the human eye don't cover the entire visual spectrum and are actually quite limited.

This is how other species might see the world compared to humans. Animals usually see fewer colors but across a larger spectrum than humans; their perception range is richer by some of the ultraviolet and infrared information invisible to us. They also have much better senses than humans, so they kind of see a bit out of time: by perceiving different scents, they can see traces of things which have already happened. I marked these with reddish smudges in the images below.

Img. 30. Difference in color perception between humans, dogs, cats and birds (theoretical).

To be honest, our planet isn't really blue. We see it as blue because our atmosphere scatters blue light when sunlight passes through it. But in reality it scatters ultraviolet even more, so the Earth is actually much more ultraviolet than blue. We just can't see it, as ultraviolet is out of our perception range. This way birds, which see quite a large chunk of the ultraviolet spectrum, might even perceive wind flows, as due to the scattering, the ultraviolet atmosphere would look denser to them.

And this is why any human color interpretation when we deal with color setup is so imperfect and inaccurate. It depends on our natural limitations and on environmental influence, but also on the computer screen we use and its physical limitations. The only way to get a correct color setting is to measure it and use the measured value as a reference. The easiest way to do this is a measurement made by comparison to already-measured color values, using those measured numbers instead of our eyes.
What is the Luminance?
Luminance is a linear measure of light, spectrally weighted for human vision but not perceptually weighted in terms of lightness to darkness.

The human eye has three types of cones that are sensitive to red, green and blue light.

But our spectral sensitivity is not uniform: the eye is most sensitive to green and perceives it as the brightest color, red is in the middle, while blue is the darkest one.

This is why, to turn any color into a grey-scale value, we cannot just sum the red, green and blue values and divide by 3 to get an average; we need to weight each color respectively.

So, relative to the total sRGB white, green makes up 71% of the total luminance measure, red is second at 21%, and blue is a distant third at 7% of the luminance.

So, to get the exact luminance level of each perceived color, given the different perception accuracy for the luminance value, we have to weight each color with its corresponding cone value: 0.2126 for RED, 0.7152 for GREEN and 0.0722 for BLUE.

Next, to bring 8-bit RGB (0-255) numbers into float values, we need to divide each color by 255. The final equation which gives us the exact grey-scale luminance value for an 8-bit RGB color looks like this:

luminance = 0.2126*(R/255) + 0.7152*(G/255) + 0.0722*(B/255)

And this is the equation I used while setting up the Relative Luminance value for the Reference Color List.
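The equation above translates directly into code; the weights are the ones given in the text, not an assumption:

```python
def relative_luminance(r, g, b):
    """Grey-scale luminance of an 8-bit RGB color using the cone
    weights given above (0.2126 R, 0.7152 G, 0.0722 B)."""
    return 0.2126 * (r / 255) + 0.7152 * (g / 255) + 0.0722 * (b / 255)

# Pure green reads much brighter than pure blue at the same 8-bit value.
green_l = relative_luminance(0, 255, 0)  # 0.7152
blue_l = relative_luminance(0, 0, 255)   # 0.0722
```

Note that, strictly speaking, sRGB values would be linearized (gamma-decoded) before weighting; the plain weighted sum above is the simplified form the text uses.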

Color calibration in practice


The purpose of color calibration is to make sure that the color value we reconstruct represents the original value of the color we capture. We do this by comparing the captured color with reference color samples and tweaking them until they match. There are two values we need to compare: the actual color (with its saturation) and the actual surface brightness, called luminance.

White Balance
The easiest way to start color calibration is to use the calibration feature built into most cameras. Different cameras interpret color differently, not to mention that different lenses also have different color characteristics. This is where the White Balance setting comes in handy. We use it by taking an image of a neutral white/grey surface and telling the camera to use this image as a color reference. This way the camera shifts all colors so they match the actual physical color reference.
A color reference used for camera calibration is called a WHITE or GREY CARD. This card isn't, and should never be, pure white, as we cannot White Balance anything if any of the RGB values sit at the maximum limit with no tonal information. It is much easier to expose with a grey card so that the values are all very light, but not so light that they lose all tone.
Img. 31. X-Rite Color Checker white/grey card we use to setup Custom White Balance

We need to capture a color reference or set a new Custom White Balance every time we move to different lighting conditions.

If there is a blue tint coming from the sky, colorising the white/grey card, the camera, which knows that this card should be neutral grey, shifts its settings so the blue tint is gone and the captured card looks neutral white/grey. Of course, the shift is applied to the whole of every image taken with this Custom White Balance setting.

While it’s good practice to set a proper White Balance before the capture, we don’t actually need to. White Balance can be set later in any photo-editing application, as long as we have a color reference with neutral white/grey captured in at least one image.

As said, to calibrate color we need any captured reference object with a neutral white or grey color. Probably the best and most popular color reference for photography is an X-Rite Color Checker. Its color patches were specifically designed to give the most neutral and useful light response possible: they aren’t over-reflective and don’t produce glare or specular highlights when illuminated. The color of each patch is measured in a laboratory, replicable and guaranteed. At least for 2 years, because every surface, the X-Rite Color Checker included, fades over time when exposed to external factors, and X-Rite guarantees its accuracy for 2 years only.

In practice, we can also use as a reference any other object we trust to be neutral white or grey. It can be a matt sheet of white paper, a white shoelace or even clean shoe rubber. Even background elements like a white wall, a white bench or a white street sign should do.
Img. 32. Examples of elements that can be used as a color reference of last resort

Just bear in mind that not everything we believe is white is actually white. If we pick a substance which isn’t white and tell the camera or photo-editing app that it is, it will adjust all the data accordingly, using this info as a reference; if the reference wasn’t correct, everything gets shifted the wrong way. Here are some examples of different surface types which appear white to us, with their actually measured values. In the middle I put a square of pure white color for comparison.

Use any environmental element you believe represents white if you didn’t capture a grey card
Img. 33. Comparison of surfaces which appears white with their actual measured color values

It’s also worth keeping in mind that these substances differ depending on what they are actually made of. Different substances have different reflectance levels and respond to the environment in slightly different ways. Fabric is matt, while snow is half transparent. Porcelain is reflective and catches glare. Not to mention that different types of even the same substance have slightly different colors when compared, and salt of one brand will probably differ from another. This is why it is important to make sure the tool we use as a color reference is as reliable as possible, because once we leave the capture scene, change the camera’s white balance setting, or even once the weather and environmental conditions change, we won’t be able to recapture the color reference anymore.

To get a better understanding of how tricky this judgement might be, and how imperfect the camera’s judgement is without a real reference, I compared two images.
The background is an image of the White Balance Page (RGB 201,201,200) used later to set the Custom White Balance. It was used to inform the camera: 'hey, this is grey!'. On the image it reads RGB 146,134,125.
That is a significant difference in values, but thanks to that info the camera was able to automatically adjust its shooting settings and get more accurate values.
The camera can be really far from the truth. Here is another example of the difference between Auto White Balance and a Custom one:

This is why I strongly advise setting a Custom White Balance for each capture series. It is also good to know that it makes a very big difference whether we save images as JPEG or RAW. Basically, every single tweak and correction made on a JPEG affects the image quality, while changes made on a RAW file are lossless. The reason is that JPG stores a very limited amount of color information (8 bits per channel, 0 to 255) while RAW files are much denser and can usually store 14 bits or more. In practice, when we store our images as RAW, we can use the information from the Color Checker anytime later and tweak the White Balance setting without any quality sacrifice.
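The difference in tonal resolution between the two formats is easy to quantify; a quick back-of-the-envelope check in Python:

```python
# Tonal levels per channel: JPEG stores 8 bits, a typical RAW file 14 bits
jpeg_levels = 2 ** 8   # 256 steps
raw_levels = 2 ** 14   # 16384 steps

# A 14-bit RAW holds 64x more tonal steps per channel than an 8-bit JPEG,
# which is why repeated corrections degrade a JPEG far more visibly.
print(raw_levels // jpeg_levels)  # 64
```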

Our world is really rather brown and just illuminated by blue


And to be honest, after I started my color measurements, I realized that the world around us is really very brown. I believe it’s because everything that surrounds us is made from dust, so more or less everything is dusty.

Img. 34. Capturing X-Rite Color Checker Passport as a color reference for PBR Color Reference List

How to setup White Balance


Each camera has a slightly different way to set the Custom White Balance, but all follow the same principles. To set a Custom White Balance in any camera, we usually need to capture one image with the Grey Card in it, and set the camera to use the information from this image to set the Custom White Balance levels:

Img. 35. Canon R - Setting Custom White Balance with X-Rite WB Card
After it’s done, every image captured after the Custom White Balance was set will be adjusted accordingly.

We can also set up White Balance later in a photo editing application. For this we still need a captured color reference. It can be a grey card as above, a color checker, or anything containing a substance we believe is neutral grey.

How to use Color Checker


When we plan to use a Color Checker for image calibration, it should comprise at least 10% of a 10-megapixel image, which is about 1000 by 1000 pixels of actual digital image resolution. It may be rotated within the image, but it should be placed parallel to the plane of the lens and, if possible, in the center of the image frame. The lighting of the color reference should match the lighting of the subject of the image, and it should be lit evenly, without any strong shadows or reflections on it. It always has to be properly exposed and in focus. When over- or underexposed, we lose part of the information needed for proper color and luminance referencing.

Img. 36. X-Rite Color Checker clips we can use to set up White Balance. 1st option is the first choice but 2nd is also possible

To set a Custom White Balance you need to use a grey-calibrated surface (the one I use from the color checker is exactly RGB 201,201,200) and select the neutral grey area on the image with the White Balance color picker. I marked the patches with the best neutral grey values for White Balance setup with the number ‘1’ on the image of the color checker.
The surface should be illuminated with exactly the same light as our target. We basically use it to capture actual information about light pollution and light conditions. When we want to capture a ground surface, the Color Checker should be put on the ground. When we want to capture a wall, we should place the color checker parallel to it. When the subject surface is hidden in shadow, the grey card should be hidden in the same shadow.
As we know the exact RGB value of the grey card, it is easy to read which way the color shifts. When we take an image and instead of RGB 201,201,200 we get RGB 201,201,240, we know that the third, blue, value should be decreased to bring it down to a neutral level. White balance can be set, changed or modified anytime later when the image is stored as a RAW file.
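The correction itself boils down to a per-channel rescale toward the known card value. A simplified Python sketch of the principle, using the captured card values from the example above (real raw converters do this in the camera's linear space, so treat it only as an illustration):

```python
def white_balance_gains(captured_card, reference_card=(201, 201, 200)):
    """Per-channel gains that map the captured grey-card color
    back to its known reference value."""
    return tuple(ref / cap for ref, cap in zip(reference_card, captured_card))

def apply_gains(pixel, gains):
    """Apply the gains to one 8-bit RGB pixel, clamping to 255."""
    return tuple(min(255, round(c * g)) for c, g in zip(pixel, gains))

# Card captured with a blue cast, as in the example above:
gains = white_balance_gains((201, 201, 240))
print(apply_gains((201, 201, 240), gains))  # the card itself returns to (201, 201, 200)
```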

Color picker and histogram are very powerful tools

This setting gives us very high color accuracy, but unfortunately White Balance doesn’t cover brightness, and the luminance level needs to be adjusted separately.

The dynamic range is the highest overall contrast that can be found in an image. The human eye can differentiate contrast up to 10,000:1, which is a dynamic range of around 14EVs. But it goes even further, as the human eye can adapt to almost any lighting situation, stretching the total perceivable range to about 1,000,000,000:1. For example, the sun is a million times brighter than candlelight, but we can see perfectly well both walking on a sunny day and standing in a church. Such extreme contrasts cannot be seen at the same time, and this is why our eyes adapt to the intensity of light and cover just a small 14EV slice of it at a time.

And the camera works exactly the same way. Modern cameras can cover a maximum dynamic range of 5 to 8EVs, so roughly half of the human eye’s range. Just like the human eye, the camera has to adapt to get the most information within its available dynamic range.

When we shoot, the camera aims to set the exposure to utilise the entire dynamic range from 0-1 and put it in the middle of the range so nothing is cut off at the top or the bottom (over- and underexposure). If we shoot a black surface, it will appear grey. But if we shoot a white one, it will also appear grey.

To get a better understanding of camera behaviour during capture, I made a small experiment and printed some checkers with different grey coverage: white and grey, black and white, and grey and white:

Next, I took a photo of each with the camera set to autoexposure. The 1st checker was printed with a white (255) to grey (128) range, and this is how the camera interpreted it:
The 2nd one was printed as a set of black (0) and white (255) squares. And this is how the camera perceived it:

Squares on the third one were printed as middle grey (128) and white (255). And this is how the camera interpreted it.

As you can see, it might be hard to tell which values any of these 3 examples represent. The first one and the third one look almost the same. Even if the experiment isn’t perfect, since I used a standard paper sheet and a cheap ink printer, it still proves the point that the camera does its best to utilise its full available dynamic range regardless of the actual truth. This is why we need an actual brightness reference, as without it we just don’t know what brightness level was captured. And this is where the X-Rite Color Checker comes in handy. Everyone who captures materials knows how important a reference is in the later process. This reference approach should apply not just to color and luminance, but also to scale (rulers) and even environmental light reference (chrome and grey spheres) if needed.

The X-Rite Color Checker is a good reference for both color and luminance values, as each of its color patches was already measured accurately and we know the exact values they represent when illuminated with D65 lighting. D65 corresponds roughly to the average midday light in Western/Northern Europe (comprising both direct sunlight and the light diffused by a clear sky), hence it is also called the ‘daylight illuminant’.

The color patches also have spectral reflectances intended to mimic those of natural objects such as human skin, foliage and flowers, and to keep a consistent color appearance under a variety of lighting conditions, especially as detected by typical color photographic film.
Img. 37. X-Rite Color Checker Passport

By capturing the color checker in any lighting environment and analysing how it was affected by the surrounding light, we know how strongly it was affected. Since we know the exact color values of each patch in its neutral state, we can simply shift them back to their desired state, removing this way the entire color impact of the environmental light.

Img. 38. X-Rite Color Checker color group types


Color Checker contains 2 main reference parts – color and brightness references

Img. 39. Individual color values for each X-Rite Color Checker's color clip

It is worth mentioning that the values published by X-Rite for the Color Checker are non-linear. They are sRGB, which is good for us as long as we work in the sRGB color space.

At least this is how it works in theory. In practice everything was fine until I started capturing materials with a ring flash and cross polarisation to get rid of shadows. After a while I realised that the results differ and that the X-Rite Color Checker wasn’t designed with cross polarisation in mind. I noticed that light reflects from it differently when cross-polarised and when not.

Color checker wasn’t designed with Cross-Polarisation in mind

While the colors were still pretty accurate, I noticed that the distribution of luminance, measured by comparing the grey-scaled patches from the ‘steps of neutral grey’ area, differs a lot, so I decided to measure it to know how to tweak images later to get the true result.
Img. 40. Sketch chart of luminance distribution depending on lighting setup used

So, to measure it, I captured just the color checker using different lighting options so I could compare the results.

I took 4 images. All were taken in sRGB color space with Custom White Balance set.

1- Clean capture with auto exposure. No flash was used. This is a standard image capture without any polarisation filters.
2- Cross-polarised image with the flash and two polarisation filters, shot in Manual mode.
3- With the 2 filters mounted angled at 90 degrees to each other to emphasize glare reflections, useful for capturing surface reflectiveness and subtracting it from cross-polarised images to create reflection and specular maps.

4- And an image with just the direct flash light, without any polarisation filters involved.
Next, I compared them all together to measure how these changes affect the actual X-Rite Color Checker values. Then, for better visual comparison, I put the results next to each other as a series of combined patches, in the form presented below for each case.
And this is the first comparison, without any image post-processing:

And here is the comparison after I tweaked the luminance using the right-side greyscale values from the color checker.

Img. 41. RGB 52 patch used to set the shared bottom luminance value
Basically, I tweaked all images by pushing the blacks, as this is in my opinion a very good initial way to get rid of shadows and flatten colors in captured images. I also manually tweaked each image to make sure that the bottom, darkest initial luminance value, RGB 52, is the same for each one, as it makes the entire distribution comparison much easier. And here is the final result:

While the overall color values are fine, their saturation and luminance levels clearly differ. For better visibility, I plotted the luminance differences measured for each patch of the ‘neutral grey steps’ used for luminance measurements on a chart. This chart represents the grading distribution for the first case, with the originally captured, pure color values without any photo-editing tweaks:
And another chart, after I tweaked the initial RGB 52 value and applied the BLACKS removal I usually use to get rid of shadows:

It is also worth mentioning that the Color Checker values depend on the Color Checker’s position in the captured frame. I took a few shots with and without cross polarisation, without changing any camera or light settings. The Color Checker’s position was the only thing I changed:
Img. 42. X-Rite Color Checker Passport captured when positioned in different part of the image frame (top row - cross
polarised capture, bottom row – direct flash only without cross polarisation)

Next, to simplify the comparison, I measured just the BLACK (RGB 52,52,52) and WHITE (RGB 243,243,242) patches for each position of the passport and compared them together in the form of a chart:

With the flash used, any image is usually brighter in the center of the frame, as the light dims towards the sides, which is clearly reflected in the numbers. A Color Checker positioned in the center of the image is significantly brighter than when it’s placed near the frame edge. Surprisingly, the darker values aren’t affected as much when cross-polarised.

What is the best position for the color checker then? I would say that the position which is the most representative and makes the most sense is the center of the image frame. This way we capture the least distorted data. The center of the frame gives us the most representative and consistent light distribution across the color checker. It is also the area with the lowest barrel distortion and the weakest lens vignetting.

Summary points:
Captures with the flash, both with and without cross polarisation, give the most accurate color and luminance values and need less tweaking.

Cross polarisation cuts some information in the darker shades, so we need to be very careful not to get underexposed images.

Cross polarisation increases the saturation level, which needs to be decreased later during the photo editing stage.

Cross polarisation increases the contrast which can help to bring more details to the reconstruction.

For complex and deep substances, I recommend using the BLACKS slider in photo editing software to flatten the color and remove ambient occlusion shadows. It can be useful to flatten the color even when we use flash, for complex surfaces like rocks or gravel.

Next, we can proceed with luminance calibration


Recommended calibration treatment:
Set the correct luminance level with GAMMA and/or EXPOSURE using the set of grey-scale patches:
We should adjust values until the darkest BLACK value is around RGB 52, 52, 52 and WHITE around RGB 243, 243, 243.

If we want a really accurate result, we can also tweak the middle values with 'curves' to get an even better match, but as you can see on the charts, after the first step they should be almost fine.
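As a rough illustration of that first step, here is a simple linear levels remap that pins the measured black and white patches to the RGB 52 and 243 targets. The measured patch values in the example are hypothetical, and GAMMA/EXPOSURE tools in photo editors apply non-linear curves, so treat this only as a sketch of the idea:

```python
def remap_levels(value, measured_black, measured_white,
                 target_black=52, target_white=243):
    """Linearly remap a 0-255 value so the measured black/white
    patch values land on the target reference values."""
    t = (value - measured_black) / (measured_white - measured_black)
    return max(0, min(255, round(target_black + t * (target_white - target_black))))

# Hypothetical capture where the darkest patch read 38 and the white patch 226:
print(remap_levels(38, 38, 226))   # -> 52
print(remap_levels(226, 38, 226))  # -> 243
```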

Example of curve tweak for cross polarised capture:

or for the one taken with the FLASH only


This is the curve shape used to calibrate clean shot without any flash and polarisation

If the capture was cross-polarised, we need to adjust the saturation level using the color patch values as a reference. The saturation usually needs to be decreased by around 15-20% to get a proper level. Next, we can slightly tweak other values like micro contrast, remove vignetting, remove chromatic aberration etc. We should not tweak any image geometry (barrel distortion removal etc.) as the photogrammetry software is going to handle it better for us.
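A quick sketch of such a saturation reduction, using Python's standard colorsys module (the 15% default mirrors the range mentioned above; photo editors may compute saturation differently, so this is only an approximation of what their slider does):

```python
import colorsys

def desaturate(r, g, b, amount=0.15):
    """Reduce the saturation of an 8-bit RGB color by the given fraction."""
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    r2, g2, b2 = colorsys.hls_to_rgb(h, l, s * (1 - amount))
    return tuple(round(c * 255) for c in (r2, g2, b2))

# Example: an over-saturated brick-like color from a cross-polarised capture
print(desaturate(180, 90, 60))
```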

Next, when the calibration is finished, we can compare our albedo with the PBR Color Reference List, which is a chart of physically accurate, measured PBR color values, and see how our value stands within it.

Error in my approach for cross-polarisation calibration

After I released this book, I was informed by some folks that my approach to cross-polarisation might not be 100% correct.

This is because the X-Rite Color Checker isn’t fully dielectric: some metallic components were used to make its patches, so it reacts to cross-polarised light in a slightly different way than the surrounding non-metal, dielectric environment. As a result, I need to run a few more tests to research this part better and update this book afterwards.

Creating DNG Profiles


With an X-Rite Color Checker captured, we can also consider calibration through custom DNG profiles. This way we can get true and optimal color quite easily, as the photo editing application will analyse the image with the color checker in frame and make all the adjustments for us automatically.

Of course, this way we need to create a custom profile that is specific to the camera and the particular capture lighting conditions. In this case we don’t need to set any White Balance manually, as the application will do that for us. The image is calibrated with all the color patches taken into consideration. Technically this should be the most accurate calibration approach, but I consider it only in cases where I can clearly see that something is really off with the previous, White-Balance-based calibration.

It’s just worth knowing that the Color Checker should comprise at least 10% of a 10-megapixel image to be useful for the DNG profiler.
Img. 43. Interface of ColorChecker Camera Calibration application.

To create a DNG profile we need to save our image with the Color Checker in it as DNG. Next, we run the ColorChecker Passport desktop application and use this image to generate the DNG profile. With the profile ready, we can apply it in any photo editing application of our choice which supports DNG profiles. The ColorChecker Camera Calibration tool is free and can be downloaded from the X-Rite page.

Color Spectrometer
Visual color calibration based on the X-Rite color checker isn’t ideal, and we can never be sure of the color accuracy. The only way to know what the color really is, is to measure it with a tool which measures light frequency. The tool made for this exact purpose is called a Color Spectrometer. While in theory we can use even our mobile phone as a color spectrometer, in practice it’s very hard to get reliable results that way. Professional color spectrometers can be pretty expensive, and even then, due to the complexity of nature and of light response, there is still never a 100% guarantee that the measurements we take with them are correct, so they should always be taken with a grain of salt.
Img. 44. PCE-RGB2 Color Spectrometer

The color spectrometer I use is called PCE-RGB2 and it is designed to measure dielectric (non-metal) surfaces only. It illuminates the surface with 2 white LED lights at a 45-degree angle and measures the light which bounces back. This incidence angle takes most of the glare, specular and reflections out of the equation, and therefore gives pretty accurate results for non-metal surfaces.

Img. 45. Front of the measurement device with side LED lights and the inner pinhole used to capture color data.

As a result of a measurement, this color spectrometer outputs 6 values: one for red, one for green and one for blue, with 0-1023 precision each, plus HUE, saturation and luminance with 0.000 to 1.000 precision. The RGB values come in linear space, so to make them useful for computer displays we need to convert them into gamma space first.
PBR Color Reference List
Since there are no ultimate and absolute values for any type of material, as in nature nothing is ever perfect and each substance differs to some degree, any material, even a uniform one, still has many different shades and color variations when analysed thoroughly.

There is no ‘one and only’ color of anything, as everything varies.

Img. 46. Leaves color diversity within same plant type

There are also many different types of the same kind of substance across the globe, which differ when compared next to each other. The structure of sand captured on a British beach is going to differ from a Californian or an Australian one. In the same way, there is no ultimate green color for every leaf, as each is environment, age and life-experience dependent. There is no single stone color, as each stone differs on a micro level and was formed in a different way from different substances. There is not even an ultimate color for human skin, as some of us are pale, some tanned, some dark, some light, some hairier and some balder; the color of our skin even differs depending on which part of the body we take into consideration. But even though each thing we see differs, it still has its own color which fits into some defined color range.

And since color is just the frequency of a light wave, and even though it can be measured very accurately, in practice, when we deal with nature, we still need to simplify and average measurements to take into account all the color components any natural substance is usually made of. Even a uniform cracked earth surface
or a surface of plain concrete

can give us different readings if measured in different spots, as nature is way more complex than what we perceive. It doesn’t mean we cannot measure the color of ground or concrete. It just means that the color we measure should be considered as a reference only, and any value within a 10% color or luminance range from it should also be perceived as correct.

In practice, all these measurements should be treated as a guideline rather than ultimate truth, but even if not 100% accurate, they can still be very helpful and let us keep the overall color consistency of all the materials we capture.

To make my life easier and support X-Rite color checker based color calibration, I measured some key substances and grouped the results in the form of a special reference list.
Img. 47. Color measurement with color spectrometer

This way I built the PBR Color Reference List, which can be used to validate the albedos of my materials. Even if the material I want to validate doesn’t exist on the list, I can find something similar and use it as a reference.

Linear vs Gamma color space


A color space is a specific organization of colors. Any data we capture with our camera is stored within an available color space, although the pixel distribution within that space can be mathematically and physically correct, or can be adjusted to look right when presented on our digital screens.

There are two main types of color space. The linear color space and the gamma color space.

Historically, this comes from digital screen limitations: screens struggle to present color information in the bottom, dark part of the spectrum and get more responsive in the bright part. It means that we can’t see any significant difference between RGB 0 and RGB 16, but the brighter the screen goes, the more noticeable the difference becomes.
This is where the gamma color space comes in handy. The gamma color space shifts all values up so the image presented on any digital screen looks more natural and closer to what we expect. While in linear space the middle point, called neutral grey, is at 128, right in the middle of the spectrum between 0 and 255, in gamma space neutral grey is at 187 (to be accurate, it is really at 187.516030678, so a tiny bit closer to 188 than 187).
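That oddly specific 187.516030678 matches the official sRGB transfer function, which is piecewise (a 2.4 exponent with a small linear toe) rather than a plain 2.2 power curve; a quick Python check:

```python
def srgb_encode(linear):
    """Official sRGB transfer function (IEC 61966-2-1)."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def gamma22_encode(linear):
    """Plain power-law approximation with gamma 2.2."""
    return linear ** (1 / 2.2)

# Linear middle grey (0.5) expressed in 0-255 terms:
print(255 * srgb_encode(0.5))    # ~187.516, the value quoted above
print(255 * gamma22_encode(0.5)) # ~186.1, close but not identical
```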

Usually, unless we know that another color space will be used to process and display the color content, we should evaluate using the sRGB color space.

The color space is just a tool which makes images look natural when presented on a screen, and every single camera by default captures images in a gamma space. The most common gamma spaces are sRGB and Adobe RGB. Both are pretty similar, but the Adobe one covers more information in the greens while sRGB stays with the reds and blues.

With the gamma space color shift, we don’t fully utilise the available color range for data storage. This is why, for more physical values, we use linear space, which is spread evenly across the full dynamic range.

In PBR workflows, linear space is used for anything which isn’t color related. We use linear space to store height information, specular data, roughness, normal angles and thickness.
It’s good to know that the gamma of digital screens varies from 1.7 to 2.3, depending on the actual screen quality and age. Old, worn TV screens have a gamma around 1.7. Most decent computer screens have a gamma around 2.0-2.2, while top quality IPS screens have a gamma of 2.3.

We usually assume 2.2 as the standard gamma, and this is why I used this value to convert all my measurements from linear into 2.2 gamma space.

To turn each color channel measurement into 2.2 gamma space and limit it to the 0-255 RGB range, I used these equations:

RED = POWER(MeasuredRED/1023, 1/2.2)*255

GREEN = POWER(MeasuredGreen/1023, 1/2.2)*255

BLUE = POWER(MeasuredBlue/1023, 1/2.2)*255

Based on these values I calculated the corresponding linear-space values (as 0-1 floats):

Linear Red = POWER(RED/255, 2.2)

Linear Green =POWER(GREEN/255, 2.2)

Linear Blue =POWER(BLUE/255, 2.2)

With another equation I calculated the relative luminance in sRGB space:

The Relative Luminance = 0.2126*(R/255)+0.7152*(G/255)+0.0722*(B/255)

It is worth reminding that relative luminance isn’t just a grey-scaled color, as the human eye perceives the brightness of different colors differently. Bright green looks much brighter than bright red, while red looks brighter than an equally bright blue.
Img. 48. Different objects of different color after they were grey-scaled (desaturated)

Since we know the difference, we can take these values into consideration and calculate the overall luminance level for the human eye, and this is what I did.

Color affects our brightness perception

The linear luminance is calculated exactly the same way, but with the linear values used in the calculation.

Linear Luminance = 0.2126*(linearRED)+0.7152*(linearGREEN)+0.0722*(linearBLUE)

Since many applications also utilise a hexadecimal color format, I decided it might be pretty handy to have the color values also represented as sRGB HEX.
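Putting all of the equations above together, a minimal Python version of the whole conversion chain might look like this (the raw spectrometer reading is a made-up example, and I assume the same 2.2 gamma as the Excel formulas above):

```python
def measured_to_gamma(measured, gamma=2.2):
    """0-1023 linear spectrometer channel -> 0-255 gamma-encoded value."""
    return (measured / 1023) ** (1 / gamma) * 255

def gamma_to_linear(channel, gamma=2.2):
    """0-255 gamma-encoded channel -> 0-1 linear value."""
    return (channel / 255) ** gamma

def relative_luminance(r, g, b):
    """Relative luminance of a gamma-encoded 0-255 RGB color."""
    return 0.2126 * (r / 255) + 0.7152 * (g / 255) + 0.0722 * (b / 255)

def linear_luminance(lr, lg, lb):
    """Same weighting, applied to 0-1 linear channel values."""
    return 0.2126 * lr + 0.7152 * lg + 0.0722 * lb

def to_hex(r, g, b):
    """0-255 RGB -> sRGB HEX string."""
    return '#{:02X}{:02X}{:02X}'.format(round(r), round(g), round(b))

# Hypothetical spectrometer reading (0-1023 per channel):
raw = (612, 540, 480)
rgb = [measured_to_gamma(c) for c in raw]
lin = [gamma_to_linear(c) for c in rgb]
print(to_hex(*rgb), relative_luminance(*rgb), linear_luminance(*lin))
```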

Next, I put these equations into an Excel table and used them to present the values of different substances in a form useful for albedo referencing.

Current real-time rendering engines have limitations which put a lot of performance constraints on visual calculations. To take some heat off the rendering engines, we usually simplify results to a state where they look good enough, and if well made, this does the job pretty well. This is why we should always favour believability over absolute correctness, especially if it brings us closer to our quality target.

And this is where the PBR Color Reference List comes in handy. This list can be very useful as a guide to keep albedo consistency in place, but also to verify whether our calibration or color interpretation is correct and whether our materials stay within the PBR range; I have been using it all the time for years.

In practice, to make sure our materials are consistent and correct and don’t drift into crazy values which would get us into visual trouble, we can use a list of already measured values for different substance types, verify our materials by comparison, and tweak their albedos if needed.

PBR Color Reference List use in practice


There are two approaches to PBR Color Reference comparison we might apply: a simple visual side-by-side comparison, and a comparison of actual color values. For the first one, we compare the visual look of the captured sample with the values from the Reference List and make sure they more or less align. For value comparison, we compare the numbers with the actual pixel distribution presented on the histogram and the numbers measured with a color picker.

For color comparison I use these values from the PBR Color Reference List:

Which values exactly depends on the case and the application I use. Sometimes it’s enough to color-pick the value from the ‘Color Clip’, sometimes I just copy the color into the app as a HEX value, and sometimes I compare the RGB values.

Since, as we know, nothing in nature looks the same everywhere, any deviation of up to 10% above or
below the reference values is totally fine. Unless we want to bring the albedo to the referenced value
intentionally, we can keep it without any worries as long as it stays within a 20% band.
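The tolerance rule above is easy to automate. A small illustrative check in Python (the function name and the relative-tolerance interpretation of the 10%/20% bands are my assumptions, not the author's exact rule):

```python
def within_pbr_band(measured: float, reference: float, band: float = 0.10) -> bool:
    """True if a measured albedo value stays within +/- band of a reference.

    measured/reference are albedo values in the 0..1 range; band is a
    relative tolerance (0.10 = the 'always fine' 10% deviation, 0.20 = the
    outer band the text still accepts).
    """
    return abs(measured - reference) <= band * reference
```

For example, with a reference albedo of 0.40, a measured 0.42 passes the 10% check, while 0.50 fails it and would call for a tweak.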

If the surface we compare isn’t solid and uniform enough and we struggle to interpret its values and
look, we can temporarily blur it and use the blurred version for comparison.

Since color affects our perception of brightness, temporary desaturation can help if we struggle with
luminance comparison. It’s much easier to compare brightness levels without the actual color.
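To illustrate why desaturation works as a comparison aid: image editors typically weight the channels by perceived brightness (the Rec. 709 luma weights) rather than averaging them equally, so a green and a blue of identical channel value land on very different grays. A quick sketch of that weighting (this is the standard formula, not necessarily the exact one your editor uses):

```python
def luma_desaturate(r: float, g: float, b: float) -> float:
    """Perceptual grayscale value of a pixel using Rec. 709 luma weights.

    Green contributes far more to perceived brightness than blue, which is
    why equal RGB values can 'feel' very different in brightness.
    """
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
```

Pure green (0, 255, 0) desaturates to roughly 182, while pure blue (0, 0, 255) lands near 18, even though both have one channel at full value.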
The second approach is obviously more accurate and reliable, as it is based on numbers, but since in
99% of cases the color is already set during previous stages, I use it mostly for luminance validation
and tweaks. For this approach I analyse the histogram and overall pixel distribution, shifting its
peaks until the numbers match. The process details differ depending on the application used at this
stage, but essentially I tweak the final albedo with whatever adjustments bring it down to the
designated levels. These are usually LEVELS, CURVES, HUE and BRIGHTNESS.
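As a rough sketch of the "shift the peaks until the numbers match" step, a Levels-style tweak can be approximated as a simple gain that moves the mean of the albedo values onto the target. This is a crude stand-in for the LEVELS/CURVES adjustments mentioned above, not how any particular editor implements them; the clamp mimics the editor's 0..1 value range:

```python
def match_mean_luminance(pixels: list, target_mean: float) -> list:
    """Scale a list of 0..1 albedo/luminance samples so their mean lands
    on target_mean, clamping the result back into the valid range."""
    current = sum(pixels) / len(pixels)
    gain = target_mean / current
    return [min(1.0, max(0.0, p * gain)) for p in pixels]
```

In a real app you would do this interactively with the histogram preview, but the idea is the same: nudge the distribution until its numbers sit at the reference level.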

Img. 49. Affinity Photo used for Diffuse texture tweaking to bring it down to target levels

This is the stage where a good understanding of the histogram really comes in handy.
Img. 50. Levels layer used to reshift luminance peaks distribution with histogram live preview in Photoshop

As a target I take the reference luminance value from the ‘Relative Luminance’ column of the PBR
Color Reference List:
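For reference, relative luminance of an 8-bit sRGB color is normally computed by first linearizing each channel (undoing the sRGB gamma) and then applying the Rec. 709 weights. This is the standard sRGB math; it may or may not match exactly how the list's column was derived:

```python
def srgb_to_linear(c: float) -> float:
    """Undo the sRGB transfer curve for one channel value in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(r8: int, g8: int, b8: int) -> float:
    """Relative luminance (Y, 0..1) of an 8-bit sRGB color."""
    r, g, b = (srgb_to_linear(v / 255.0) for v in (r8, g8, b8))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
```

Note that middle gray in sRGB (128, 128, 128) comes out at roughly 0.22 in linear relative luminance, not 0.5, which is exactly the linear-vs-gamma distinction covered later in this document.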

And that’s basically it. With the material ready, I test it on a plane in a Marmoset Toolbag scene,
since Toolbag ships with a lot of properly made HDRI maps, but this can be done with any real-time
PBR rendering system.

We just need to make sure that the loaded HDRI map is the only source of light in the scene and simply
compare how our material looks against this background HDRI map.

If the material renders too bright, the albedo is usually too bright. If it renders too dark, the albedo
is too dark and needs some tweaking. If the material looks natural and fits well, it usually means we
did a good job, and it will look consistent with any other properly made materials.

Below is a simplified, illustrative version of the PBR Color Reference List I made and have been
referring to. The full PDF and editable versions are available as separate files to download.

I have also made a thumbnailed version of this list available as a video on my YouTube channel, so
everyone can access it easily in no time.
Say No to Piracy
Thanks for reading, and I hope you found this document useful. I am not a corporation or a business
but an artist, and I spent over 2 years on this R&D. I did it because I love what I do and I am happy
to help and share my knowledge with those who might need it. I invested a lot of effort, time and
money in creating this guide, and I would appreciate it if you (or anyone else) did not pirate or
torrent it, as that directly hurts me. If you somehow have an illegal copy, please buy a legal
version, as this is the only way to support what I am doing:

https://gbaran.gumroad.com/l/rjthlk

Thanks a lot for your support if you have already bought one, because it really helps me create and
share more content like this. Please also bear in mind that every single purchase directly helps me
create more content, which I share for free on my YouTube channel:

https://www.youtube.com/c/GrzegorzBaranArt

Sincerely,

Grzegorz Baran

You might also like