to Walter Tufró
Who taught me that you have to pursue your dreams in life.
2D Shader Development: Illumination &
Shadows
Copyright © 2018 Francisco Tufro
First of all, Don’t Panic. It’s normal to get stuck while learning
something new, and I’m here to help you. The first thing I’d
suggest is that you join us on our Hidden People Club Discord
server https://discord.gg/776BVVD if you haven’t already. I am
using it to have organized discussions about the book and its
content.
Series Overview
1. Foundations
In this book, we’ll dive into how we can modify and mix
existing textures to create amazing effects or animations inside
our shaders. This will give you the tools to implement things
that were unthinkable before! You’ll be able to create some
awesome animations, like the ones seen in After Effects, which
would otherwise be impossible, using several techniques
including sine waves, smoothsteps, color offsetting/chromatic
aberration and more.
4. Full-Screen Effects
All the source code for the exercises can be found on GitHub
under the MIT License (so you can actually use it in your project,
except for the assets).
If you are familiar with git, you can clone the repository as
usual. If you don’t know anything about git or don’t want to
install it, you can download a zip file containing all the files from
https://github.com/hiddenpeopleclub/2dshaders-book-
illumination-exercises/archive/master.zip.
Introduction to 2D Illumination
Why illumination?
These are just a few examples of the many uses and reasons
why you would want illumination in your game. It’s a good idea
to understand these techniques and make use of them when
the need arises.
Just think about it: if we have a certain color in our sprite, say
(0.5, 0, 0, 1) (half-red), and we add another color on top, say
(0.2, 0.5, 0.3, 1), the result will always be closer to white, in this
case (0.7, 0.5, 0.3, 1). That is, basically, illumination, because
we’re always increasing the lightness of a color when we add
another one on top.
Let’s analyze what happens numerically. If we add black
(0,0,0,1), we do nothing, since zero is the neutral element of the
add operation. If we add white (1,1,1,1) we end up hitting the
maximum possible color in all three channels, which makes
sense, since white is as bright as one can go.
Any color you add that is neither black nor white generates a
colored illumination effect on top of your sprite.
Let’s see some examples. Download the example assets for the
book series and look for the assets related to this chapter. Grab
AdditiveLights.png and load it in your Unity project.
Then, we’ll set up a material with an Additive shader and use
this texture as the _MainTex property (all these things were
covered in the Foundations book; if you don’t remember how to
do this, go to the last section in this chapter).
Particles
And the last case is when we multiply by one. You can imagine
what happens, right? We do nothing. The color remains as it was
before.
One interesting technique that is a consequence of these
properties of multiplication and their impact on colors is
creating grayscale images that act as shadow masks.
The solution to the Multiply part was adding the following line
after the SubShader command:
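The line itself isn’t reproduced in this excerpt, but the standard
multiplicative blend mode in ShaderLab is the following (treat its
exact placement as the book describes it, right after the
SubShader command):

// Multiply blending: the color already in the framebuffer is multiplied
// by the color output by this shader.
Blend DstColor Zero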
Exercise 1
In this exercise, you’re going to experiment a bit with lights
and shadows. Open up Exercise 1 - Lights and Shadows .
Conclusion
Hope you enjoyed it and prepare for the next chapters, where
we’ll analyze other amazing techniques!
Kinda Dynamic Local Ambient Lights
In this chapter, we’re going to create a simple but effective
system to simulate ambient lights in our characters and objects.
We’ll also see how to support more than one light,
interpolating between them when the character changes its
position.
Ambient
Let’s begin creating a new script in a folder called
Scripts/Lights . We’ll use the namespace Lights to avoid name
collisions.
The only two values we want for our ambient lights are
LightColor and ShadowColor . These will be used by the
TextureWithAmbientLight shader in the next section to colorize the
character.
using UnityEngine;
namespace Lights
{
public class Ambient : MonoBehaviour
{
public Color LightColor;
public Color ShadowColor;
}
}
Directional
using UnityEngine;

namespace Lights
{
    public class Directional : MonoBehaviour
    {
        [Range(0,360)]
        public float Angle;

        public Color LightColor;
        public Color ShadowColor;
    }
}
Point
The only value (besides the colors) that we want to expose for
Point lights is the Range . We use this to set the radius of the
CircleCollider used for this light.
using UnityEngine;

namespace Lights
{
    public class Point : MonoBehaviour
    {
        public Color LightColor;
        public Color ShadowColor;

        [Header("Range (In Units)")]
        [Range(0,100)]
        public float Range;
    }
}
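The book uses Range to drive the collider, as mentioned above. One
way to wire that up (an assumption about where it happens, not
necessarily the author’s exact code) is to sync the radius whenever
the value changes in the Inspector:

// Hypothetical addition inside the Lights.Point class above: keep the
// CircleCollider2D radius in sync with the Range value.
void OnValidate()
{
    CircleCollider2D circle = GetComponent<CircleCollider2D>();
    if (circle != null)
        circle.radius = Range;
}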
Now that we’re all set with the scripts, we’ll create three
prefabs. We want our Ambient and Directional lights to have a
BoxCollider2D each; you’ll define the area of influence by
modifying the collider size. The Point light should have a
CircleCollider2D, and, as seen before, its area of influence is
handled by the Range value. The three colliders should have Is
Trigger set to true because we don’t want them to physically
collide with our objects.
Once you have these GameObjects set, drag them to the
project panel to create prefabs for each one of them. Once we
have the code in place, you’ll be able to just drag and drop the
lights in your scene and they should work automatically.
Object coloring
Properties
{
_MainTex ( "Main Texture", 2D ) = "white" {}
_LightColor ( "Light Color", Color ) = (1,1,1,1)
_ShadowColor ( "Shadow Color", Color ) = (0,0,0,1)
}
sampler2D _MainTex;
fixed4 _LightColor;
fixed4 _ShadowColor;
The vertex shader is the default for the typical Texture shader
created in the Foundations book, but we need some changes in
the fragment shader.
SubShader
{
    Blend SrcAlpha OneMinusSrcAlpha

    Pass
    {
        Cull Off
        ZWrite Off

        CGPROGRAM
        #pragma vertex vert
        #pragma fragment frag

        struct appdata
        {
            float4 position : POSITION;
            float2 uv : TEXCOORD0;
        };

        struct v2f
        {
            float4 position : SV_POSITION;
            float2 uv : TEXCOORD0;
        };

        sampler2D _MainTex;
        float4 _LightColor;
        float4 _ShadowColor;
        float _Threshold;
        float _Smoothness;
        float _Angle;
This is the base of all the work we’ll do in this chapter, so let’s
see how we can make this more sophisticated.
Ambient Lights
Let’s now get that material during our object’s Start callback:
So, now that we have our material, we’ll change the sprite’s
color when we encounter a light. We do this by defining Unity’s
2D trigger callback OnTriggerEnter2D .
if(ambientLight != null)
{
LightColor = ambientLight.LightColor;
ShadowColor = ambientLight.ShadowColor;
}
}
And that should be it. Remember, for this to work, your sprite
needs to have a material with the TextureWithAmbientLight shader
applied to it.
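Putting those pieces together, a minimal version of the receiving
script could look like the sketch below. The class name ObjectColoring
and the exact member layout are assumptions on my part; the flow,
though, is the one described above: cache the material in Start,
capture the light’s colors in OnTriggerEnter2D, and push them to the
shader during Update.

using UnityEngine;

// Hypothetical receiver script; names are illustrative, the flow follows the chapter.
public class ObjectColoring : MonoBehaviour
{
    Material material;

    Color LightColor = Color.white;
    Color ShadowColor = Color.black;

    void Start()
    {
        // Grab the instance of the TextureWithAmbientLight material on this sprite.
        material = GetComponent<SpriteRenderer>().material;
    }

    void OnTriggerEnter2D(Collider2D collider)
    {
        Lights.Ambient ambientLight = collider.GetComponent<Lights.Ambient>();

        if (ambientLight != null)
        {
            LightColor = ambientLight.LightColor;
            ShadowColor = ambientLight.ShadowColor;
        }
    }

    void Update()
    {
        // Push the current colors to the shader every frame; later in the chapter
        // this is where the interpolation between lights happens.
        material.SetColor("_LightColor", LightColor);
        material.SetColor("_ShadowColor", ShadowColor);
    }
}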
If we place two lights and move the character between them,
the coloring should change. The colliders you see in the
following image are each light’s colliders.
Combining this with static illumination across the whole area
already puts you miles ahead of most games, which don’t do any
processing on their sprites’ illumination.
You may notice I’m updating the color every frame, and yes,
there is no need for that. If you were going to stop adding
features here, you could pretty much set the colors inside
OnTriggerEnter2D instead of storing them and then applying them
during Update . But we’re not stopping here, hell no! Just wait for
it…
Controlling the gradient shape with _Threshold and _Smoothness
There is a small tweak you can make to the shader that will
give you some control over the gradient. When designing the
gradient, you have two parameters you can change: where the
gradient starts and where it ends. We can define that using two
properties: _Threshold (where it starts) and _Smoothness (the
length of the transition).
float4 _LightColor;
float4 _ShadowColor;
float _Threshold;
float _Smoothness;
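As a rough sketch of how these two values typically enter the
fragment shader (the exact coordinate the book drives the gradient
with isn’t shown in this excerpt, so the i.uv.y below is a placeholder):

// Hypothetical fragment: blend from _ShadowColor to _LightColor along a gradient.
// _Threshold is where the transition starts, _Smoothness how long it takes.
fixed4 frag (v2f i) : SV_Target
{
    fixed4 color = tex2D(_MainTex, i.uv);
    float t = smoothstep(_Threshold, _Threshold + _Smoothness, i.uv.y);
    return color * lerp(_ShadowColor, _LightColor, t);
}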
If you pay attention you’ll see that some of the work is already
done for you. The angle gets passed in radians to the shader.
This is done by the following line in Update
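The line isn’t reproduced in this excerpt, but it is presumably
something along these lines, with Mathf.Deg2Rad converting the
Inspector-friendly degrees into the radians the shader expects:

// Assumed shape of the call made in Update: degrees in the Inspector,
// radians in the shader.
material.SetFloat("_Angle", Angle * Mathf.Deg2Rad);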
For the Point lights, we want to create a type of light that has a
given range (in our case we’ll use a circle collider to define it) and
a position. We’ll modify the smoothness and direction of the
gradient using the point light position as a reference.
With that, we have all the data we need from the game. Now
let’s see how we use it in the Update function.
spriteRenderer.material = material;
}
}
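The Update body for the point-light case isn’t reproduced in full
here, but the idea described above can be sketched roughly like this
(the property names match the shader; the exact mapping from
distance to smoothness is my assumption):

// Hypothetical Update: aim the gradient at the point light and soften it with distance.
void Update()
{
    if (pointLight == null)
        return;

    Vector2 toLight = pointLight.transform.position - transform.position;

    // The gradient's direction follows the light.
    material.SetFloat("_Angle", Mathf.Atan2(toLight.y, toLight.x));

    // The further away the light, the softer the gradient.
    float distance01 = Mathf.Clamp01(toLight.magnitude / pointLight.Range);
    material.SetFloat("_Smoothness", distance01);

    material.SetColor("_LightColor", pointLight.LightColor);
    material.SetColor("_ShadowColor", pointLight.ShadowColor);
}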
You could get more sophisticated with this, also moving the
threshold or things like that, but I think this is enough to
illustrate the principle of how these lights work.
Color previousLightColor;
Color previousShadowColor;
float lightTransitionTimer = 0;
if(lightTransitionTimer < 0)
lightTransitionTimer = 0;
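With those fields in place, the interpolation itself is only a few
lines in Update; a minimal sketch (the transition duration is an
assumed constant, and OnTriggerEnter2D would stash the old colors in
the previous* fields and reset the timer):

// Hypothetical interpolation: blend from the previous light's colors to the
// new ones while the timer counts down.
void Update()
{
    const float transitionDuration = 0.5f; // assumed value

    lightTransitionTimer -= Time.deltaTime;
    if (lightTransitionTimer < 0)
        lightTransitionTimer = 0;

    // t goes from 0 (transition just started) to 1 (transition finished).
    float t = 1.0f - (lightTransitionTimer / transitionDuration);

    material.SetColor("_LightColor", Color.Lerp(previousLightColor, LightColor, t));
    material.SetColor("_ShadowColor", Color.Lerp(previousShadowColor, ShadowColor, t));
}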
You can now duplicate a light, change its values and move it.
Then hit play and move the character around. You’ll see how the
color is interpolated when you move between the lights’
colliders.
Conclusion
Scene Setup
Since our goal is to render the alpha channel of some objects
in the scene (and not each and every one of them) we’ll need to
do a selective rendering of those objects. This can be achieved
by limiting what Layers the camera renders.
Let’s select our character and click on Layer, then Add Layer.
With this last step, you effectively prevent every object that is
not in the RimLights Layer from rendering on this camera. You
can see this by hitting play or checking the camera preview.
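If you prefer to do the same from code instead of the Inspector,
Unity exposes the camera’s culling mask; the layer name matches the
one created above:

// Only render objects that sit on the "RimLights" layer with this camera.
GetComponent<Camera>().cullingMask = LayerMask.GetMask("RimLights");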
But in Unity, we don’t have to deal with that, since it’s already
abstracted using the RenderTexture class. If you’re not using Unity
you may need to do some work to get this working. If you’re
working with OpenGL search for glFramebufferTexture and learn
how to use it.
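In Unity this boils down to a couple of lines; a minimal sketch
(the buffer size is a placeholder):

// Create an off-screen buffer and make the camera render into it
// instead of the screen.
Camera alphaCamera = GetComponent<Camera>();
alphaCamera.targetTexture = new RenderTexture(Screen.width, Screen.height, 0);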
You can review the effect of the script by hitting play now.
Click on the camera, look for Alpha in the inspector and double-
click it.
Now, we want to create a new shader that will only render the
alpha channel. I’d expect you already know how to do this; if
you’re lost on how to create such a shader, go back to 2D Shader
Development Book 1: Foundations and review the Texture shader.
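The core of such a shader is a one-line fragment function; a sketch
(whether the book writes the alpha into the RGB channels exactly like
this is an assumption, but the idea is the same):

// Output the sprite's alpha as a grayscale value.
fixed4 frag (v2f i) : SV_Target
{
    fixed alpha = tex2D(_MainTex, i.uv).a;
    return fixed4(alpha, alpha, alpha, 1);
}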
camera.SetReplacementShader(TextureWithAlphaOnly, null);
Calculating Normals
The first step then is to blur the image. For this, we’re going to
convolve the image with a Gaussian kernel. Refer to the
Convolution Appendix to help you understand this process if
you’re not familiar with convolution yet. Convolution is also
analyzed in depth in the Procedural Texture Manipulation book of
the series.
Let’s duplicate the Texture shader now and call it Blur . Remove
the code for the fragment shader.
We’ll do the second option since it’s the most common. For
this, we’ll use the Pass command from ShaderLab twice,
duplicating the shader code: once for the vertical blur and once
for the horizontal one. Take a look at the finished shader below
to see this.
return sum;
}
For the horizontal pass, you only need to move the offset to
the other axis and use _MainTex_TexelSize.x .
fixed4 frag (v2f i) : SV_Target
{
    #define sample_and_weight(weight,offset) tex2D( \
        _MainTex, \
        i.uv + float2(_MainTex_TexelSize.x * offset, 0) \
    ) * weight;
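    // (Sketch, not the book's exact code: the accumulation that builds sum is
    // not reproduced in this excerpt. With common 9-tap Gaussian weights it
    // could look like the lines below. Note that the macro above already ends
    // each expression with a semicolon, so every tap has to be its own statement.)
    fixed4 sum = sample_and_weight(0.227027,  0.0)
    sum += sample_and_weight(0.194594,  1.0)
    sum += sample_and_weight(0.194594, -1.0)
    sum += sample_and_weight(0.121621,  2.0)
    sum += sample_and_weight(0.121621, -2.0)
    sum += sample_and_weight(0.054054,  3.0)
    sum += sample_and_weight(0.054054, -3.0)
    sum += sample_and_weight(0.016216,  4.0)
    sum += sample_and_weight(0.016216, -4.0)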
    return sum;
}
We’ll now add some logic to apply the Blur shader to the
whole screen. The specifics of how to work with full-screen
effects are covered in the Full-Screen Effects book, but we’ll
discuss the bare minimum here so that you understand the
technique.
Since we’re not copying the source texture into the destination
texture, we broke the expected flow of the data and the
framebuffer ends up empty; thus we see a black screen.
Now let’s hit play again, and the image should be back.
You may now be asking yourself: what the heck does this have
to do with blurring? Good question. Blit has more than one set
of parameters that we can pass to it. One of those sets includes a
material, and this is where everything changes.
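With the material overload, the minimal full-screen pass looks like
this (a sketch; Blur is the material created from the Blur shader, as
declared in the script’s variables):

// Run the source image through the Blur material once and write the result
// into the destination buffer.
void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    Graphics.Blit(source, destination, Blur);
}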
RenderTexture tmp;
If you’re familiar with the Blur effect, you’ll notice that, when
applying it in an image editing software, you have a variable
called Blur Passes or something like that. This is the number of
times we apply the blur processing.
[Range(1,5)]
public int BlurPasses = 3;
RenderTexture tmp;
RenderTexture tmp2;
Graphics.Blit(tmp, destination);
}
You can see that we’re blitting the source image into tmp and
then applying several blur passes. Finally, we blit the blurred
image into the destination buffer.
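Spelled out, that flow could look roughly like the sketch below (this
assumes the temporary buffers are requested per frame; the book’s
version may manage them differently):

// Multi-pass blur: ping-pong between two temporary buffers, then blit the
// final result into the destination.
void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    tmp  = RenderTexture.GetTemporary(source.width, source.height, 0);
    tmp2 = RenderTexture.GetTemporary(source.width, source.height, 0);

    // Start from the unprocessed image.
    Graphics.Blit(source, tmp);

    for (int i = 0; i < BlurPasses; i++)
    {
        Graphics.Blit(tmp, tmp2, Blur); // blur tmp into tmp2

        // Swap the buffers so the next pass blurs the latest result.
        RenderTexture swap = tmp;
        tmp = tmp2;
        tmp2 = swap;
    }

    Graphics.Blit(tmp, destination);

    RenderTexture.ReleaseTemporary(tmp);
    RenderTexture.ReleaseTemporary(tmp2);
}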
Well… This is an indirect level up for you. Now you know how to
blur an image! Congrats.
If you’re not familiar with Normal Mapping and you still didn’t
read the appendix on the topic, it could be a good time to do it
since I explain normal mapping in depth in that appendix. I’ll just
say the bare minimum here.
float x0 = tex2D(_MainTex,
    i.uv - float2(_MainTex_TexelSize.x, 0)).r;
float x1 = tex2D(_MainTex,
    i.uv + float2(_MainTex_TexelSize.x, 0)).r;
float y0 = tex2D(_MainTex,
    i.uv - float2(0, _MainTex_TexelSize.y)).r;
float y1 = tex2D(_MainTex,
    i.uv + float2(0, _MainTex_TexelSize.y)).r;

float dx = x0 - x1;
float dy = y0 - y1;
Since this image is grayscale, r , g and b are the same for any
given texel. Because of that, it’s enough to sample just the r
channel in these lookups.
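From dx and dy, building the normal and packing it into the 0–1
range of a texture is the usual last step; a sketch (the constant used
for the z component is an assumption, the book may tune it differently):

// Build a normal from the gradients and pack it from [-1,1] into [0,1].
float3 normal = normalize(float3(dx, dy, 1.0));
return fixed4(normal * 0.5 + 0.5, 1.0);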
Now, let’s test this thing by running our Blit function through
it. Create a new material called ToNormal and assign the ToNormal
shader to it.
Exercise 3: Lighting
I did most of the work for you in terms of passing the data to
the shader. Check the setup for the Pass.Lights pass.
CurrentPass = Pass.Lights;
camera.SetReplacementShader(RimLights, null);
Shader.SetGlobalVector("_LightPosition", tmpVector);
Shader.SetGlobalVector("_LightColor", PointLight.LightColor);
Shader.SetGlobalTexture("_Normals", Normals);
camera.targetTexture = null;
For the intensity of the light you want to use the following
formula:
NdotL = saturate(dot(normalize(l), normalize(n)))
When you have that working, I’ll ask you to create a new
shader that will add the light to the rendered scene. This can be
done in several ways, I’ll just show one of them in the exercise
solution.
This exercise may take some time, since it has quite a few
concepts in it. Don’t feel bad about taking the time needed to
fully understand what’s going on, and be sure to drop a line on
Discord if you’re totally lost!
Conclusion
We’re also going to get rid of our custom lights and make use
of the existing Unity lights. This will make our sprites suitable for
use in a 3D context as well, and let them take advantage of
other Unity features.
Asset Requirement
Math derivation
Let’s choose one image to do our analysis, the one that has a
light coming from the top.
Now we also know we have a light coming from the top, so the
direction towards our light is simply the vertical axis. Plugging
that into the Lambert formula, the intensity I reduces to just the
(clamped) vertical component of the normal. But I is exactly
what we have stored in the texture lit from the top! Then, when
n_y is positive we can use the result of the top image, and when
n_y is negative we can use the result of the bottom image.
Because of this we can conclude that n_y is simply the
difference between the two images. The same derivation can be
done for the right and left images to recover the horizontal
component, and putting the pieces together gives us the full
normal.
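The formulas themselves aren’t reproduced in this excerpt, but
following the Lambertian appendix the argument runs roughly as
follows (a reconstruction in the book’s pseudo-notation, not the exact
formulas; the z component in particular is only sketched):

l = (0, 1, 0)                                (light coming straight from the top)
I_top = saturate(dot(l, n)) = saturate(n_y)

n_y = I_top - I_bottom                       (top image holds the positive part, bottom the negative part)
n_x = I_right - I_left                       (same reasoning for the horizontal lights)

n = normalize(float3(n_x, n_y, n_z))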
Properties
{
_MainTex ("Texture", 2D) = "white" {}
_Top ("Top Light", 2D) = "white" {}
_Right ("Right Light", 2D) = "white" {}
_Bottom ("Bottom Light", 2D) = "white" {}
_Left ("Left Light", 2D) = "white" {}
}
Then, we’ll add the required variables inside the Cg code.
sampler2D _MainTex;
sampler2D _Top;
sampler2D _Right;
sampler2D _Bottom;
sampler2D _Left;
You’ll have to name the images used to generate the normal
map by adding the direction after a dash at the end, for example
character-top , character-bottom , character-left and character-right .
Once you have such images in Unity, select the four of them,
and then open the menu called 2D Shaders > Create Normals .
Go ahead and open Exercise 4: Normal Map and start playing with
it. The code you have to write should go in Editor/CreateNormal.cs .
Be sure to ask on Discord if you feel lost.
Up until now, we’ve been using our own lights, but now that
we have a regular normal map texture we can make use of
Unity’s lighting system.
This is awesome: now you can use any type of light provided
by Unity to light your character or other objects in the scene.
I think this is the ultimate normal mapping technique and the
one that yields the best results. Even if it requires more work
from the artists, the amount of control over the effect and how
cool it looks outweighs its cost in my opinion.
Conclusion
There are other ways to generate normal maps, but I’ve found
that this is the most intuitive, since artists are already used to
thinking about light sources and their influence on their
illustrations. Thus, no artist should find it challenging to create
such images, and even though it’s a good amount of work, the
visual results are incredible.
Where to go now?
Congratulations! You now have a solid foundation that will let
you go much deeper into this field. Let’s figure out what you can
do next.
In this book, you’ll learn a few techniques that are used a lot in
computer graphics to manipulate our textures with code. You’ll
go from a simple sine wave movement to complex
combinations of textures animating other textures and crazy
stuff like that.
You’ll also learn about noise, you’ll use Perlin Noise to animate
sprites and create random-esque noise inside a shader.
Full-Screen Effects
The internet
Reach out to other developers who have done things you are
excited about and ask them how they did it. This could be a
major source of learning material!
Books
Last but not least, all the amazing developers that helped
review the book in its early stages: Jacob Salverda
(http://www.salvadorastudios.com), Mauricio J. Perez
(http://www.randomgames.com.ar).
Thank you so much to all of you, I’m eternally grateful for your
time investment in making this book better!
Credits
The amazing cover design and Hidden People Club logo were
created by German Sanchez from Bigfoot Gaming. I can’t be
more grateful for having you on board, man.
Lights
And now, we have lights. Hit play and see the character
getting illuminated when it moves behind the lights. The next
step is to make the light apply a color and an intensity. Let’s
create Color and Intensity properties in the shader.
Properties
{
_MainTex ("Texture", 2D) = "white" {}
_Color ("Color", Color) = (0,0,0,1)
_Intensity ("Intensity", Range(0,1)) = 0.5
}
Now, let’s add the properties for this in the Cg code.
fixed4 _Color;
float _Intensity;
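The fragment change isn’t reproduced here, but it presumably boils
down to scaling the sampled light texture by the new properties; a
sketch:

// Hypothetical fragment body: tint the light texture and scale it by intensity.
fixed4 color = tex2D(_MainTex, i.uv) * _Color * _Intensity;
return color;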
Now, in order to try this, we’ll have to set the colors in the
corresponding materials. The WhiteLight should have a white
color, and the GreenLight should have a green color. Go ahead and
set those.
When you do that, you’ll see that the green light turns green.
Yay!
Now, if you hit play, you’ll see the character being illuminated
by the lights. Awesome.
Go ahead, hit play and change the intensity factor to see how
the light is affected by it.
Shadows
Now it’s time to get the shadows working too. For the
shadows to work we need to use a multiply blending mode in
the Shadow shader.
As you can see, the shadows will now darken the character
when it is behind them, but they don’t look right.
The reason is that these images have a gradient that goes
from black at the edges to white in the middle. So the first step
is to make those images look as they’re supposed to: we’re
going to invert them.
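One way to do the inversion right in the fragment shader is a couple
of lines; a sketch (whether the book inverts the texture asset itself or
does it in the shader isn’t shown in this excerpt):

// Flip the grayscale gradient so the centre of the shadow is dark.
fixed4 color = tex2D(_MainTex, i.uv);
color.rgb = 1 - color.rgb;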
Now we want to set the color and intensity, so let’s add the
necessary shader properties again:
Properties
{
_MainTex ("Texture", 2D) = "white" {}
_Color ("Color", Color) = (0,0,0,1)
_Intensity ("Intensity", Range(0,1)) = 0.5
}
And let’s add the required variables in Cg.
fixed4 _Color;
float _Intensity;
While that works fine in the previous case, look what you get
when making BlueShadow use a blue color in the material.
Now, if you have the colors set correctly in both materials (blue
with blue, and black with black), you should see the shadows
correctly.
And that’s it, we have our shadows with coloring and intensity
ready.
Conclusion
I’d love to see the stuff you come up with, please share
screenshots of your game using these techniques in the Discord
channel.
Exercise 2 Solution
Now, we’re going to add some more dynamism to our lighting
system with Directional Lights. The idea for these lights is that
they include an angle that is passed to the shader so that the
gradient gets rotated.
if(directionalLight != null)
{
LightColor = directionalLight.LightColor;
ShadowColor = directionalLight.ShadowColor;
Angle = directionalLight.Angle;
}
}
That’s it for the script. Now create the required material and a
shader called TextureWithDirectionalLight .
2D Vector Rotation
struct v2f
{
float4 position : SV_POSITION;
float2 uv : TEXCOORD0;
float2 direction : TEXCOORD1;
};
float2x2 rot = {
cos(_Angle), -sin(_Angle),
sin(_Angle), cos(_Angle)
};
o.direction = mul(rot, v.uv);
return o;
}
That’s it, now we can set an angle and change the direction in
which the light hits the character.
Exercise 3 Solution
We’re now in the last step of this technique, the actual
lighting. We’ll use the Lambertian diffuse light reflection model
in this example. If you’re not familiar with this (which I assume
most readers won’t be!), I added the appendix Lambertian
Diffuse Shading, which discusses how diffuse lighting is usually
modeled in games, along with several references to read if you
want to learn more about the topic.
First of all, we’ll need to pass three variables to the shader, the
Normal Map, and the attributes for the light: its color and
position.
Properties
{
_MainTex ("Texture", 2D) = "white" {}
_Normals ("Texture", 2D) = "white" {}
_LightColor("LightColor", Color) = (0,0,0,0)
_LightPosition("LightPosition", Vector) = (0,0,0,0)
}
sampler2D _MainTex;
float4 _MainTex_TexelSize;
sampler2D _Normals;
float3 _LightPosition;
float4 _LightColor;
For the lighting calculations, we’re going to need to pass the
world position of the vertex to the fragment shader and
interpolate it, so we’re going to add that value to v2f . If you don’t
do this and use position instead, you’ll be using the clip-space
representation of the vertex, and your distance calculations
won’t be correct. Refer to the Coordinate Systems appendix to
see why.
struct v2f
{
float2 uv : TEXCOORD0;
float4 position : SV_POSITION;
float3 worldPos: TEXCOORD1;
};
We’ll use the TEXCOORD1 semantic for this. Don’t overthink it: we
just need to give the value a semantic so that it can be passed to
the fragment shader, and since we’re not already using TEXCOORD1
(a semantic normally used to pass a second set of UVs), we’ll use
that.
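In the vertex shader, that world position comes from transforming
the object-space vertex with the object-to-world matrix Unity
provides; the relevant line is essentially:

// Pass the world-space position of the vertex to the fragment stage.
o.worldPos = mul(unity_ObjectToWorld, v.position).xyz;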
The frag method is where the magic happens, let’s take a look.
This is it for the shader. The result is a buffer that is black
where no shapes are present and has colored lights at the
shapes’ borders. We’re going to add this image to the rendered
screen next, so that we effectively have amazing lights, but first
we need to make some changes to our C# script.
You also want to be sure that you have all the necessary
variables that we’ll use in this script:
[Header("Normal Generation")]
public Shader TextureWithAlphaOnly;
public Material Blur;
[Range(1,5)]
public int BlurPasses = 3;
public Material ToNormal;
[Header("Rim Lights")]
public Shader RimLights;
public Lights.Point PointLight;
[Header("Blending")]
public Material Additive;
new Camera camera;
public RenderTexture Normals;
public RenderTexture Lights;
RenderTexture tmp;
RenderTexture tmp2;
Vector4 tmpVector = Vector4.zero;
Pass CurrentPass;
Again, yes, this could be done in fewer steps, but for the sake
of clarity I decided to keep each step explicit. Go ahead and try
to optimize this if you’d like. Jump into the forums to show the
results of your experiments; I’d love to see what you come up
with!
Well, that was quite a journey. If you hit play now and get your
character close to a light, you’ll see how the rim lights get
created.
If you try running the script right from the start you’ll get an
error.
using UnityEditor;
This will create a menu in the Unity Editor that will call the
CreateNormals method.
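The attribute that creates the menu entry isn’t shown in this
excerpt, but in Unity it’s the MenuItem attribute from UnityEditor;
based on the menu path mentioned earlier it presumably looks like
this:

// Adds a "2D Shaders > Create Normals" entry to the editor's menu bar.
[MenuItem("2D Shaders/Create Normals")]
static void CreateNormals()
{
    // ... grab the four selected textures and build the normal map ...
}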
return;
}
For each (x,y) pair, we grab the corresponding pixel from each
of the four textures and then create the normal vector as we did
in the shader.
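A sketch of that loop, assuming the four source Texture2D objects
(top, bottom, left, right) are readable and a result texture of the
same size has been created (all of these names are placeholders):

// For every texel: rebuild the normal from the four lit images and pack it
// into the 0-1 range, exactly as the ToNormal shader did.
for (int y = 0; y < result.height; y++)
{
    for (int x = 0; x < result.width; x++)
    {
        float nx = right.GetPixel(x, y).r - left.GetPixel(x, y).r;
        float ny = top.GetPixel(x, y).r - bottom.GetPixel(x, y).r;

        Vector3 normal = new Vector3(nx, ny, 1.0f).normalized;

        result.SetPixel(x, y, new Color(
            normal.x * 0.5f + 0.5f,
            normal.y * 0.5f + 0.5f,
            normal.z * 0.5f + 0.5f,
            1.0f));
    }
}
result.Apply();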
I’m not going to say much about this: I’m just calculating the
path where the images were selected (using the top image’s
path) and then saving the normal map to that path.
The naming of the normals file assumes that you only use
dashes for the direction suffix; if you use dashes elsewhere in
the file name it will break (name files “my_great_asset-right.png”
instead of “my-great-asset-right.png”).
Rotation matrix in Cg
float2x2 rot = {
cos(_Angle), -sin(_Angle),
sin(_Angle), cos(_Angle)
};
Appendix II: Normal Mapping
In this appendix, we’ll discuss what a normal vector is in the
context of computer graphics and why we need them.
But we don’t navigate our games using the game world itself;
we use cameras. In the context of computer graphics, it’s not
that easy to render scenes by moving a camera around the
world. Instead, we move the whole world so that the camera sits
at (0,0,0) . We achieve this by moving the vertices to another
space called View Space.
If my camera sits at the world’s (0, 0, -10) , then the top left
vertex of the quad becomes (-1, 2, 10) in View Space.
o.position = UnityObjectToClipPos(v.position);
You can search the web to find how these matrices are
constructed or look into the book’s website “Where to go now”
section to find out more about this topic.
Appendix IV: Lambertian Diffuse Shading
There is a lot of information around about this topic; it’s just a
matter of doing a Google search. So I’ll just explain the intuition
without all the formal math. You can find a detailed explanation
of this topic here: https://www.scratchapixel.com/lessons/3d-
basic-rendering/introduction-to-shading/diffuse-lambertian-
shading.
If you analyze this numerically, you’ll notice that when the light
and the normal are perpendicular, cos(90) = 0 . This means that
when the light and the normal are perpendicular (and thus the
light and the surface are parallel) the surface does not reflect
any light. Which makes sense, right?
The other extreme is when the light is perpendicular to the
surface, pointing straight to it. In this case, the angle between
the normal and the light is 0 , and cos(0) = 1 . This means that
when the light is pointing at the surface, all beams are fully
reflected.
Any angle between 0 and 90 degrees will give values in between.
We can normalize the vector that points to our light and our
normal vector so that they both have norm 1 . Then, what we end
up with when calculating the dot product of the two is exactly
the cosine of the angle between them.
Then, by mixing both formulas, we get that
I = saturate(dot(normalize(l), normalize(n)))
You may not immediately see what the heck this means in
terms of what we’re using it for, but bear with me.
You can now see that we can use the pixel values of an image
as part of the convolution equation. Then we use a convolution
kernel, defined as a matrix of values, that also encodes the
results of a function at given points.
There are a lot of effects you can achieve with this, from edge
detection to low- and high-pass filters and more.