
Dedicated to Walter Tufró,
who taught me that you have to pursue your dreams in life.
2D Shader Development: Illumination &
Shadows
Copyright © 2018 Francisco Tufro

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the copyright owner.

Published by Hidden People Club


Table of Contents
Introduction to the series
Motivation for the series
I use Unity, why should I bother learning shader
programming at all?
Who are these books for?
I need help! What can I do?
Series Overview
1. Foundations
2. Illumination & Shadows
3. Procedural Texture Manipulation
4. Full-Screen Effects
Downloading the source code for the exercises
Introduction to 2D Illumination
Why illumination?
Static Illumination
Using Additive blend to light things up
Particles
Refreshing how to implement Additive
Using Multiply blend to create shadows
Refreshing how to implement Multiply
Limits of static illumination
Exercise 1
Conclusion
Kinda Dynamic Local Ambient Lights
Creating our own 2D Light Sources
Ambient
Directional
Point
Creating the Lights prefabs
Object coloring
A note on Culling
Full code for the shader
Ambient Lights
Using the TextureWithAmbientLight shader in our
character
Controlling the gradient shape with _Threshold and
_Smoothness
Exercise 2: Directional Lights
Point Lights
Using the TextureWithPointLight shader in our character
Adding Point light logic
Interpolating between lights
Conclusion
ScreenSpace Automatic Rim Lights
Rendering Alpha Only
Scene Setup
Rendering to a RenderTexture
Rendering only the Alpha channel
Calculating Normals
Blurring the image
Generating the Normal map
Exercise 3: Lighting
Conclusion
Dynamic Lights using Normal Mapping
Asset Requirement
Math derivation
Generating the Normal Map in a Shader
Exercise 4: Normal Map Generation
Using Unity’s native lights
Conclusion
Where to go now?
Continue with the other books in the series
Procedural Texture Manipulation
Full-Screen Effects
The internet
Books
Acknowledgements
Credits
Acknowledgements
Exercise 1 Solution
Lights
Shadows
Conclusion
Exercise 2 Solution
Using the TextureWithDirectionalLight shader in our
character
Adding an angle to the shader
2D Vector Rotation
Exercise 3 Solution
The RimLights Shader
Final touches to ScreenSpaceRimLights.cs
Exercise 4 Solution
Appendix I: Vector Rotation
Rotating with the origin as the pivot
Rotating with an arbitrary pivot
Rotation matrix in Cg
Appendix II: Normal Mapping
Appendix III: Coordinate Systems
Transforming a vertex between spaces
Appendix IV: Lambertian Diffuse Shading
Appendix V: Convolution
Introduction to the series

Motivation for the series

During several years of developing my own games as a solo dev, then with Nastycloud, and now with Hidden People Club, I found there were little-to-no sources of organized information about how to use the power of shader programming specifically in the context of 2D games.

Every single shader course or book out there talks about 3D lighting, 3D texturing, shadow and light mapping, and so on, but none of them provides a good section on 2D. I get it, though; 2D is kind of a subset of 3D when we talk about computer graphics.

Also, computer graphics books are generally targeted at engine creators, who usually work in 3D. From giving workshops on this topic in Argentina and the United States, I found that there are a lot of people who are not ready for the 3D math behind computer graphics but can still benefit from learning a leaner version of the topic, specifically designed for 2D development.

The techniques I describe in this book are the product of my own experience with the topic, taken straight from the trenches. So I thought it would be a good idea to sit down, organize all the information I've been collecting and figuring out during the last four years, and share it with you all.

As I mentioned, the content in this book series has already been taught in several workshops between 2014 and 2017. I've updated, expanded, sorted, and enhanced it during these years, and I plan to continue doing so for the foreseeable future.

I use Unity, why should I bother learning shader programming at all?

The video game industry is reaching a point where you need something new to stand out. We can't just develop whatever, launch it, and expect it to make money for us. Unless you're doing a "Whatever" Simulator; those seem to work for some reason. But for the rest of us trying to stand out in a crowded space, we need to create games that play and look unique, at least to some degree. Shader programming is one of the most important areas of game development: it binds visual art and technology, and it makes both worlds make sense to each other. Visual artists may have great ideas in mind and the means to create fantastic-looking worlds, but none of them will run at 60fps unless a programmer who understands shaders is in the mix. Not only that: if you know how to program shaders, you can help the visual arts team decide which things make sense and which don't. There are many things that are far easier to achieve with a simple shader than by having animators do them. By combining several techniques you can achieve great results without much processing or memory cost, which is key to high performance in games.

Using Unity is great, but if you limit yourself to the stock shaders (the ones that come with Unity), you'll be missing a huge opportunity to make your games look unique and perform at a good framerate.

As an example, in “Nubarron: the adventure of an unlucky gnome” we used Spine as the main software for skeletal animations. But the Spine runtime's needs can be a nightmare for processors and memory: we couldn't have more than 20-30 animating objects on-screen at the same time, and we usually surpass that by quite a lot, especially in the background foliage layers where every single asset is moving. So, instead of animating the foliage using Spine, or a sprite sheet (which would also consume too much memory), we created a generic shader that moves a 'wave' along the asset from one side to the other, as if responding to wind changes. That was a great choice. It works really well visually and does not consume any CPU time.

I created this book using Unity 2017.3.0f3, which may not be the latest version when you read it, so some things may have changed slightly! Please get in touch if something breaks and I'll attempt to upload a fix to the code on GitHub or to the series website.

Who are these books for?

These books are mainly for pragmatic programmers who want information that is a little more digestible than regular textbooks. They may also be beneficial for artists and producers, as they can give them an introduction to what's feasible and what kinds of techniques can be used to achieve the effect they want.

If you have never done any computer graphics programming, this book series can be a great way to dive into the topic. I'll ignore most of the linear algebra needed to understand 3D transformations and the like, as there are several resources that already cover it. I also get nervous when I see computer graphics books starting with one or more 'Linear Algebra recap' chapters. If you lean more towards the theory, I'm afraid this book may not be for you. While there is some theory in the books, I try to keep it to the minimum needed to make the practical discussion understandable.

I need help! What can I do?

First of all, Don’t Panic. It’s normal to get stuck while learning something new, and I’m here to help you. The first thing I’d suggest is joining us on the Hidden People Club Discord server at https://discord.gg/776BVVD if you haven’t already. I use it to have organized discussions about the book and its content.

Once you’re in, be sure to use the right channel to send your questions ( #2dshader-development ). I’ll be monitoring the channel to help you on your path to learning these materials.

I also encourage you to share everything you create on the server; I’m always delighted to see the creations made with my teachings as a starting point.

Be sure to also follow me on @franciscotufro and ping me if you need anything; my DMs are open.

Series Overview

I decided to cover several topics that I think are of special interest when starting to work on 2D. These topics were useful to me when working on games in the past, and I consider them part of my everyday developer toolkit.

1. Foundations

In this book you’ll get an introduction to shaders, explaining what the GPU is and what role shaders play in it.

After understanding what a shader is, we’ll dive into how to apply and use shaders in Unity. We’ll also learn the general structure of a ShaderLab program, Unity’s own language for shader creation.

Then, we’ll dive into fragment shaders and study the difference between a fragment shader and a vertex shader. We’ll talk about colors, RGB color representation, and UV mapping, and we’ll write a few basic shaders, from a simple solid-color shader to a textured shader with movement.

Finally, we’ll discuss blending modes and how we can rely on them to mix two textures, mix a texture with the screen, and make sprites transparent.

2. Illumination & Shadows

In the “Illumination and Shadows” book, we’ll focus on different techniques to give life to our games through the use of illumination. We’ll cover the most basic and widespread techniques for static lights and shadows, which give us an easy and cheap way to create an environment that integrates with our characters.

We’ll also cover dynamic 2D lighting. With the aid of specifically-crafted normal maps, we can rely on existing 3D lights to create interactive light sources that give a really amazing look to our games.

3. Procedural Texture Manipulation

In this book, we’ll dive into how we can modify and mix existing textures to create amazing effects or animations inside our shaders. This will give you the tools to implement things that were unthinkable before! You’ll be able to recreate some awesome animations usually seen in After Effects, which would otherwise be impossible, using several techniques including sine waves, smoothsteps, color offsetting/chromatic aberration, and more.

4. Full-Screen Effects

In this book, I’ll introduce you to a widely used technique where you apply a shader to the rendered screen. In this shader, you can use all the techniques from the other books to achieve amazing-looking full-screen effects.

We’ll place special emphasis on implementing a Bloom effect from scratch, Camera Shake, retro-looking effects, and more.

Downloading the source code for the exercises

All the source code for the exercises can be found on GitHub under the MIT License (so you can actually use it in your projects, except for the assets).

This book’s repository is https://github.com/hiddenpeopleclub/2dshaders-book-illumination-exercises

If you are familiar with git, you can clone the repository as usual. If you don’t know anything about git or don’t want to install it, you can download a zip file containing all the files from https://github.com/hiddenpeopleclub/2dshaders-book-illumination-exercises/archive/master.zip.
Introduction to 2D Illumination

Why illumination?

I don’t believe there is much need to make a case for why you would want illumination in your game. That said, I wanted to spend some time discussing the benefits of these techniques.

First of all, artists use lighting to provide depth to their drawings. It makes sense that, if artists do this with their static drawings, we should achieve the same thing in an interactive environment. The techniques in this book will give you some tools to help artists in this task.

From a game design perspective, illumination is a tool that can be used for gameplay (think fog of war, for example) or to cue players about important events or areas within the game.

A third benefit, and the pillar of this book series, is that good illumination can make your game stand out.

These are just a few examples of the many reasons why you would want illumination in your game. It’s a good idea to understand these techniques and make use of them when the need arises.

I hope you have a great time learning to illuminate 2D scenes! Let’s go!
Static Illumination
The discussion about Blending Modes in the Foundations book left open an interesting topic: the use of the Additive and Multiply blending modes to create static illumination and shadows, respectively. In this chapter, we’re going to analyze how this works, the sorts of effects that can be achieved, and its limits. The big benefit is that these techniques are cheap in terms of programming time and CPU consumption (except when you mix them with particles, but that’s another topic!). The downside is that, if used excessively, they’ll consume a decent amount of memory and, maybe, artist time. That said, static illumination is the first step towards creating great-looking environments.

If you have some 3D background already, this technique is analogous to light and shadow mapping. The difference is that the light and shadow maps are going to be created by hand by an artist and applied to all the geometry behind the map.

Using Additive blend to light things up

An interesting consequence of the math behind blending modes is the ability to ”illuminate” an area using Additive blend.

Just think about it: if we have a certain color in our sprite, say (0.5, 0, 0, 1) (half-red), and we add another color on top, say (0.2, 0.5, 0.3, 1), the result will always be closer to white, in this case (0.7, 0.5, 0.3, 1). That is, basically, illuminating it, because we’re always increasing the lightness of a color when we add another one on top.

Let’s analyze what happens numerically. If we add black (0,0,0,1), we do nothing, since zero is the neutral element of the add operation. If we add white (1,1,1,1), we end up hitting the maximum possible color in all three channels, which makes sense, since white is as bright as one can go.

All the values you can get by adding colors that are neither black nor white generate a colored illumination effect on top of your sprite.

Let’s see some examples. Download the example assets for the book series and look for the assets related to this chapter. Grab AdditiveLights.png and load it into your Unity project. Then, set up a material with an Additive shader and use this texture as the _MainTex property (all these things were covered in the Foundations book; if you don’t remember how to do this, go to the last section in this chapter).

You should end up with something looking like this:

Let’s now pay attention to the assets below those lights. When we move them, we notice that they get colorized, as if they were below a spotlight or some other kind of light. This is pretty powerful for the simplicity of the technique. You can repurpose a lot of assets just by changing the lighting in this way: just by adding a subtle red to a scene you can make it look more daunting, or you can tip players off about something happening close to where they’re standing.

Oh, another thing: this could also be animated, from the shader or outside it, to create more interesting effects. You could simulate a flickering candle light with two lines of shader code (you can learn how to do this in the Procedural Texture Manipulation book).

Particles

This technique is usually used (and, in my opinion, lazily abused) to its limits in combination with particle systems. You know what I’m talking about: all those flashy energy blast effects you see in RPGs, MOBAs, and other action games are achieved using this technique.

In Unity, it’s just a matter of changing the Default-Particle material (which uses a shader called Particles/Alpha Blended Premultiply) to some material that uses an additive blending mode (Unity bundles at least two: Particles/Additive and Particles/Additive (Soft)). This is a great tool for creating stunning effects, especially when they are magic or energy related, sparkles, etc. You can use it for more subtle effects too, like a firefly swarm in the background of a forest. Your imagination is the limit, as they say.

Refreshing how to implement Additive

In order to achieve the Additive blending mode, we had to add the following line after the SubShader command:

Blend One One

And now you can start statically illuminating the scene.
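If you need a refresher, here is a minimal sketch of what such an additive texture shader could look like, assuming the same structure as the Texture shader from the Foundations book (the shader name here is my own):

Shader "2D Shaders/TextureAdditive"
{
    Properties
    {
        _MainTex ( "Main Texture", 2D ) = "white" {}
    }

    SubShader
    {
        // Additive: FinalColor = SrcColor * One + DstColor * One
        Blend One One
        Pass
        {
            Cull Off
            ZWrite Off

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            struct appdata
            {
                float4 position : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float4 position : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            sampler2D _MainTex;

            v2f vert (appdata v)
            {
                v2f o;
                o.position = UnityObjectToClipPos(v.position);
                o.uv = v.uv;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // Whatever this returns gets added on top of what is already on screen.
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }
    }
}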


Using Multiply blend to create shadows

The counterpart to illuminating with Additive is creating shadows using the Multiply blending mode. Similarly to what we did with lights, we can analyze what happens numerically when we multiply two colors.

Remember that each color channel goes from 0 to 1. If the color we’re multiplying by has 0 in a channel, the multiplication result in that channel will be zero. So, we’re removing the color in that channel.

If we multiply by something greater than 0 but less than 1, we’ll reduce the amount of color we had previously. Imagine we have a full red (1,0,0,1) and we multiply it by a half-gray (0.5, 0.5, 0.5, 1); we end up with a half-red (0.5, 0, 0, 1), which is closer to black, so we have effectively darkened the image.

And the last case is when we multiply by one. You can imagine what happens, right? We do nothing; the color remains as it was before.

One interesting technique that follows from these properties of multiplication is creating grayscale images that act as shadow masks.

How do we do this? Simple: you start by creating a white image. We know that white won’t modify the color when multiplying, so this is effectively a neutral mask (it does nothing). We can then paint the shadows using grayscale colors, pretty much as we did with the lights. The closer to black we paint, the darker the shadow will be. You can also tint the shadows if you use colors instead of grays. Take a look at the following image:

If we apply this to a scene, we can darken the character and its surroundings.
Refreshing how to implement Multiply

In the Foundations book’s Exercise 2 I asked you to implement Multiply, along with other blending modes.

The solution to the Multiply part was adding the following line after the SubShader command:

Blend DstColor Zero

This comes from the definition of the Blend command, where using:

Blend SrcFactor DstFactor

represents the following formula:

FinalColor = SrcColor * SrcFactor + DstColor * DstFactor

If you want the FinalColor to be SrcColor * DstColor, you’ll have to make SrcFactor equal to DstColor and DstFactor equal to Zero. (You could do it the other way around too.) That’s it. You can now add static shadows too!
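As a quick sanity check of that substitution (my own arithmetic, using the half-gray example from above): with Blend DstColor Zero, the formula becomes FinalColor = SrcColor * DstColor + DstColor * Zero = SrcColor * DstColor, so (1, 0, 0, 1) * (0.5, 0.5, 0.5, 1) = (0.5, 0, 0, 1), exactly the darkened red we computed by hand.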

Limits of static illumination

You can go miles with these two simple techniques. You should be able to create different kinds of atmospheres by combining static lights and shadows. This can be used to tease the player about where important things are, create some static light occlusion behind or below objects, and other things like that. But the main limitation I see is that the lights and shadows can’t be aware of the shape or internal textures of the sprites they’re influencing. You can’t, for example, make the light create highlights on a player’s belt. The illumination and shadow layers behave in a uniform way, applied equally to whatever asset is behind them.

We’ll have to do something more complex in order to achieve this, and that’s what we’ll do in the following chapters.

But first I want to say something important: even though we’re discussing Additive and Multiply in the “Static Illumination” chapter, these blending modes are the basis of all the illumination and shadowing we’re going to see in the following chapters. So, whenever we want to illuminate something, we’ll add a color to it, and whenever we want to cast a shadow, we’ll multiply.

Exercise 1
In this exercise, you’re going to experiment a bit with lights and shadows. Open up Exercise 1 - Lights and Shadows.

You’ll find 4 objects that are supposed to be used as lights or shadows: WhiteLight, GreenLight, BlueShadow, and BlackShadow.

You’ll need to make those objects behave as expected: lights should illuminate the sprite and shadows should darken it. But all of this needs to be done using the specified colors. Additionally, add an “Intensity” slider as an easy way to control the effect.

Conclusion

Back in 2008, when I first saw these techniques, they blew my mind. The amount of variety I could get by just adding or multiplying colors was really amazing.

The important thing about these techniques is that they can be applied to whatever is happening on screen, and in that way create a more dynamic-looking environment that gets influenced by light and shadow.

I hope you enjoyed it. Prepare for the next chapters, where we’ll analyze other amazing techniques!
Kinda Dynamic Local Ambient Lights
In this chapter, we’re going to create a simple but effective system to simulate ambient lights on our characters and objects.

We’ll create three different types of lights: Ambient, Directional, and Point. Each of them will have its own settings and behavior.

We’ll also see how to use more than one light, interpolating between them when the character changes position.

These techniques are based on an article by Oliver Franzke that you can find on Gamasutra. Go ahead and read it for a gentle introduction to what we’ll be doing in this chapter and the next one.

Creating our own 2D Light Sources

Before getting into illumination itself, we want a few GameObjects that will act as our own 2D light sources. For all the light types, we want to define a light color and a shadow color that will be multiplied on top of our character. The Directional light will have a rotation angle that we’ll use to define the light’s direction, and our Point light will have a Range that we’ll use to fake the character moving close to a point light. Each GameObject that has a light should also have a Collider, so that we can define the area of influence of that light.

Ambient
Let’s begin by creating a new script in a folder called Scripts/Lights. We’ll use the namespace Lights to avoid name collisions.

The only two values we want for our ambient lights are LightColor and ShadowColor. These will be used by the TextureWithAmbientLight shader in the next section to colorize the character.

using UnityEngine;
namespace Lights
{
public class Ambient : MonoBehaviour
{
public Color LightColor;
public Color ShadowColor;
}
}

Directional

The only difference between the Directional light and the Ambient light is that the light comes in at an angle, so we’ll add an Angle field to the Directional script.

Note that I’m adding a Range directive so that the angle goes from zero to 360 degrees. You could also use a Vector2 here and calculate the rotation yourself; if you think a direction is clearer than an angle, that’s up to you.

using UnityEngine;
namespace Lights
{
public class Directional : MonoBehaviour
{
[Range(0,360)]
public float Angle;
public Color LightColor;
public Color ShadowColor;
}
}

Point

The only value (besides the colors) that we want to expose for Point lights is the Range. We use it to set the radius of the CircleCollider2D used for this light.

using UnityEngine;
namespace Lights
{
public class Point : MonoBehaviour
{
public Color LightColor;
public Color ShadowColor;
[Header("Range (In Units)")]
[Range(0,100)]
public float Range;

private void Start()
{
CircleCollider2D c =
GetComponent<CircleCollider2D>();
c.radius = Range;
}
}
}
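By the way, since Start assumes the CircleCollider2D is already attached, you could (this is my own addition, not something the exercises require) let Unity enforce that with the RequireComponent attribute:

using UnityEngine;

namespace Lights
{
    // RequireComponent makes Unity add a CircleCollider2D automatically when this
    // script is attached, so GetComponent in Start never returns null.
    [RequireComponent(typeof(CircleCollider2D))]
    public class Point : MonoBehaviour
    {
        public Color LightColor;
        public Color ShadowColor;
        [Header("Range (In Units)")]
        [Range(0,100)]
        public float Range;

        private void Start()
        {
            CircleCollider2D c = GetComponent<CircleCollider2D>();
            c.radius = Range;
        }
    }
}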

Creating the Lights prefabs

Now that we’re all set with the scripts, we’ll create three prefabs. We want our Ambient and Directional lights to have a BoxCollider2D each; you’ll define the area of influence by modifying the collider size. The Point light should have a CircleCollider2D; as seen before, its area of influence is handled by the Range value. All three colliders should have Is Trigger set to true, because we don’t want them to physically collide with our objects.

Once you have these GameObjects set up, drag them to the Project panel to create a prefab for each one of them. Once we have the code in place, you’ll be able to just drag and drop the lights into your scene and they should work automatically.
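Prefabs are the most convenient route, but just to make the setup concrete, here is a small illustrative snippet (my own, not part of the exercise files) that builds an equivalent ambient light entirely from code:

using UnityEngine;

public class AmbientLightSpawner : MonoBehaviour
{
    private void Start()
    {
        // Create the light GameObject and attach the Lights.Ambient data script.
        GameObject go = new GameObject("AmbientLight");
        Lights.Ambient light = go.AddComponent<Lights.Ambient>();
        light.LightColor = new Color(1f, 0.9f, 0.7f);    // warm light
        light.ShadowColor = new Color(0.1f, 0.1f, 0.2f); // bluish shadow

        // The trigger collider defines the light's area of influence.
        BoxCollider2D box = go.AddComponent<BoxCollider2D>();
        box.isTrigger = true;
        box.size = new Vector2(5f, 5f);
    }
}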

Object coloring

Let’s create a new shader and call it 2D Shaders/TextureWithAmbientLight:

Shader "2D Shaders/TextureWithAmbientLight"

We’ll add two properties to it, _LightColor and _ShadowColor, with defaults of white and black respectively.

Properties
{
_MainTex ( "Main Texture", 2D ) = "white" {}
_LightColor ( "Light Color", Color ) = (1,1,1,1)
_ShadowColor ( "Shadow Color", Color ) = (0,0,0,1)
}

The Blending mode for the shader should be Alpha Blend.

Blend SrcAlpha OneMinusSrcAlpha

And don’t forget to declare _LightColor and _ShadowColor as fixed4 inside the CGPROGRAM block.

sampler2D _MainTex;
fixed4 _LightColor;
fixed4 _ShadowColor;

The vertex shader is the same as in the typical Texture shader created in the Foundations book, but we need some changes in the fragment shader.

The effect we want to achieve is a colorization that creates a gradient between the LightColor at the top and the ShadowColor at the bottom. For this, we’ll multiply the original image by a linear interpolation of these two colors. If you’re not familiar with linear interpolation, please refer to Appendix I of the Foundations book.

As the weight for the linear interpolation we use the y value of the UV. That way we get the ShadowColor at the bottom (when y is closer to 0) and the LightColor at the top (when y is closer to 1).

fixed4 frag (v2f i) : SV_Target
{
fixed4 col = tex2D(_MainTex, i.uv);
return col * lerp(_ShadowColor, _LightColor, i.uv.y);
}

When we apply this shader to an object, we should see something like this:
A note on Culling

In 3D graphics, there is an optimization technique called face culling. The idea is that faces that are not visible, because they face backward (or forward, depending on the setting), are skipped when rendering. This is a pretty common technique used in every 3D engine. When working in 2D it has an important implication: if we flip our sprites by multiplying their scale by -1, we’ll be rendering the quad backward, and if face culling is on, our sprite will simply not render. Because of this, we want to turn face culling off in our sprite shaders. You can do this using the Cull command inside a Pass block:

Pass
{
Cull Off

Full code for the shader

Shader "2D Shaders/TextureWithAmbientLight"


{
Properties
{
_MainTex ( "Main Texture", 2D ) = "white" {}
_LightColor ( "Light Color", Color ) = (1,1,1,1)
_ShadowColor ( "Shadow Color", Color ) = (0,0,0,1)
}

SubShader
{
Blend SrcAlpha OneMinusSrcAlpha
Pass
{
Cull Off
ZWrite Off

CGPROGRAM
#pragma vertex vert
#pragma fragment frag

struct appdata
{
float4 position : POSITION;
float2 uv : TEXCOORD0;
};

struct v2f
{
float4 position : SV_POSITION;
float2 uv : TEXCOORD0;
};

sampler2D _MainTex;
float4 _LightColor;
float4 _ShadowColor;
float _Threshold;
float _Smoothness;
float _Angle;

v2f vert (appdata v)


{
v2f o;
o.position = UnityObjectToClipPos(v.position);
o.uv = v.uv;
return o;
}

fixed4 frag (v2f i) : SV_Target


{
fixed4 col = tex2D(_MainTex, i.uv);
return col * lerp(_ShadowColor, _LightColor, i.uv.y);
}
ENDCG
}
}
}

This is the base of all the work we’ll do in this chapter, so let’s
see how we can make this more sophisticated.

Ambient Lights

So, we’re now ready to start working on our simplistic lighting model. First, we’ll analyze how we can create ambient lights. This type of light is the simplest of the three we’ll see in this chapter, and the other two are just upgrades of the same technique.

The goal of this section is to create a prefab that acts as a trigger volume: when the player (or other objects!) moves into it, it tints them with a gradient. We’ll use the shader we created in the previous section in combination with a C# script.

Using the TextureWithAmbientLight shader in our character


Let’s create a script called SpriteAmbientLight.cs. This script will set up a material that applies the TextureWithAmbientLight shader to our sprite.

We want to be able to set the Light and Shadow colors in the Editor, so we’ll create two public variables, LightColor and ShadowColor.

public class SpriteAmbientLight : MonoBehaviour
{
public Color LightColor = Color.white;
public Color ShadowColor = Color.black;

These are the colors we’ll pass to the character’s shader. To do that, we need a handle on the sprite’s material, so we’ll add another (private) variable to hold it.

public class SpriteAmbientLight : MonoBehaviour
{
public Color LightColor = Color.white;
public Color ShadowColor = Color.black;
Material material;

Let’s now get that material during our object’s Start callback:

private void Start()
{
SpriteRenderer spriteRenderer = GetComponent<SpriteRenderer>();
material = spriteRenderer.material;
}

Now that we have our material, we’ll change the sprite’s colors when we encounter a light. We do this by defining Unity’s 2D trigger callback OnTriggerEnter2D.

private void OnTriggerEnter2D(Collider2D collision)
{
Lights.Ambient ambientLight =
collision.gameObject.GetComponent<Lights.Ambient>();

if(ambientLight != null)
{
LightColor = ambientLight.LightColor;
ShadowColor = ambientLight.ShadowColor;
}
}

What we do in this script is straightforward: if we trigger against an object that happens to be a Lights.Ambient, we store its colors.

Now, in our Update callback, we can just set them on the material:

private void Update()
{
material.SetColor("_LightColor", LightColor);
material.SetColor("_ShadowColor", ShadowColor);
}

And that should be it. Remember, for this to work your sprite needs a material with the TextureWithAmbientLight shader applied to it.

If we place two lights and move the character between them, the coloring should change. The colliders you see in the following image are each light’s colliders.

Combining this with static illumination over the whole area already gets you miles ahead of most games, which won’t do any processing of their sprites’ illumination.

You may notice I’m updating the colors every frame, and yes, there is no need for that. If you were going to stop adding features here, you could pretty much set the colors inside OnTriggerEnter2D instead of storing them and then applying them during Update. But we’re not stopping here, hell no! Just wait for it…
Controlling the gradient shape with _Threshold and _Smoothness

There is a small tweak to the shader that will give you some control over the gradient. When designing the gradient, there are two parameters you can change: where it starts and where it ends. We can define these using two properties: _Threshold (where it starts) and _Smoothness (how long it is).

First of all, we need to define both properties in the TextureWithAmbientLight shader.

_Threshold ("Threshold", Range(0,1)) = 0.3
_Smoothness ("Smoothness", Range(0,1)) = 0.3

As usual, you want to define the variables in Cg as well.

float4 _LightColor;
float4 _ShadowColor;
float _Threshold;
float _Smoothness;

v2f vert (appdata v)
{

Now, instead of using a plain linear interpolation (the lerp function), we’ll use another function that is a little more powerful: smoothstep. You will learn several uses for smoothstep in the Procedural Texture Manipulation book but, since it’s needed here, I’ll give a quick introduction.

Smoothstep is a function that receives three parameters: a lower threshold, an upper threshold, and a weight.

float result = smoothstep(low, high, weight);

If weight is below the lower threshold, smoothstep returns 0. If it’s above the upper threshold, it returns 1. If it is between the two thresholds, it returns a smooth (Hermite) interpolation between zero and one.
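In case it helps to see it spelled out, smoothstep is typically defined as a clamped Hermite curve. A hand-rolled Cg version (my own illustrative helper, not something you need to add to the shader) would look like this:

// Equivalent of Cg's built-in smoothstep: clamp the weight to [0,1]
// relative to the thresholds, then apply the 3t^2 - 2t^3 Hermite curve.
float my_smoothstep(float low, float high, float weight)
{
    float t = saturate((weight - low) / (high - low));
    return t * t * (3.0 - 2.0 * t);
}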

We could achieve the same effect with our previous lerp function by scaling the weight to match our thresholds with some math on our UVs, but since smoothstep exists, why not use it, right? What you want to do is replace the return line with this:

return col * lerp(_ShadowColor, _LightColor,
smoothstep(_Threshold, 1-_Smoothness, i.uv.y));

If we set _Threshold to, say, 0.5 and _Smoothness to 0.3, our gradient will run from 0.5 to 0.7, since the upper edge of the smoothstep is 1 - _Smoothness. Below 0.5 it will use _ShadowColor, above 0.7 it will use _LightColor, and in between it blends smoothly from one to the other.

You can play with different configurations of these parameters, and you can even extend the scripts to set them on a light-by-light basis if you want. We’ll use this dynamically soon, when implementing our Point Lights. But for now, let’s leave it there and move on to Directional Lights.

Exercise 2: Directional Lights

In this exercise, we’re going to create a new type of light: the Directional Light.

What we expect from this type of light is that we can set an angle on it and the gradient on the character will rotate.

Open up Exercise 2 - Directional Light and review TextureWithDirectionalLight.shader and SpriteDirectionalLight.cs.

If you pay attention, you’ll see that some of the work is already done for you. The angle gets passed to the shader in radians. This is done by the following line in Update:

material.SetFloat("_Angle", Mathf.Deg2Rad * Angle);

We multiply Angle by Mathf.Deg2Rad to convert from degrees to the radians we need for the rotation to work properly.
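If you want a head start (the Vector Rotation appendix derives this properly), rotating a 2D vector by an angle in radians only takes a couple of sin/cos terms. A small Cg helper of my own could look like this:

// Rotates a 2D vector counter-clockwise by 'angle' radians around the origin.
float2 rotate2D(float2 v, float angle)
{
    float s = sin(angle);
    float c = cos(angle);
    return float2(c * v.x - s * v.y, s * v.x + c * v.y);
}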

In this shader, you’ll have to rotate the gradient. Be sure to read the Vector Rotation appendix if you don’t know how to rotate vectors in 2D. The result should look something like this:
Point Lights

For the Point lights, we want to create a type of light that has a given range (in our case we’ll use a circle collider to define it) and a position. We’ll modify the smoothness and direction of the gradient using the point light’s position as a reference.

Using the TextureWithPointLight shader in our character


The base shader for the Point light is the same as the Directional light’s, with a smoothness factor added. We already covered smoothness for the Ambient lights, so check that section to see how it works. The only change you have to make to the TextureWithDirectionalLight shader is adding the _Smoothness code to it.

fixed4 frag (v2f i) : SV_Target
{
fixed4 col = tex2D(_MainTex, i.uv);
return col * lerp(_ShadowColor, _LightColor,
smoothstep(0, _Smoothness, i.direction.y)
);
}

Adding Point light logic

Create a script called SpritePointLight.cs and copy the code from SpriteDirectionalLight.cs into it. We’ll add a Range field and a private variable called currentLight where we’ll store the light we’re currently being affected by.

public class SpritePointLight : MonoBehaviour
{
public Color LightColor = Color.white;
public Color ShadowColor = Color.black;
public float Distance;
public float Range = 0;
public Vector3 Direction;
[Range(0, 360)]
public float Angle = 0;
Lights.Point currentLight;

Then, we want to set those fields in OnTriggerEnter2D, so they get set every time we enter a light. We also want to clear the current light in OnTriggerExit2D.
private void OnTriggerEnter2D(Collider2D collision)
{
currentLight = collision.gameObject.GetComponent<Lights.Point>();
LightColor = currentLight.LightColor;
ShadowColor = currentLight.ShadowColor;
Range = currentLight.Range;
}

private void OnTriggerExit2D(Collider2D collision)
{
currentLight = null;
}

With that, we have all the data we need from the game. Now
let’s see how we use it in the Update function.

private void Update()
{
if(currentLight != null)
{
material.SetColor("_LightColor", LightColor);
material.SetColor("_ShadowColor", ShadowColor);

// We calculate the angle between the direction and the y axis
Direction = transform.position - currentLight.transform.position;
Distance = Direction.magnitude;
Direction.Normalize();

Angle = Mathf.Acos(Vector3.Dot(Direction, Vector3.down));


Vector3 cross = Vector3.Cross(Direction, Vector3.down);
if (Vector3.Dot(Vector3.forward, cross) < 0)
Angle = -Angle;

material.SetFloat("_Smoothness", Distance / Range);


material.SetFloat("_Angle", Angle);

spriteRenderer.material = material;
}
}

First, we check whether we have a point light set using currentLight. If we do, we set the colors, as we did previously. We then use the sprite’s position and the currentLight’s position to calculate a direction vector for the light, and with that direction we can pass an angle to our shader to rotate the gradient, as we did for the Directional light.

Last, we define the smoothness to be the Distance divided by the Range of the light. If we’re close to the light (Distance much smaller than Range), this value will be close to 0, so the gradient will be thin; as we move away from the light, it approaches 1 (when Distance == Range) or more (when Distance > Range), so the gradient widens.

That should be it. If you set everything up correctly, you should now see that the light changes direction when the character moves, and that the smoothness increases or decreases proportionally to the distance from the light.

You could get more sophisticated with this, also moving the threshold and so on, but I think this is enough to illustrate the principle of how these lights work.

Now let’s see how to combine more than one light.

Interpolating between lights

The goal of this section is to add some code that allows transitioning between two lights, so that we get a smooth transition when we change environments.

To simplify things, we’re going to analyze interpolation only with Ambient lights, but you can use this with the other lights too; you’ll just have to interpolate more values. For this, we’ll make some modifications to the SpriteAmbientLight class.

First of all, we want to add a public variable called LightTransitionDuration. This is a float variable with a Range directive for the editor.
public class SpriteAmbientLight : MonoBehaviour
{
public Color LightColor = Color.white;
public Color ShadowColor = Color.black;
[Range(0,5)]
public float LightTransitionDuration = 1f;

Whenever we collide with a new light, we want to store the current colors in order to start the transition. For that, we add two private variables, previousLightColor and previousShadowColor.

Color previousLightColor;
Color previousShadowColor;

When we start the game, we want to set those to the default colors.

private void Start()
{
SpriteRenderer spriteRenderer = GetComponent<SpriteRenderer>();
material = spriteRenderer.material;
previousLightColor = LightColor;
previousShadowColor = ShadowColor;
}

On top of that, we also want a private variable to use as a timer, storing the current step of the transition every frame. We’ll call it lightTransitionTimer.

float lightTransitionTimer = 0;

When we collide with a light, we want to set lightTransitionTimer to the value we have in LightTransitionDuration. Whenever lightTransitionTimer is bigger than 0, we’ll do a step of the transition and subtract Time.deltaTime, until we reach 0 again.
private void OnTriggerEnter2D(Collider2D collision)
{
// ...
lightTransitionTimer = LightTransitionDuration;
}

Now, let’s analyze the Update callback.

private void Update()
{
if(lightTransitionTimer > 0)
{
lightTransitionTimer -= Time.deltaTime;

if(lightTransitionTimer < 0)
lightTransitionTimer = 0;

float weight = 1 - (lightTransitionTimer / LightTransitionDuration);

Color currentLight = Color.Lerp(previousLightColor, LightColor, weight);
material.SetColor("_LightColor", currentLight);
Color currentShadow = Color.Lerp(previousShadowColor, ShadowColor, weight);
material.SetColor("_ShadowColor", currentShadow);
}
}

If lightTransitionTimer is higher than zero, it means we collided with a light and we’re transitioning.

The first thing we do is update the timer and make sure it doesn’t go below 0.

Then, since we want to interpolate between the previous and current colors, we use Color.Lerp with both colors and a weight value that moves between 0 and 1. When weight is 0, the linear interpolation returns the previous color; when it is 1, it returns the new color.

When we start the transition after colliding with a light, we know that lightTransitionTimer is equal to LightTransitionDuration, so the division between them is 1. When lightTransitionTimer is 0 (at the end of the transition), the division is 0 too. In order to return the correct values, we need these values the other way around, and subtracting the division result from 1 is enough. You could also invert the order in which the colors are passed to the Color.Lerp method; it’s up to you, the only difference is one subtraction operation, nothing to worry about.
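As a quick numeric check (my own numbers): with LightTransitionDuration = 2 and lightTransitionTimer = 0.5, the division gives 0.25 and weight = 1 - 0.25 = 0.75, so the colors are already three quarters of the way to the new light.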

You can now duplicate a light, change its values, and move it. Then hit play and move the character around. You’ll see how the color is interpolated as you move between the lights’ colliders.

Conclusion

In this chapter, we implemented a technique based on Oliver Franzke’s post about how dynamic ambient lights are implemented in Double Fine’s Broken Age. We extended the technique to include two more types of lights, Directional and Point, each with its own definition and logic. We also showed how we can interpolate between lights to create ambient light transitions in the environment. You could use this to let the player smoothly move between an illuminated and a dark area, for example.

I hope you enjoyed this chapter and that you’re already making some cool scene ambient lights in your own game. Now we’ll dive into a more subtle but really useful effect: Rim Lights.
ScreenSpace Automatic Rim Lights
In this chapter, we’re going to implement another technique inspired by Oliver Franzke’s article on dynamic lights. It’s a subtle effect, but it includes quite a few intermediate techniques that are crucial to learn when working on shader programming.

We’re going to create a slight light on the borders of our characters. These lights are usually called Rim Lights, a term that photographers and filmmakers use for lights that come from behind or from the sides, used to detach a subject from the background.

We’ll learn how to render the screen to a texture with a special shader that only renders the alpha of the characters on screen. We’ll also create a normal map from that texture, and then use that normal map combined with a simple lighting equation to create the rim lights. Sounds complicated? It’s not that hard, just a lot of new topics to cover.

One more thing before starting. I usually try to give examples that are as production-ready as I can, but this chapter is not necessarily the case. Creating a fully-fledged, optimized, and integrated version of this effect would take a bit more work than I’m comfortable writing here. You may find places in this chapter where you think it would be more optimal to do something somewhere else or in some other way. Yes, there are things that can be optimized, but I’m approaching this chapter as a communication act with a person, not a computer; because of that, I’m optimizing for human comprehension and not computing speed.

This chapter includes topics that, if you’re not already familiar with them (off-screen rendering, normal mapping, lighting models), can get a little challenging. I didn’t want to make this worse with cryptic code that is hard to follow. I hope you agree with this approach, and I invite you to jump into the forums with your ideas on how to extend or optimize this technique; I’d love to analyze them with you.

That said, let’s go.

Rendering Alpha Only

The first step in our journey is to render the Alpha channel of the characters (and of any other objects you want your rim lights to appear on). We’ll use this to create a Normal Map later.

Scene Setup
Since our goal is to render the alpha channel of some objects in the scene (and not each and every one of them), we’ll need to do a selective rendering of those objects. This can be achieved by limiting which Layers the camera renders.

Let’s select our character and click on Layer, then Add Layer. In one of the available User Layers, write RimLights. Then click on the character again, go to Layer, and select RimLights.

Since we only want to render the Alpha channel of these objects, we need to cull all the objects that are not in this layer.

Go to the Main Camera, click on Culling Mask, and select RimLights.

With this last step, you effectively prevent every object that is not in the RimLights layer from rendering on this camera. You can see this by hitting play or by checking the camera preview.

In the following image, the character on the right is in the RimLights layer and the one on the left is not, so the latter is effectively removed from the camera render.
Rendering to a RenderTexture

I guess we can agree that we don’t want the player to see all this alpha/normal generation on screen. Because of that, we’re going to use a technique that redirects what we render to a texture instead of the screen. Under the hood, we’re creating a new buffer and writing to it instead of the frame buffer.

In Unity, we don’t have to deal with that directly, since it’s already abstracted by the RenderTexture class. If you’re not using Unity, you may need to do some work to get this going; if you’re working with OpenGL, search for glFramebufferTexture and learn how to use it.

Let’s create a new script called ScreenSpaceRimLights.cs and add it to the camera. In it, we’re going to add a public RenderTexture variable and set it in Awake. We’ll need to get hold of the current camera to set the width and height of the image.

new Camera camera;
public RenderTexture Alpha;

private void Awake()
{
camera = GetComponent<Camera>();
Alpha = new RenderTexture(camera.pixelWidth, camera.pixelHeight, 24);
camera.targetTexture = Alpha;
}

A few notes on this script:

* The camera variable requires the new modifier before the type, because Unity already defines a camera property in MonoBehaviour and we want to hide it with our own.
* We’re setting the width and height of this texture to the size of the camera in Awake. This means it may break if the window is resized at runtime. Keep this in mind if you’re going to use this technique in your game, and in general whenever you write screen-space effects that require off-screen rendering of the whole screen (see the sketch after this list).
* In the last line of the Awake callback, we’re setting the new RenderTexture as the target texture for the camera. This means that whatever this camera renders will not go to the screen, but to this texture instead. If you reset camera.targetTexture to null, you’ll render to the screen again.
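If you do need to survive window resizes, one possible approach (a sketch of my own, not code from the book) is to recreate the RenderTexture whenever the camera's size stops matching it:

// Sketch: recreate the target texture if the camera size changed at runtime.
private void Update()
{
    if (Alpha.width != camera.pixelWidth || Alpha.height != camera.pixelHeight)
    {
        camera.targetTexture = null;
        Alpha.Release();
        Alpha = new RenderTexture(camera.pixelWidth, camera.pixelHeight, 24);
        camera.targetTexture = Alpha;
    }
}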

You can check the effect of the script by hitting play now: click on the camera, look for Alpha in the Inspector, and double-click it.

Rendering only the Alpha channel

Now we want to create a new shader that only renders the alpha channel. I expect you already know how to do this; if you’re lost on how to create such a shader, go back to 2D Shader Development Book 1: Foundations and review the Texture shader.

Duplicate the Texture shader, rename it to TextureWithAlphaOnly, and modify the fragment shader so that it writes the alpha value into all the color channels.
fixed4 frag (v2f i) : SV_Target
{
fixed col = tex2D(_MainTex, i.uv).a;
return fixed4(col,col,col,1);
}

We know that we want to render all the objects through this shader. You may think that what we need to do is have a material that uses it, replace the materials of all these objects, do the rendering, and then go back to the original materials. That is a lot of work, and it is not the way to go.

Instead, we’re going to use a useful feature of the Camera class called a Replacement Shader. When the Replacement Shader is set, the camera uses that shader to render every object it draws. This is extremely useful in our case, where we don’t want to render the objects with their different materials, but with a single shader for all of them. In the ScreenSpaceRimLights class, add a public Shader member and assign the new shader to it.

public Shader TextureWithAlphaOnly;

Then, we need to add the following line to the Awake method.

camera.SetReplacementShader(TextureWithAlphaOnly, null);

Here we effectively set the Replacement Shader to TextureWithAlphaOnly. If you now check the RenderTexture that gets generated, you’ll notice that we rendered the silhouette of our character.
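For reference, with this line added, the Awake method we’ve built so far looks like this (just the snippets from above put together):

private void Awake()
{
    camera = GetComponent<Camera>();
    Alpha = new RenderTexture(camera.pixelWidth, camera.pixelHeight, 24);
    camera.targetTexture = Alpha;
    camera.SetReplacementShader(TextureWithAlphaOnly, null);
}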
That’s it, we now have what we needed. Let’s keep going and
see how to obtain the Normals.

Calculating Normals

The next step is creating a Normal Map using the alpha-only texture we just created. If you have no idea what a Normal Map is, please refer to the Normal Mapping appendix of this book now. This section assumes you know what a Normal Map is and how it works.

In the previous section, we ended up with a grayscale image that contains the silhouettes of the sprites we want to add rim lights to. We’ll blur that image to create a rounded shape at the borders of these sprites, and then use the blurred image as a bump map to generate our Normal Map, by analyzing the difference in height between neighboring pixels (mathematically, we’ll be calculating a gradient in x and y).
Blurring the image

The first step, then, is to blur the image. For this, we’re going to convolve the image with a Gaussian kernel. Refer to the Convolution appendix to help you understand this process if you’re not familiar with convolution yet. Convolution is also analyzed in depth in the Procedural Texture Manipulation book of the series.

Let’s duplicate the Texture shader now, call it Blur, and remove the code inside the fragment shader.

We know we want to convolve the image with a Gaussian kernel. We can do two things here: multiply by a 9x9 kernel in a single shader pass, or do two passes with a 9x1 kernel, one vertical and one horizontal.

We’ll go with the second option, since it’s the most common. For this, we’ll use the Pass command from ShaderLab twice, duplicating the shader code, once for the vertical blur and once for the horizontal one. Take a look at the finished shader below to see this.

For the convolution, we need to access texels that are close to the texel corresponding to the current UV. Whenever you have a texture property, Unity provides a built-in float4 variable that you can declare, called [TextureName]_TexelSize; in this case we’ll use _MainTex_TexelSize. It gives you the texel size in the same scale as the UVs (from 0 to 1, instead of 0 to the texture width or height). This is pretty handy here, where we want to grab the nearby texels.

What we’ll do is offset the UV by a multiple of _MainTex_TexelSize. For example, in the vertical pass we’ll do something like:

tex2D( _MainTex, i.uv + float2(0, _MainTex_TexelSize.y * offset) )

with offset being the distance, in texels, to the pixel (above or below) that we want to sample. Now let’s take a look at the convolution code.

fixed4 frag (v2f i) : SV_Target
{
#define sample_and_weight(weight, offset) \
tex2D(_MainTex, i.uv + float2(0, _MainTex_TexelSize.y * offset)) * weight

fixed4 sum = fixed4(0,0,0,0);

sum += sample_and_weight(0.000229, -4.0);
sum += sample_and_weight(0.005977, -3.0);
sum += sample_and_weight(0.060598, -2.0);
sum += sample_and_weight(0.241732, -1.0);
sum += sample_and_weight(0.382928, 0.0);
sum += sample_and_weight(0.241732, +1.0);
sum += sample_and_weight(0.060598, +2.0);
sum += sample_and_weight(0.005977, +3.0);
sum += sample_and_weight(0.000229, +4.0);

return sum;
}

We’re using #define to create a macro that inlines the sampling and weighting, to avoid repeating a lot of code. You could use a function as well, but this is more succinct for me.

As mentioned previously, we sample the texture with tex2D, offsetting the UV by a multiple of _MainTex_TexelSize. Then we multiply each sample by its weight and sum all those weighted samples. The weight values are the coefficients of a 9x1 Gaussian convolution kernel; you can use a smaller or a bigger one if you want.

For the horizontal pass, you only need to move the offset to the x component and use _MainTex_TexelSize.x.
fixed4 frag (v2f i) : SV_Target
{
#define sample_and_weight(weight, offset) \
tex2D(_MainTex, i.uv + float2(_MainTex_TexelSize.x * offset, 0)) * weight

fixed4 sum = fixed4(0,0,0,0);

sum += sample_and_weight(0.000229, -4.0);
sum += sample_and_weight(0.005977, -3.0);
sum += sample_and_weight(0.060598, -2.0);
sum += sample_and_weight(0.241732, -1.0);
sum += sample_and_weight(0.382928, 0.0);
sum += sample_and_weight(0.241732, +1.0);
sum += sample_and_weight(0.060598, +2.0);
sum += sample_and_weight(0.005977, +3.0);
sum += sample_and_weight(0.000229, +4.0);

return sum;
}

Now we have the shader ready. Create a material called Blur and assign that shader to it.

We’ll now add some logic to apply the Blur shader to the whole screen. The specifics of how to work with full-screen effects are covered in the Full-Screen Effects book, but we’ll discuss the bare minimum here so that you understand the technique.

Unity provides us with a callback called OnRenderImage that is called after a camera has finished rendering the scene.

private void OnRenderImage(RenderTexture source, RenderTexture destination)
{
}

The method receives a source texture (the texture as rendered by the camera) and a destination texture (the texture where we have to place the result of our processing). Go ahead, add that method to ScreenSpaceRimLights.cs, and see what happens.

Since we’re not copying the source texture into the destination texture, we broke the expected flow of the data and the framebuffer ends up empty, so we see a black screen.

Unity also provides a method called Blit in the Graphics class, which we can use to write the source texture into the destination texture.

private void OnRenderImage(RenderTexture source, RenderTexture destination)
{
Graphics.Blit(source, destination);
}

Now let’s hit play again, and the image should be back.

You may now be asking yourself: what the heck does this have to do with blurring? Good question. Blit has more than one set of parameters we can pass to it, and one of those sets includes a material. This is where everything changes.

Add a public Material variable called Blur to the script and assign the Blur material we created to it.

public Material Blur;

Now pass that variable as the last parameter to the Blit method.

private void OnRenderImage(RenderTexture source, RenderTexture destination)
{
Graphics.Blit(source, destination, Blur);
}
You’ll notice the image now looks blurred, but it’s not quite working well. The reason is that, by letting Unity decide which shader Pass to use, we’re only blurring in one direction.

We’ll need to run both passes for the blur to work correctly, so let’s do that. First of all, we’ll need a temporary RenderTexture to hold the result of the first pass before running the second pass.

RenderTexture tmp;

And add an initialization for this in Awake .

tmp = new RenderTexture(camera.pixelWidth, camera.pixelHeight, 24);

Now, we can apply both passes using another version of the Blit method.

private void OnRenderImage(RenderTexture source, RenderTexture destination)
{
Graphics.Blit(source, tmp, Blur, 0);
Graphics.Blit(tmp, destination, Blur, 1);
}

This is already looking better! We blurred the image successfully! But it doesn’t look like the blur did much.

If you’re familiar with the Blur effect, you’ll know that, when applying it in image-editing software, you usually have a variable called Blur Passes or something similar. This is the number of times the blur processing is applied.

So let’s expose a public int variable called BlurPasses and use it to loop through one or more blur passes.

[Range(1,5)]
public int BlurPasses = 3;

I added a range to it so it’s easier to handle in the Editor. I think that beyond 5 passes the effect stops being worth the amount of processing required, but go ahead, change the maximum and play around with it.

We’ll also need another temporary render texture.

RenderTexture tmp;
RenderTexture tmp2;

And initialize it.

tmp2 = new RenderTexture(camera.pixelWidth, camera.pixelHeight, 24);

Now, we can finally write the looped version of the method.


private void OnRenderImage(RenderTexture source, RenderTexture destination)
{
Graphics.Blit(source, tmp);

for (int i = 0; i < BlurPasses; ++i)
{
Graphics.Blit(tmp, tmp2, Blur, 0);
Graphics.Blit(tmp2, tmp, Blur, 1);
}

Graphics.Blit(tmp, destination);
}

You can see that we're blitting the source image into tmp and
then applying several blur passes. Finally, we blit the blurred
image into the destination buffer.

Well… This is an indirect level up for you. Now you know how to
blur an image! Congrats.

Generating the Normal map


The next step in this process is actually getting a normal map
from the grayscale image.

We're going to create a new shader called ToNormal (this could
be added as another pass in the previous shader, but for the
sake of clarity I'm keeping things separated) and copy the
Texture shader in it…

If you're not familiar with Normal Mapping and you haven't read
the appendix on the topic yet, it could be a good time to do so,
since I explain normal mapping in depth in that appendix. I'll just
say the bare minimum here.

If we interpret the blurred image as a bump map, we can
generate a normal for the surface by sampling neighbor texels
and calculating the difference in height. By doing that
horizontally and vertically we get the deviation we need to apply
to the normal. So let's see how the fragment shader should look
for this:

fixed4 frag (v2f i) : SV_Target
{
float x0 = tex2D(_MainTex,
i.uv - float2(_MainTex_TexelSize.x, 0)).r;

float x1 = tex2D(_MainTex,
i.uv + float2(_MainTex_TexelSize.x, 0)).r;

float y0 = tex2D(_MainTex,
i.uv - float2(0, _MainTex_TexelSize.y)).r;

float y1 = tex2D(_MainTex,
i.uv + float2(0, _MainTex_TexelSize.y)).r;

float dx = x0 - x1;
float dy = y0 - y1;

return fixed4(0.5 - dx, 0.5 - dy, 1, 1);
}
In the first 4 lines, we sample the horizontal and vertical
neighbors respectively, by adding an offset to i.uv in the right
direction. We use _MainTex_TexelSize as we did when blurring, to be
able to move exactly one texel in a given direction.

Since this image is grayscale, r , g and b are the same for any
given texel. Because of that, we can just read the r channel in
these samplings.

Now we have the height of the bump at 4 points, 2 for x and 2
for y . With those we then calculate the difference in height in
each coordinate, giving us dx and dy . And those are the values
we need to encode our deviation for the normal.

We then return the color that represents that deviation, by
moving dx in the x component and dy in the y component.

As explained in the appendix, a normal pointing towards the
viewer is encoded by (0.5, 0.5, 1, 1) . And we are deviating that
by subtracting dx and dy .

Now, let’s test this thing by running our Blit function through
it. Create a new material called ToNormal and assign the ToNormal
shader to it.

Then, expose a public variable ToNormal in the
ScreenSpaceRimLights script and assign the material to it.

public Material ToNormal;

Next, we'll add the material to the final Blit command in
OnRenderImage .

private void OnRenderImage(RenderTexture source, RenderTexture destination)
{
Graphics.Blit(source, tmp);
for (int i = 0; i < BlurPasses; ++i)
{
Graphics.Blit(tmp, tmp2, Blur, 0);
Graphics.Blit(tmp2, tmp, Blur, 1);
}

Graphics.Blit(tmp, destination, ToNormal);
}

And that’s it. We got our normals ready to be used!

Exercise 3: Lighting

The result of the previous section was a normal map that we
can now use to create a diffuse illumination component on top
of our sprites.

Please review the Lambertian Diffuse Shading appendix if you
don't know about the Lambertian Cosine Law and would like to
understand how this works. Also, if you're not familiar with
Model - View - Projection coordinate systems, read the
Coordinate Systems appendix.

I did most of the work for you in terms of passing the data to
the shader. Check the setup for the Pass.Lights pass.

CurrentPass = Pass.Lights;
camera.SetReplacementShader(RimLights, null);

Vector3 lightPos = PointLight.transform.position;


tmpVector.Set(lightPos.x, lightPos.y, lightPos.z, 0);

Shader.SetGlobalVector("_LightPosition", tmpVector);
Shader.SetGlobalVector("_LightColor", PointLight.LightColor);
Shader.SetGlobalTexture("_Normals", Normals);

camera.targetTexture = null;

Here we use a new replacement shader, RimLights , that we'll
code to create a render texture that looks like this:

For the intensity of the light you want to use the following
formula:
NdotL = saturate(dot(normalize(l), normalize(n)))

There are three new functions here:

normalize receives a vector and returns the vector normalized.

dot(a,b) is the dot product between a and b .

And finally, saturate(a) returns a clamped to the range [0..1] .
If a < 0 then saturate(a) == 0 , if a > 1 then saturate(a) == 1 , and if 0 <=
a <= 1 then saturate(a) == a .

I added some comments in the shader to guide you a bit.

When you have that working, I'll ask you to create a new
shader that will add the light to the rendered scene. This can be
done in several ways; I'll just show one of them in the exercise
solution.

This exercise may take some time since it has quite a few
concepts in it. Don't feel bad about taking the time needed to
fully understand what's going on, and be sure to drop a line on
Discord if you're totally lost!

The final result should look something like this:


Notice that we’re mixing the rim lights with the technique
created in the previous chapter for a nice looking effect.

Conclusion

That was an amazing journey. We started from a simple sprite
and ended up with a fully rim-lit scene. By assuming that
objects have a little rounded edge at the borders (which is
somewhat the case for most characters) we were able to
simulate that without the need for any additional art, by
rendering only the silhouette of the character, blurring it (to
create a soft rounded edge), then using those soft edges as a
height map to create a normal map, and finally adding the
diffuse Lambertian light component to the character borders.

This is an effective technique that creates depth in our
characters without much artistic effort. To take this to the next
step, we'd need some artist intervention. We can calculate a
normal map for a sprite from an illumination profile created by
an artist. We'll see that in the next chapter.
Dynamic Lights using Normal Mapping
Even if the rim lights effect is a nice and cheap solution for
adding some dynamic lighting to our scenes, its asset-agnostic
nature makes it limited if we want to achieve great quality 2D
lighting. So, now we’re going to analyze a technique that
requires an artist to be involved to get it right, but the results are
outstanding.

This chapter is pretty simple conceptually, but it requires some
math background to fully get it. So, if you're a bit behind on the
linear algebra front, I suggest you refresh the dot product a little.

We're also going to get rid of our custom lights and make use
of Unity's existing lights. This will make our sprites suitable for
use in a 3D context as well and let them work with other Unity
features.

Asset Requirement

As mentioned before, this technique requires an artist to create
some specific assets for us.

We're going to need the regular sprite, and 4 images that
represent the illumination of that sprite as if it were illuminated
by lights from the four directions: top, right, bottom, left. Let's
see an example.
Once we have these four images, we can use them to
generate a normal map. Let’s see how.

Painting these images is not something artists are used to, so
be sure to help them figure out how to do it. The approach I use,
and that has always worked, is asking them to set up a black layer
with Additive blend mode (called Linear Dodge in Photoshop)
and paint the lights on top of the sprite as if they were creating
the illumination for the asset. This is much closer to the way they
usually work when illustrating, and the final result will look great
(if you have great artists, which you should!).

Math derivation

Our goal is to generate a normal map, given that we have 4
images that represent the illumination of our sprite as if it were
illuminated from the four directions.

Let’s choose one image to do our analysis, the one that has a
light coming from the top.

We know that each pixel of this image represents the intensity
of the light on the surface of our sprite. We also know that this
intensity is calculated as the dot product of the normal and a
vector pointing towards our light, limited between 0 and 1.

If we consider N = (n_x, n_y, n_z) and L = (l_x, l_y, l_z) , by
expanding the dot product we get the component-wise version
of the intensity:

I = saturate(N . L) = saturate(n_x * l_x + n_y * l_y + n_z * l_z)

Now we also know we have a light coming from the top, so our
light is defined as:

L = (0, 1, 0)

(remember that our L vector points towards the light, that's
why it has 1 in the y axis and 0 in the rest).

Now, replacing the light vector, we get that:

I = n_x * 0 + n_y * 1 + n_z * 0 = n_y

When we apply the saturation to the dot product we get:

I_top = saturate(n_y)

This means that, when n_y is between 0 and 1,

I_top = n_y

and, if you read it backward,

n_y = I_top

But I is what we have stored in the texture with the top light!

This means that the information we have encoded in the
image with the top light is the y component of the normal,
when that component is greater than 0 and lesser than 1 .

The same analysis can be done with the bottom light:

L = (0, -1, 0)

From that we get that

I = n_x * 0 + n_y * (-1) + n_z * 0 = -n_y

And when we saturate it

I_bottom = saturate(-n_y)

So, when -1 < n_y < 0,

I_bottom = -n_y, which read backward gives n_y = -I_bottom

This means that the information we have encoded in the
bottom light image is -n_y (the magnitude of n_y ) when n_y is
lesser than 0 but greater than -1 .

Then, when n_y is positive, we can use the result of the top
image, and when n_y is negative we can use the bottom image.
Because of this we can conclude that

n_y = I_top - I_bottom

The same derivation can be done for right and left to conclude
that

n_x = I_right - I_left

Now, we're only missing n_z to get our normal.

But we know that, because the normal has norm one, from
the dot product property

N . N = 1

Then,

n_x^2 + n_y^2 + n_z^2 = 1

If we rearrange this to isolate n_z

n_z^2 = 1 - n_x^2 - n_y^2

and, by taking the square root, we get

n_z = ±sqrt(1 - n_x^2 - n_y^2)

but we can safely ignore the negative root so that

n_z = sqrt(1 - n_x^2 - n_y^2)

And that's it, we calculated our normal. To summarize:

N = (I_right - I_left, I_top - I_bottom, sqrt(1 - n_x^2 - n_y^2))

Now, let’s visualize this normal map in a shader.

Generating the Normal Map in a Shader

Let's take a look at the shader code necessary to create this
normal map on the screen. We're not going to use this in the
end, because we want to be able to create these normals during
development, so we're not calculating the normals again and
again at runtime.

First, make a copy of the Texture shader with alpha blending,
call it NormalMapFromLightSources.shader . Then create a new material
called SpriteWithNormalMapFromLightSources and assign the
NormalMapFromLightSources shader to it.

Now, in the shader we’ll add a property for each direction.

Properties
{
_MainTex ("Texture", 2D) = "white" {}
_Top ("Top Light", 2D) = "white" {}
_Right ("Right Light", 2D) = "white" {}
_Bottom ("Bottom Light", 2D) = "white" {}
_Left ("Left Light", 2D) = "white" {}
}
Then, we’ll add the required variables inside the Cg code.

sampler2D _MainTex;
sampler2D _Top;
sampler2D _Right;
sampler2D _Bottom;
sampler2D _Left;

And modify the fragment shader to calculate the normal and
render it.

fixed4 frag (v2f i) : SV_Target


{
// sample the texture
fixed t = tex2D(_Top, i.uv).r;
fixed b = tex2D(_Bottom, i.uv).r;
fixed l = tex2D(_Left, i.uv).r;
fixed r = tex2D(_Right, i.uv).r;
fixed3 n = fixed3(r - l, t - b, 0);
n.z = sqrt(1 - n.x*n.x - n.y*n.y);
n = (n + 1) / 2.0;
return fixed4(n, 1);
}

If you followed the previous section successfully this code
shouldn't be a mystery. We sample the top, bottom, left, and
right intensities, then we calculate n with the formulae we found
in the previous section.

Remember that since our normal map is going to be stored in
RGBA format, the range for each component should be [0..1] ,
but the normal's component range is [-1..1] . Because of that,
we're making n fit the [0..1] range by adding one and dividing
by two.

The final line outputs the normal color in full opacity.


Exercise 4: Normal Map Generation

Congratulations on making it to the last exercise in the book!

In this exercise, we're going to create an editor extension that
we'll use to create a texture encoding our normals, better known
as a Normal Map.

Then we'll use that normal map in combination with Unity's
Standard shader to fully dynamically illuminate our sprites.

The reason we're doing this as an editor extension is that at
runtime we don't need the directionally lit images, just the
normal map. Because of that, we can generate the normal map
offline and just add that to the build.

The way this script works is the following:

You'll have to name the directional light images by adding the
direction after a dash at the end, for example, character-top ,
character-bottom , character-left and character-right .

Once you have such images in Unity, select the four of them,
and then open the menu called 2D Shaders > Create Normals .

Go ahead and open Exercise 4: Normal Map and start playing with
it. The code you have to write should go in Editor/CreateNormal.cs .
Be sure to ask on Discord if you feel lost.

As a reference, the image should look like this one


Using Unity’s native lights

Up until now, we’ve been using our own lights, but now that
we have a regular normal map texture we can make use of
Unity’s lighting system.

Let's create a new material called SpriteWithNormal . We're going
to use Unity's Standard shader, with Rendering Mode set to
Cutout (to honor the alpha) and the normal map set to our
normal map texture.

Create a regular Sprite in the Scene window and change its
material to the material you just created.

Now you'll see that the sprite looks dark.

Go ahead and create a new light in the scene; a Point Light will
do. Move the light around and see how it affects your sprite.
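If you'd like to automate that test, here's a tiny optional helper (not part of the book's project files) that orbits a light around its starting position so you can watch the normal map react; attach it to the Point Light you just created.

using UnityEngine;

public class OrbitLight : MonoBehaviour
{
public float Radius = 2.0f; // radius of the circular path
public float Speed = 2.0f;  // angular speed in radians per second
Vector3 center;

void Start()
{
center = transform.position;
}

void Update()
{
float a = Time.time * Speed;
// Move the light in a circle on the XY plane around its original position.
transform.position = center + new Vector3(Mathf.Cos(a), Mathf.Sin(a), 0) * Radius;
}
}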

This is awesome: now you can use any type of light provided
by Unity to light your character or other objects in the scene.
I think this is the ultimate normal mapping technique and the
one that yields the best results. Even if it requires more work
from the artists, the amount of control over the effect and how
cool it looks outweighs its cost in my opinion.

Conclusion

In this chapter we've gone through the math behind deriving
a normal map from 4 images containing the illumination from
the four directions: top, bottom, left and right.

We did some tests in a shader to validate our algorithm and
then created a Unity extension to process the directional light
textures and create a normal map, to then be used in
conjunction with Unity's lights system.

There are other ways to generate normal maps, but I've found
that this is the most intuitive, since artists are already used to
thinking about light sources and their influence on their
illustrations. Thus, no artist should find it challenging to create
such images, and even if it's a good amount of work, the visual
results are incredible.
Where to go now?
Congratulations! You now have a solid foundation that will
enable you to go much deeper in your learning journey. Let's
figure out what you can do next.

Continue with the other books in the series

This book series is designed to provide an introduction to 2D
Shader Development. I assume you already read Foundations,
the first book in the series, so you can now continue learning by
checking out the two remaining books.

Procedural Texture Manipulation

In this book, you'll learn a few techniques that are used a lot in
computer graphics to manipulate textures with code. You'll
go from a simple sine wave movement to complex
combinations of textures animating other textures and crazy
stuff like that.

You'll also learn about noise: you'll use Perlin Noise to animate
sprites and create random-esque noise inside a shader.

Full-Screen Effects

This book is all about creating screenspace modifications.
Using the rendered screen as an input texture you can apply all
the stuff we learned in the series to the whole screen, and that's
what you'll do in this book. You'll figure out how Bloom works,
and you'll implement several effects like camera shake, retro-looking
filters like pixelation, and other useful things, applying some of
the theory behind DSP (Digital Signal Processing).

The internet

The second obvious option is to search the internet for
examples of existing techniques you would like to learn and read
articles about how you can implement them.

Reach out to other developers who have done things you are
excited about and ask them how they did it. This could be a
major source of learning material!

Books

I can't recommend any books that are specifically about 2D
(that's the reason why I'm writing this!!!), but if you think you're
ready to transfer the knowledge from 3D to 2D, be sure to check
an up-to-date list on the website for the book:
https://www.2dshaders.com/what-to-do-now.
Acknowledgements
This book is the result of a long journey that included a lot of
people, and I’ll do my best to include them here, but I may leave
some people out. Sorry if I did.

First of all, I want to thank my girlfriend and eternal partner
Aldi, who's always there for me no matter what crazy idea I have
plans for. Thanks for all the support, I love you so much. Thanks
to the rest of my family too, my mom Elena, my siblings and
their couples who also helped in different ways. Jorge and Gaby
for always being there for us too.

I want to thank everyone involved in the game development
community from Argentina, especially my great friends from
Nastycloud and Bigfoot Gaming.

Also David Roguin, Agustin Cordes from Senscape and Daniel
Benmergui, all the crew from ADVA who put countless amounts
of time in growing our local industry.

Thanks to my dear friends Michael de la Maza, Diane Hsiung
and Julian Nadel for being of immense help and support during
so many years.

Last but not least, all the amazing developers that helped
review the book in its early stages: Jacob Salverda
(http://www.salvadorastudios.com), Mauricio J. Perez
(http://www.randomgames.com.ar)

Thank you so much to all of you, I'm eternally grateful for your
time investment in making this book better!
Credits

The amazing cover design and Hidden People Club logo were
created by German Sanchez from Bigfoot Gaming. I can’t be
more grateful for having you on board, man.

The scarecrow character was created by the talented Aylen
Silva. You can find her stuff on instagram:
https://www.instagram.com/lenkruspe/ or behance:
https://www.behance.net/len_silva3828
Exercise 1 Solution
The exercise is asking us to create two static lights and two
static shadows with specific colors. In the source code, we can
see that we have a scene set up with four GameObjects, two for
the shadows and two for the lights, with their corresponding
image and materials already created. We can also find two
shaders, Light and Shadow in the Shaders folder. Our goal is to make
those two shaders work as expected, casting shadows or
illuminating with the given color.

Lights

Let's start by working on the lights. From the Static
Illumination chapter, we know that, in order to illuminate, we
need to make our shaders use Additive blending mode. So let's
begin by doing that. If you remember, what you'll need for this is
to set the Blend command in the shader.

Blend One One // Additive

And now, we have lights. Hit play and see the character
getting illuminated when it moves behind the lights. The next
step is to apply a color and intensity to this light.
Let's create Color and Intensity properties in the shader.

Properties
{
_MainTex ("Texture", 2D) = "white" {}
_Color ("Color", Color) = (0,0,0,1)
_Intensity ("Intensity", Range(0,1)) = 0.5
}
Now, let's add the corresponding variables in the Cg code.

fixed4 _Color;
float _Intensity;

In order to tint this light, we need to multiply the texel we're
returning in the fragment shader by the _Color parameter. Let's
do that.

fixed4 frag (v2f i) : SV_Target


{
fixed4 col = tex2D(_MainTex, i.uv);
return col * _Color;
}

Now, in order to try this, we’ll have to set the colors in the
corresponding materials. The WhiteLight should have a white
color, and the GreenLight should have a green color. Go ahead and
set those.
When you do that, you’ll see that the green light turns green.
Yay!
Now, if you hit play, you’ll see the character being illuminated
by the lights. Awesome.

Finally, let's add the intensity factor. The simplest way to do
this is to multiply the intensity by the light color. Since the
intensity goes from 0 to 1 , it will reduce the color intensity
(moving it toward zero) if it is less than 1 , or leave the color at full
intensity if it is 1 .

return col * (_Color * _Intensity);

Go ahead, hit play and change the intensity factor to see how
the light is affected by it.

Shadows

Now it’s time to get the shadows working too. For the
shadows to work we need to use a multiply blending mode in
the Shadow shader.

Blend DstColor Zero // Multiply

Now, as you can see, the shadows will darken the character
when it is behind them, but they don't look right.
The reason is that these images have a gradient that goes
from black at the edges to white in the middle. So the first step
is to make those images look as they're supposed to. For this,
we're going to invert them.

fixed4 col = 1 - tex2D(_MainTex, i.uv);

By subtracting it from 1 (white), we fixed the problem. We
have a shadow working.

Now we want to set the color and intensity. So let’s add the
necessary shader properties again

Properties
{
_MainTex ("Texture", 2D) = "white" {}
_Color ("Color", Color) = (0,0,0,1)
_Intensity ("Intensity", Range(0,1)) = 0.5
}
And let’s add the required variables in Cg.

fixed4 _Color;
float _Intensity;

If you follow the previous example, you'd then multiply by the
color.

fixed4 col = 1 - tex2D(_MainTex, i.uv) * _Color;

While that works fine in the previous case, look what you get
when making BlueShadow use a blue color in the material.

The reason is that we're inverting the color, so our blue
shadow doesn't look blue anymore.

Let's invert the color before multiplying so that when we invert
the whole shadow, we'll get the right color.
fixed4 col = 1 - tex2D(_MainTex, i.uv) * (1 - _Color);

Now, if you have the colors set correctly in both materials (blue
with blue, and black with black), you should see the shadows
correctly.

Let's add the intensity to the shadow as well by multiplying
the intensity by the inverted color.

fixed4 col = 1 - tex2D(_MainTex, i.uv) * ((1 - _Color) * _Intensity);

And that’s it, we have our shadows with coloring and intensity
ready.

Conclusion

With these two shaders, you can start adding static
illumination to your scenes. With only this effect you can go
miles when combining it with custom-drawn lights and shadows.
Be sure to include an artist working with this technique and
you'll get amazing results. You can even combine this with
particles! Go ahead and experiment.

I’d love to see the stuff you come up with, please share
screenshots of your game using these techniques in the Discord
channel.
Exercise 2 Solution
Now, we're going to add some more dynamism to our lighting
system with Directional Lights. The idea for these lights is that
they include an angle that is passed to the shader so that the
gradient gets rotated.

This is useful if we want to provide an ambient light that
comes from a specific direction instead of top to bottom. For
this, we're going to rotate our uv vector by the angle defined in
the light.

Even though the exercise gives you some code structure, I'll
explain the exercise from scratch, so that you also understand
why I added the new code I gave you.

Using the TextureWithDirectionalLight shader in our


character

We'll proceed the same way we did with the
TextureWithAmbientLight shader. We'll create a new script called
SpriteDirectionalLight.cs and add the same code we
used in SpriteAmbientLight.cs .

We'll also add an Angle field, using an attribute to represent it
as a Range from 0 to 360 .

public class SpriteDirectionalLight : MonoBehaviour


{
public Color LightColor = Color.white;
public Color ShadowColor = Color.black;
[Range(0,360)]
public float Angle = 0;
When we collide, we want to search for a Directional light now,
and set the required values, including the new Angle, grabbed
from the Lights.Directional object we just collided with.

private void OnTriggerEnter2D(Collider2D collision)
{
Lights.Directional directionalLight = collision.gameObject.GetComponent<Lights.Directional>();

if(directionalLight != null)
{
LightColor = directionalLight.LightColor;
ShadowColor = directionalLight.ShadowColor;
Angle = directionalLight.Angle;
}
}

And to finish the script, when we run our Update callback, we
update the _Angle property of the shader in addition to the
properties we were already updating.

private void Update()


{
material.SetColor("_LightColor", LightColor);
material.SetColor("_ShadowColor", ShadowColor);
material.SetFloat("_Angle", Angle * Mathf.PI / 180.0f); // Cast to Radians
spriteRenderer.material = material;
}

That’s it for the script. Now create the required material and a
shader called TextureWithDirectionalLight .

Adding an angle to the shader

The shader is pretty much the same as the
TextureWithAmbientLight shader, with small (but critical) differences.

In order to rotate the uv we'll have to add an angle to the
shader. We do this by using a Property.
Properties
{
_MainTex ( "Main Texture", 2D ) = "white" {}
_LightColor ( "Light Color", Color ) = (1,1,1,1)
_ShadowColor ( "Shadow Color", Color ) = (0,0,0,1)
_Angle ("Rotation Angle", Range(0,360)) = 0
}

And we also need to reflect that inside the CGPROGRAM .

sampler2D _MainTex;
float4 _LightColor;
float4 _ShadowColor;
float _Angle;

That's it, we now have the data we need to perform our
rotation.

2D Vector Rotation

As mentioned before, we want to rotate the uv vector. We
know that uv is a float2 vector, so we'll use 2D vector
rotation.

This is achieved by multiplying the vector by a 2x2 rotation
matrix. Take a look at the Vector Rotation Appendix to
learn more about the math behind this.
We’re going to perform the rotation in the vertex shader, so
that it is performed once per vertex and then it gets
interpolated, instead of doing it in the fragment shader, once
per fragment. This is slightly more efficient and yields the same
result.

To pass the value to the fragment shader, we want to add a
new field to our v2f struct. We'll call the field direction . (duh!)

Since we must provide a semantic for this field, we're going to
tell Unity that this is a second pair of UV values using the
TEXCOORD1 semantic.

struct v2f
{
float4 position : SV_POSITION;
float2 uv : TEXCOORD0;
float2 direction : TEXCOORD1;
};

To fill that value, we need to create the rotation matrix in the
vertex shader and multiply it with the uv .

v2f vert (appdata v)


{
v2f o;
o.position = UnityObjectToClipPos(v.position);
o.uv = v.uv;

float2x2 rot = {
cos(_Angle), -sin(_Angle),
sin(_Angle), cos(_Angle)
};
o.direction = mul(rot, v.uv);
return o;
}

Now, there is a slight issue here. If we were to use the rotation
matrix as is, the pivot for that rotation would be the system's
origin (0,0) .

But we want to rotate the UV using the uv space center as the
pivot.

Because of this, we have to subtract the uv center (0.5, 0.5) ,
then apply the rotation, and add it again. In this way, we
change the pivot of the rotation to be the center of our UV
bounds.

o.direction = mul(rot, v.uv - 0.5) + 0.5;

With these corrections, we're now rotating the uv with the
pivot at (0.5, 0.5) .
Now, in the fragment shader, instead of interpolating using
i.uv.y we’ll use i.direction.y .

fixed4 frag (v2f i) : SV_Target


{
fixed4 col = tex2D(_MainTex, i.uv);
return col * lerp(_ShadowColor, _LightColor, i.direction.y);
}

That’s it, now we can set an angle and change the direction in
which the light hits the character.
Exercise 3 Solution
We're now in the last step of this technique, the actual
lighting. We'll use the Lambertian diffuse light reflection model
in this example. If you're not familiar with this (which I assume
most readers won't be!), I added the appendix Lambertian
Diffuse Shading that discusses how diffuse lighting is usually
modeled in games, along with several references to read if you
want to learn more about that topic.

The RimLights Shader

Let's analyze the changes we need to make to our texture shader
to use our normal map to generate rim lights.

First of all, we’ll need to pass three variables to the shader, the
Normal Map, and the attributes for the light: its color and
position.

Properties
{
_MainTex ("Texture", 2D) = "white" {}
_Normals ("Texture", 2D) = "white" {}
_LightColor("LightColor", Color) = (0,0,0,0)
_LightPosition("LightPosition", Vector) = (0,0,0,0)
}

And we need to add the respective variables in Cg.

sampler2D _MainTex;
float4 _MainTex_TexelSize;
sampler2D _Normals;
float3 _LightPosition;
float4 _LightColor;
For the lighting calculations, we're going to need to pass the
world position of the vertex to the fragment shader and
interpolate it, so we're going to add that value to v2f . If you don't
do this and use position instead, you'll be using the clip-space
representation of the vertex, and your distance calculations
won't be correct. Refer to the Coordinate Systems appendix to
see why.

struct v2f
{
float2 uv : TEXCOORD0;
float4 position : SV_POSITION;
float3 worldPos: TEXCOORD1;
};

We'll use the TEXCOORD1 semantic for this. Don't overthink it: we
need to give a semantic so that the value can be passed to the
fragment shader, and since we're not using TEXCOORD1 (a semantic
used to pass a second set of uvs) we'll just use that.

Now, let’s see the vert function:

v2f vert (appdata v, out float4 outpos : SV_POSITION )


{
v2f o;
outpos = UnityObjectToClipPos(v.vertex);
o.worldPos = mul(unity_ObjectToWorld, v.vertex);
o.uv = v.uv;
return o;
}

The only difference between this and the Texture shader is
that we're adding the code to pass the position of the vertex in
world coordinates. If you are not familiar with Object, World,
View and Clip spaces, check the Coordinate Systems appendix.

What I'll say here is that we're multiplying Unity's
unity_ObjectToWorld matrix with the vertex position. The vertex is in
Object space (or Model space, too), so by multiplying it with the
ObjectToWorld matrix we're transforming the position to world
space and storing it in v2f to be used by the fragment shader to
calculate the lighting.

The frag method is where the magic happens, let’s take a look.

fixed4 frag (v2f i) : SV_Target


{
float3 l = _LightPosition - i.worldPos;
float3 n = tex2D(_Normals, i.position.xy / _ScreenParams.xy) * 2 - 1;
float nDotL = saturate(dot(normalize(l), normalize(n)));
float dist = length(l);
float4 col = _LightColor * nDotL / (dist * dist);
return col;
}

We're using a Lambertian illumination model to simulate
diffuse lighting.

First, we calculate the light direction by subtracting the
fragment's world position from _LightPosition , giving us a vector
that points towards the light.

We're getting the normal from the _Normals texture we created
before. Remember that the normal is encoded in the range
[0..1] but we need it in [-1..1] , so we're multiplying by two and
subtracting one to put it in range.

Remember that the geometric interpretation of the dot product
between two vectors is dot(a,b) = length(a) * length(b) * cos(θ) ,
where θ is the angle between a and b . We normalize l to make its
length equal to 1, and we also normalize the normal (even if it sounds
redundant), because we don't know whether the map sent to the shader
already has normals of length 1.

By doing that, when we calculate the dot product of both
vectors, both lengths end up being 1, and the effective
result is the cosine of the angle between the light and the
normal.

This cosine is going to be 0 when the light is perpendicular to
the normal and 1 when the light is parallel to it, and it will go
from 0 to 1 in between. Because of this, we can use it as an
attenuation factor when multiplying it with the light color.

So the final color is calculated by multiplying _LightColor
with nDotL (the cosine of the angle between n and l ), and then
dividing by the squared distance to the light. This also works as
an attenuation of the light color, but related to the distance to
the light instead of the direction of the light beam.

This is it for the shader, the result of this shader is a buffer that
has black where no shapes are present and colored lights in the
shape’s borders. We’re going to add this image to the rendered
screen next, so that we effectively have amazing lights, but first
we need to make some changes to our C# script.

Final touches to ScreenSpaceRimLights.cs

We just finished creating our shader. Now we need to grab the
Normal map created before, and use it to generate lighting on
top of our rendered scene, so let's do that.

We're going to use an enum to define a state machine for our
rendering. We'll have a first state that creates the normal map,
a second state that creates a texture with the lights, and a third
step to mix the lights with the rendered scene.

This could be done in fewer steps, but I decided to keep it this
way to illustrate the process.

public class ScreenSpaceRimLights : MonoBehaviour


{
enum Pass
{
Normals,
Lights,
Add
}

You also want to be sure that you have all the necessary
variables that we’ll use in this script

[Header("Normal Generation")]
public Shader TextureWithAlphaOnly;
public Material Blur;
[Range(1,5)]
public int BlurPasses = 3;
public Material ToNormal;

[Header("Rim Lights")]
public Shader RimLights;
public Lights.Point PointLight;

[Header("Blending")]
public Material Additive;
new Camera camera;
public RenderTexture Normals;
public RenderTexture Lights;
RenderTexture tmp;
RenderTexture tmp2;
Vector4 tmpVector = Vector4.zero;

Pass CurrentPass;

You should be familiar with most of these. We're introducing a
new variable for our RimLights shader, a point light reference, an
additive Material to blend the lights, a new RenderTexture for the
lights themselves, and a variable of type Pass to hold the current
rendering pass we're doing.

You should assign the public variables in Unity's editor with
their corresponding assets.

We also have to add the initialization for the Lights
RenderTexture in Awake .

Lights = new RenderTexture(camera.pixelWidth, camera.pixelHeight, 24);

Now we need to do some changes in the Update callback.

private void Update()


{
camera.SetReplacementShader(TextureWithAlphaOnly, null);
camera.targetTexture = Normals;
CurrentPass = Pass.Normals;
camera.Render();
camera.SetReplacementShader(RimLights, null);
Vector3 lightPos = PointLight.transform.position;
tmpVector.Set(lightPos.x, lightPos.y, lightPos.z, 0);
Shader.SetGlobalVector("_LightPosition", tmpVector);
Shader.SetGlobalVector("_LightColor", PointLight.LightColor);
Shader.SetGlobalTexture("_Normals", Normals);
camera.targetTexture = Lights;
CurrentPass = Pass.Lights;
camera.Render();
CurrentPass = Pass.Add;
camera.targetTexture = null;
camera.SetReplacementShader(null, null);
}

We're already familiar with the first line: we're setting
TextureWithAlphaOnly as the replacement shader for the whole
rendering, as we did in the Rendering Alpha Only section of this
chapter. We then set the target texture of the camera to be Normals .
This way, the result of rendering the normals is going to be
stored in that RenderTexture for us to use in a later stage.

After that, we set CurrentPass to be Pass.Normals . This will be used
in OnRenderImage to select the way we want to process the
rendered image.

Finally, we call camera.Render() to force the camera to render.
This will effectively do a rendering pass on the current camera,
and call OnRenderImage after finishing.
The final result of these four lines of code should be the
normals rendered in the Normals render texture.

After that, we use another replacement shader. In this case,
we're using RimLights, which will end up creating an image that
can be added to the regular render to include the lights.

We have to set all the properties required by RimLights to work,
so let's take a look at this block of code.

Since we're using a replacement shader, we don't have an
intermediate material to set these properties on, so we use the
Shader.SetGlobal* methods to pass this data. First of all, we capture
the light position in world coordinates and pass it to the shader
using Shader.SetGlobalVector . Since SetGlobalVector uses a Vector4 , we
add a zero to make the light position work with it (we'll just
ignore it in the shader). We also set the color for the light and
pass the Normals texture.

By setting the camera target texture to Lights this time, we're
rendering the result of the light calculations to another
RenderTexture, to be used in the next pass to add those lights to
the regularly rendered image.

We update CurrentPass to reflect that we're doing a lighting
pass and force another rendering. After all this, we end up
having the light profile in the Lights texture.

Finally, we're updating CurrentPass again to reflect that we're in
the Add pass. Since we want to render this final result to the
screen, we set camera.targetTexture to null , and also reset the
replacement shader.

And finally, let's take a look at how OnRenderImage should look.
private void OnRenderImage(RenderTexture source, RenderTexture destination)
{
switch (CurrentPass)
{
case Pass.Normals:
Graphics.Blit(source, tmp);
for (int i = 0; i < BlurPasses; i++)
{
Graphics.Blit(tmp, tmp2, Blur, 0);
Graphics.Blit(tmp2, tmp, Blur, 1);
}
Graphics.Blit(tmp, destination, ToNormal);
break;
case Pass.Lights:
Graphics.Blit(source, destination);
break;
case Pass.Add:
Graphics.Blit(source, tmp);
Graphics.Blit(Lights, tmp, Additive);
Graphics.Blit(tmp, destination);
break;
}
}

We need different behaviors for the different passes. That's
why we use a switch statement to select which behavior we want
to use in each pass.

When we're creating the Normals, we want to apply the blur
passes and then run the result through the shader that creates
the normals from the blurred silhouettes.

For the lights pass, we don't need anything specific; the
replacement shader is taking care of everything.

Finally, for the additive pass, we render the screen regularly to
the tmp RenderTexture , then we add the lights using the Additive
shader and render the result to the destination .

Again, yes, this could be done in fewer steps, but for the sake
of clarity, I decided to keep the steps separate. Go ahead and try to
optimize this if you'd like to. Jump into the forums to show the
results of your experiments; I'd love to see what you come up
with!

Well, that was quite a journey, if you hit play now, and get your
character close to a light, you’ll see how the rim lights get
created.

If you want to debug and better understand how this works, I
suggest you render each step to the screen instead of a render
texture; it's good for debugging and tweaking.
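One possible way to wire that up (the DebugView names here are illustrative, not part of the exercise files) is to add a toggle to ScreenSpaceRimLights and short-circuit OnRenderImage so an intermediate buffer is blitted straight to the screen:

public enum DebugView { Final, Normals, Lights }
public DebugView View = DebugView.Final;

// At the top of OnRenderImage, before the switch statement:
if (CurrentPass == Pass.Add && View == DebugView.Normals) { Graphics.Blit(Normals, destination); return; }
if (CurrentPass == Pass.Add && View == DebugView.Lights) { Graphics.Blit(Lights, destination); return; }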

Here’s a quick graphic summary of the process.


Exercise 4 Solution
Now that you've developed the intuition on how we generate the
normal maps from the 4 directional light images, we're going to
create an Editor extension that outputs the normals into a file.

If you try running the script right from the start you'll get an
error.

Since we're going to be dealing with the images' pixels, we
need to set the 4 directional images as Read/Write Enabled. For
that, we'll select all four of them in the Project tab, and in the
Inspector you're going to expand Advanced and check Read/Write
Enabled.
Next, we’re going to create a folder called Editor and create a
script called CreateNormal.cs in it. Open the script.

You'll want to add the UnityEditor namespace to it.

using UnityEditor;

And change the class to inherit from Editor instead of
MonoBehaviour .

public class CreateNormal : Editor


{

Now we're going to create a method that will allow us to select
the images with the directional lights in the project folder and
then create a normal map using them.

[MenuItem("2D Shaders/Create Normals")]


public static void CreateNormals()
{
}

This will create a menu in the Unity Editor that will call the
CreateNormals method.

Inside that method we'll grab the selected images in the
Project tab and assign each of them according to its name. We'll
expect the top image to end with top , the bottom image with
bottom , and so on.

Texture2D top = null;
Texture2D bottom = null;
Texture2D left = null;
Texture2D right = null;
foreach( Object obj in Selection.objects)
{
if (obj.name.IndexOf("top") > 0) { top = (Texture2D)obj; }
if (obj.name.IndexOf("bottom") > 0) { bottom = (Texture2D)obj; }
if (obj.name.IndexOf("left") > 0) { left = (Texture2D)obj; }
if (obj.name.IndexOf("right") > 0) { right = (Texture2D)obj; }
}

if (top == null || bottom == null || left == null || right == null )
{
Debug.LogError("Missing directional images.");
return;
}

Using Selection.objects we find the images whose names contain
top, bottom, left and right. Yes, this is potentially
problematic: for example, an image called right_handed_character-
top.png would match the right rule. But you're a clever
programmer and know how to deal with these things, and it's
also way out of the book's scope to solve these issues, so I'm
adding the most rudimentary possible version of this. You can
then improve it by creating an Editor Window or a stricter
matching rule like the sketch below.
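For example, one small illustrative improvement (not required by the exercise) is to match only the suffix after the last dash, so a name like right_handed_character-top is not mistaken for a right image:

// Returns true only when the text after the last dash matches the direction exactly.
static bool HasDirection(Object obj, string direction)
{
string name = obj.name;
int dash = name.LastIndexOf('-');
return dash >= 0 && name.Substring(dash + 1) == direction;
}

// Usage inside the foreach loop: if (HasDirection(obj, "top")) { top = (Texture2D)obj; }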

Just in case we forgot to select some images (or the images
are named incorrectly), we're throwing an error and returning if
any of the images are missing.

The next step is to create the normal itself.

Texture2D normals = new Texture2D(top.width, top.height);

for (int x = 0; x < top.width; ++x)
{
for (int y = 0; y < top.height; ++y)
{
float t = top.GetPixel(x, y).r;
float b = bottom.GetPixel(x, y).r;
float l = left.GetPixel(x, y).r;
float r = right.GetPixel(x, y).r;

Vector3 n = new Vector3(r - l, t - b, 0);
n.z = Mathf.Sqrt(1 - n.x * n.x - n.y * n.y);
n.x = (n.x + 1) * 0.5f;
n.y = (n.y + 1) * 0.5f;
n.z = (n.z + 1) * 0.5f;

normals.SetPixel(x, y, new Color(n.x, n.y, n.z, 1.0f));
}
}

In this code, we're creating a normal texture with the same
dimensions as the top image (we assume all images have the
same size!) and iterate through the width and height of the top
image too.

For each (x,y) pair, we grab the corresponding pixel from each
of the four textures and then create the normal vector as we did
in the shader.

Now it’s just a matter of saving the texture to a file.

string normal_file_name = top.name.Split('-')[0] + "-normals";


string path = System.IO.Path.GetDirectoryName(AssetDatabase.GetAssetPath(top));
string filename = Application.dataPath + ".." + path + "/" + normal_file_name + ".png";
System.IO.File.WriteAllBytes(filename, normals.EncodeToPNG());
Debug.Log("Created normal map at path: " + filename);

I'm not going to say much about this; I'm just calculating the
path where the images were selected (using the top image's
path) and then saving the normal map to that path.

The naming of the normals file assumes that you only use
dashes for the direction name; if you use dashes elsewhere in
the name of the file it will break (name files "my_great_asset-right.png"
instead of "my-great-asset-right.png").

Now select the directional light textures and go to the 2D
Shaders/Create Normals menu.
If everything went well, you should now have the normals in a
file inside the same folder as the directional light textures. You
may not see it in the Project tab because Unity doesn't refresh it
automatically. Go ahead and refresh it to see the normals
showing up.
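If you'd rather skip the manual refresh, a small optional addition at the end of CreateNormals is to ask Unity to re-scan the project after writing the file:

// Optional: make Unity pick up the newly written PNG without a manual refresh.
AssetDatabase.Refresh();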
Appendix I: Vector Rotation
In this appendix, we’re going to go through the basics of
vector rotation. Since we only want to rotate 2D vectors in this
book, we’ll only analyze that case. 3D vector rotation is a
somewhat more complex topic and there is abundant literature
out there, so we’re sticking to what we need here. First, we’ll take
a look at how to rotate a vector using the origin as the pivot for
the rotation.

Rotating with the origin as the pivot

Given that we have a vector v = (x,y) , we can define that
vector as x = r * cos(α) and y = r * sin(α) , where r is the norm of
the vector and α is the original angle.

We want to rotate the vector to a new position, by an angle β.

The angle of the resulting vector v_r is going to be α + β.

We can now use the trigonometric rule of the sums to expand
the cos and sin :

cos(α + β) = cos(α) * cos(β) - sin(α) * sin(β)
sin(α + β) = sin(α) * cos(β) + cos(α) * sin(β)

Now, if we apply the rule to v_r we get that

x_r = r * cos(α + β) = r * cos(α) * cos(β) - r * sin(α) * sin(β)
y_r = r * sin(α + β) = r * sin(α) * cos(β) + r * cos(α) * sin(β)

But we also know that r * cos(α) is x and r * sin(α) is y , so
replacing those we get that

x_r = x * cos(β) - y * sin(β)
y_r = x * sin(β) + y * cos(β)

Now, we can also express this vector using matrix
multiplication:

| x_r |   | cos(β)  -sin(β) |   | x |
| y_r | = | sin(β)   cos(β) | * | y |

And this matrix is what we use in our Directional Lights shader
to rotate the uv.
Rotating with an arbitrary pivot

As mentioned before, the rotation matrix uses the origin as
the rotation pivot. This may be problematic in some cases (like in
our Directional Lights shader).

The solution to this problem is one of those cases where you
take a problem you don't know how to solve and transform it
into one you already know the solution for.

The problem we know the solution for is rotating using the
origin as the pivot. So, in this case, what we have to do is
make our arbitrary pivot become the origin, and we
achieve that by subtracting it from our original vector,
performing the rotation, and then adding it back again.

In the shader, we have to use this because we want to rotate
our light using the center of the uv square as the pivot, not the
origin.
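Here is a small worked example of the same idea in C# (illustrative only; the shader does exactly this with the uv and the (0.5, 0.5) pivot):

using UnityEngine;

public static class Rotate2D
{
// Rotates v around an arbitrary pivot by angleRadians.
public static Vector2 AroundPivot(Vector2 v, Vector2 pivot, float angleRadians)
{
Vector2 p = v - pivot; // move the pivot to the origin
float c = Mathf.Cos(angleRadians);
float s = Mathf.Sin(angleRadians);
Vector2 rotated = new Vector2(c * p.x - s * p.y, s * p.x + c * p.y);
return rotated + pivot; // move back
}
}

// Example: rotating (1, 0.5) by 90 degrees around (0.5, 0.5) gives (0.5, 1).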

Rotation matrix in Cg

I wanted to leave the 2D rotation matrix handy for you for the
exercise. Here it is:

float2x2 rot = {
cos(_Angle), -sin(_Angle),
sin(_Angle), cos(_Angle)
};
Appendix II: Normal Mapping
In this appendix, we'll discuss what a normal vector is in the
context of computer graphics and why we need them.

We'll also discuss the Normal Mapping technique used to
store such vectors in textures to provide greater control over the
normals.

In geometry, we define a vector that is normal to a surface to
be perpendicular to that surface.

In the context of computer graphics, we use normals to
calculate light intensities. You'll learn more about this in the
appendix on Lambertian Diffuse Shading.

Because of the way we have to send data to the GPU, we can
assign normals to each vertex of our geometry. In the case of a
quad, we can send 4 normals and get them interpolated by the
graphics card and used to calculate lighting.

If we send the same normal for the 4 points we get uniform
lighting across the quad.

We can modify these normals to point in a different direction,
making the lighting equation believe that the surface is not flat.

But using the interpolation provided by the GPU is limited,
because we don't have control over what happens with the
lights between two vertices.

What we can do is create a texture that is sent to the shader
as well, called a Normal Map, that stores the normal of the surface
at a given point. We can sample this texture in the fragment
shader and apply illumination with more control.

Since normals are unit vectors, their component values go
from -1 to 1 . But we store RGBA images from 0 to 1 .

Because of this, we have to rescale the normals to that range.
We achieve that by adding one and dividing by two when we
encode the normals, and multiplying by two and subtracting one
when decoding them.
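As a tiny illustration (a sketch, not code from the book's project), here is the same encode/decode pair written in C#:

using UnityEngine;

public static class NormalEncoding
{
// Encode: map each component from [-1, 1] to [0, 1].
public static Color Encode(Vector3 n)
{
return new Color((n.x + 1f) * 0.5f, (n.y + 1f) * 0.5f, (n.z + 1f) * 0.5f, 1f);
}

// Decode: map each channel from [0, 1] back to [-1, 1].
public static Vector3 Decode(Color c)
{
return new Vector3(c.r * 2f - 1f, c.g * 2f - 1f, c.b * 2f - 1f);
}
}

// A normal pointing straight at the viewer, (0, 0, 1), encodes to the color (0.5, 0.5, 1).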
The topic of normals is more subtle and deep, but I wanted to
give you an intuition about them and why we have to deal with
normals in this book at all.

When you go through the Lambertian Diffuse Shading
appendix you'll get the full picture on why we need them and
how to use them.
Appendix III: Coordinate Systems
During rendering, when we talk about a vertex, we have to
express its components in relation to the context we’re in.

For example, when creating a model, say a quad, we want to
express vertices in relation to the model's pivot point, the
model's (0,0,0) . The top left vertex of this model could be
expressed as (-1, 1, 0) .

This is what we call Model Space. Usually, when we pass a
vertex to the GPU as part of a mesh, it is expressed in model
space.

When we put that model in the context of a game world, we
then have another reference point, the world's (0,0,0) . For
example, if we put our previous quad in the (0,1,0) position, then
the top left vertex we were talking about becomes (-1, 2, 0) .
This is what we call World Space.
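If you want to see this transformation from C#, here is a quick check (illustrative, not part of the book's project) using transform.localToWorldMatrix, which plays the same role as unity_ObjectToWorld in a shader:

using UnityEngine;

public class SpaceCheck : MonoBehaviour
{
void Start()
{
Vector3 modelSpaceVertex = new Vector3(-1f, 1f, 0f); // top left vertex of the quad
Vector3 worldSpaceVertex = transform.localToWorldMatrix.MultiplyPoint3x4(modelSpaceVertex);
Debug.Log("World space position: " + worldSpaceVertex); // (-1, 2, 0) if the object sits at (0, 1, 0) with no rotation or scale
}
}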

But we navigate our games not using the game world itself,
but cameras. In the context of computer graphics, it is not that
easy to render scenes by moving a camera around the world;
instead, we move the whole world so that the camera sits at
(0,0,0) . We achieve this by moving the vertices to another space
called View Space.

If my camera sits at the world's (0, 0, -10) , then the top left
vertex of the quad becomes (-1, 2, 10) in View Space.

To finish this space hopping, we want to display our model on
a flat screen. Because of this, we project the scene onto a plane,
using orthographic or perspective projection according to your
needs. When we project a vertex onto a plane, we transform it
into Projection Space. Another name for Projection Space is Clip
Space. This is what Unity uses.

Transforming a vertex between spaces

In order to transform a vertex between spaces, you'll need to
use an affine transform in the form of a matrix that you multiply
by that vertex.

Every time we use UnityObjectToClipPos in Unity, we're
moving our vertex from Model Space to Clip
Space by multiplying the vertex by what is usually called an MVP
(Model View Projection) matrix. This is a standard practice in
computer graphics.

o.position = UnityObjectToClipPos(v.position);

Unity provides us with these matrices in the form of built-in
variables in our shaders. Check
https://docs.unity3d.com/Manual/SL-UnityShaderVariables.html
to find out about these matrices.
In the ScreenSpace Automatic Rim Lights exercise, we're
using one of these matrices because we need to do a dot
product involving a light's position (in world space) and a vertex
position (passed to the shader in model space). For this
calculation to make sense we need both vectors to be in the same
space; because of that we multiply the unity_ObjectToWorld matrix
by the vertex to transform it into world space.

o.worldPos = mul(unity_ObjectToWorld, v.vertex);

You can search the web to find how these matrices are
constructed or look into the book’s website “Where to go now”
section to find out more about this topic.
Appendix IV: Lambertian Diffuse Shading
There is a lot of information around about this topic; it's just a
matter of doing a Google search. So I'll just explain the intuition
without all the formal math. You can find a detailed explanation
of this topic here: https://www.scratchapixel.com/lessons/3d-
basic-rendering/introduction-to-shading/diffuse-lambertian-
shading.

Consider this situation: we have our surface (a quad in
our case), its normal, and a light beam coming from a given
direction.

We want to calculate the amount of light that is reflected by
the surface, to then be added to the base color of the surface to
lighten it up.

Lambert's Cosine Law in optics says that this intensity is
proportional to the cosine of the angle between the light and
the normal.

If you analyze this numerically, you'll notice that when the light
and the normal are perpendicular, we get cos(90) = 0 . This
means that when the light and the normal are perpendicular
(thus the light and the surface are parallel) the surface does not
reflect any light. Which makes sense, right?

The other extreme is when the light is perpendicular to the
surface, pointing straight at it. In this case, the angle between
the normal and the light is 0 , and cos(0) = 1 . This means that
when the light is pointing at the surface, all beams are fully
reflected.

Any angle between 0 and 90 will give values in between.

Now, mathematically, we achieve this by using a property of
the dot product. We know that the dot product between two
vectors A and B is the product of the norms of both vectors
times the cosine of the angle between A and B . How convenient.

We can normalize the vector that points to our light and our
normal vector so that they have norm 1 . Then, what we end up
with by calculating the dot product of these two is the cosine of
the angle.

Then, by mixing both formulas, we get that

I = dot(normalize(L), normalize(N)) = cos(θ)

Because of this, we can then use the dot product of our
normal and the vector pointing to the light as an attenuation
factor (intensity) for our light color when calculating the
incidence of a given light on our model.

We call the lighting model that uses Lambert's cosine law to
calculate the incidence of a light, given its direction and the
surface normal, Lambertian Shading.

In Cg we express this with the following code

I = saturate(dot(normalize(l), normalize(n)))

We use saturation because we don't want negative or
excessively large intensities, only values between 0 and 1 .
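As a small worked example (a sketch mirroring the Cg line above, not code from the book's project), the same intensity can be computed on the CPU:

using UnityEngine;

public static class LambertExample
{
// Clamp01 plays the role of saturate.
public static float Intensity(Vector3 normal, Vector3 toLight)
{
return Mathf.Clamp01(Vector3.Dot(normal.normalized, toLight.normalized));
}
}

// Intensity(Vector3.forward, Vector3.forward) == 1 (the light shines straight along the normal)
// Intensity(Vector3.forward, Vector3.right) == 0 (the light is perpendicular to the normal)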
Appendix V: Convolution
Convolution is a math operation that is widely used as a way to
filter signals. This has implications in several fields like digital
image processing, sound processing, acoustics, physics, and
others.

Wikipedia defines the convolution between two functions
f and g as the integral of the product of the two functions
after one is reversed and shifted:

(f * g)(t) = ∫ f(τ) g(t − τ) dτ

There is also a discrete version of this definition. Instead of
integrating, we do the discrete equivalent: a sum.

(f * g)[n] = Σ f[m] g[n − m]

You may not immediately see what the heck this means in
terms of what we’re using it for, but bear with me.

In Digital Image Processing, a field that is part of Digital Signal
Processing, we use convolution between images and what are
called convolution kernels (or filters) to achieve several
different effects.

You can think of an image as a matrix that stores the results of
a function of x and y.

For example, an image that has a red pixel at the (0,0)
coordinate can represent a function f that has f(0,0) = red .

You can now see that we can use the pixel values of an image
as part of the convolution equation. Then we use a convolution
kernel, defined as a matrix of values, that also encodes the
results of a function at given points.

Now that we have both functions defined, we can convolve
them and get a third image that is the first image processed by
the convolution kernel.

There are a lot of effects that you can achieve with this: edge
detection, low/high-pass filters, and so on.

We use convolution in the book to blur an image. If you define
what's called a Gaussian Kernel, as we did in the Rim Lights
chapter, the effect of the convolution is to output the weighted
average of each pixel and its surroundings. This effectively
blurs the image.
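To make the discrete version concrete, here is a minimal CPU sketch (illustrative only; the book does this on the GPU, one direction per pass) convolving a grayscale image with a 3x3 kernel:

// Convolves a grayscale image with a 3x3 kernel, ignoring the 1-pixel border.
// For symmetric kernels like a Gaussian, flipping the kernel makes no difference.
static float[,] Convolve(float[,] image, float[,] kernel)
{
int w = image.GetLength(0), h = image.GetLength(1);
float[,] result = new float[w, h];
for (int x = 1; x < w - 1; x++)
{
for (int y = 1; y < h - 1; y++)
{
float sum = 0f;
for (int kx = -1; kx <= 1; kx++)
for (int ky = -1; ky <= 1; ky++)
sum += image[x + kx, y + ky] * kernel[kx + 1, ky + 1];
result[x, y] = sum;
}
}
return result;
}

// A 3x3 Gaussian-like kernel whose weights sum to 1 blurs the image:
// { { 1/16f, 2/16f, 1/16f }, { 2/16f, 4/16f, 2/16f }, { 1/16f, 2/16f, 1/16f } }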
This same effect can be used in audio processing to create
reverberation, for example.
