
OGRE Manual v1.7 (’Cthugha’)

Steve Streeting
Copyright © Torus Knot Software Ltd

Permission is granted to make and distribute verbatim copies of this manual provided the
copyright notice and this permission notice are preserved on all copies.

Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one.

Table of Contents

OGRE Manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1 Object Orientation - more than just a buzzword . . . . . . . . . . . . . . . . 2
1.2 Multi-everything . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

2 The Core Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5


2.1 The Root object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 The RenderSystem object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.3 The SceneManager object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.4 The ResourceGroupManager Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.5 The Mesh Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.6 Entities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.7 Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.8 Overlays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

3 Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1 Material Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1.1 Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.1.2 Passes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.1.3 Texture Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.1.4 Declaring Vertex/Geometry/Fragment Programs . . . . . . . . . . 63
3.1.5 Cg programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.1.6 DirectX9 HLSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.1.7 OpenGL GLSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.1.8 Unified High-level Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.1.9 Using Vertex/Geometry/Fragment Programs in a Pass . . . . 79
3.1.10 Vertex Texture Fetch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.1.11 Script Inheritence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
3.1.12 Texture Aliases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.1.13 Script Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
3.1.14 Script Import Directive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
3.2 Compositor Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
3.2.1 Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
3.2.2 Target Passes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
3.2.3 Compositor Passes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
3.2.4 Applying a Compositor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
3.3 Particle Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
3.3.1 Particle System Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
3.3.2 Particle Emitters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
3.3.3 Particle Emitter Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
3.3.4 Standard Particle Emitters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

3.3.5 Particle Affectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137


3.3.6 Standard Particle Affectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
3.4 Overlay Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
3.4.1 OverlayElement Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
3.4.2 Standard OverlayElements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
3.5 Font Definition Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155

4 Mesh Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158


4.1 Exporters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
4.2 XmlConverter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
4.3 MeshUpgrader . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159

5 Hardware Buffers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160


5.1 The Hardware Buffer Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
5.2 Buffer Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
5.3 Shadow Buffers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
5.4 Locking buffers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
5.5 Practical Buffer Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
5.6 Hardware Vertex Buffers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
5.6.1 The VertexData class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
5.6.2 Vertex Declarations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
5.6.3 Vertex Buffer Bindings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
5.6.4 Updating Vertex Buffers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
5.7 Hardware Index Buffers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
5.7.1 The IndexData class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
5.7.2 Updating Index Buffers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
5.8 Hardware Pixel Buffers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
5.8.1 Textures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
5.8.2 Updating Pixel Buffers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
5.8.3 Texture Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
5.8.4 Pixel Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
5.8.5 Pixel boxes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174

6 External Texture Sources . . . . . . . . . . . . . . . . . . . . 176

7 Shadows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
7.1 Stencil Shadows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
7.2 Texture-based Shadows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
7.3 Modulative Shadows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
7.4 Additive Light Masking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191

8 Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
8.1 Skeletal Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
8.2 Animation State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
8.3 Vertex Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
8.3.1 Morph Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
8.3.2 Pose Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
8.3.3 Combining Skeletal and Vertex Animation . . . . . . . . . . . . . . . 202
8.4 SceneNode Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
8.5 Numeric Value Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203

OGRE Manual
Copyright © The OGRE Team

This work is licenced under the Creative Commons Attribution-ShareAlike 2.5 License.
To view a copy of this licence, visit http://creativecommons.org/licenses/by-sa/2.5/
or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305,
USA.

1 Introduction

This chapter is intended to give you an overview of the main components of OGRE and
why they have been put together that way.

1.1 Object Orientation - more than just a buzzword


The name is a dead giveaway. It says Object-Oriented Graphics Rendering Engine, and
that’s exactly what it is. Ok, but why? Why did I choose to make such a big deal about
this?

Well, nowadays graphics engines are like any other large software system. They start
small, but soon they balloon into monstrously complex beasts which just can’t all be
understood at once. It’s pretty hard to manage systems of this size, and even harder to make
changes to them reliably, and that’s pretty important in a field where new techniques and
approaches seem to appear every other week. Designing systems around huge files full of C
function calls just doesn’t cut it anymore - even if the whole thing is written by one person
(not likely) they will find it hard to locate that elusive bit of code after a few months and
even harder to work out how it all fits together.

Object orientation is a very popular approach to addressing the complexity problem.


It’s a step up from decomposing your code into separate functions: it groups functions and
state data together in classes which are designed to represent real concepts. It allows you
to hide complexity inside easily recognised packages with a conceptually simple interface,
giving them the feel of ’building blocks’ which you can plug together
again later. You can also organise these blocks so that some of them look the same on
the outside, but have very different ways of achieving their objectives on the inside, again
reducing the complexity for the developers because they only have to learn one interface.

I’m not going to teach you OO here, that’s a subject for many other books, but suffice to
say I’d seen enough benefits of OO in business systems that I was surprised most graphics
code seemed to be written in C function style. I was interested to see whether I could apply
my design experience in other types of software to an area which has long held a place in
my heart - 3D graphics engines. Some people I spoke to were of the opinion that using
full C++ wouldn’t be fast enough for a real-time graphics engine, but others (including me)
were of the opinion that, with care, an object-oriented framework can be performant. We
were right.
In summary, here are the benefits an object-oriented approach brings to OGRE:

Abstraction
Common interfaces hide the nuances between different implementations of 3D
APIs and operating systems

Encapsulation
There is a lot of state management and context-specific actions to be done in
a graphics engine - encapsulation allows me to put the code and data nearest
to where it is used which makes the code cleaner and easier to understand, and
more reliable because duplication is avoided
Polymorphism
The behaviour of methods changes depending on the type of object you are
using, even if you only learn one interface, e.g. a class specialised for managing
indoor levels behaves completely differently from the standard scene manager,
but looks identical to other classes in the system and has the same methods
called on it

1.2 Multi-everything
I wanted to do more than create a 3D engine that ran on one 3D API, on one platform,
with one type of scene (indoor levels are most popular). I wanted OGRE to be able to
extend to any kind of scene (but yet still implement scene-specific optimisations under the
surface), any platform and any 3D API.

Therefore all the ’visible’ parts of OGRE are completely independent of platform, 3D
API and scene type. There are no dependencies on Windows types, no assumptions about
the type of scene you are creating, and the principles of the 3D aspects are based on core
maths texts rather than one particular API implementation.

Now of course somewhere OGRE has to get down to the nitty-gritty of the specifics
of the platform, API and scene, but it does this in subclasses specially designed for the
environment in question, but which still expose the same interface as the abstract versions.

For example, there is a ’Win32Window’ class which handles all the details about render-
ing windows on a Win32 platform - however the application designer only has to manipulate
it via the superclass interface ’RenderWindow’, which will be the same across all platforms.
Similarly the ’SceneManager’ class looks after the arrangement of objects in the scene
and their rendering sequence. Applications only have to use this interface, but there is a
’BspSceneManager’ class which optimises the scene management for indoor levels, meaning
you get both performance and an easy to learn interface. All applications have to do is hint
about the kind of scene they will be creating and let OGRE choose the most appropriate
implementation - this is covered in a later tutorial.

OGRE’s object-oriented nature makes all this possible. Currently OGRE runs on Win-
dows, Linux and Mac OSX using plugins to drive the underlying rendering API (currently
Direct3D or OpenGL). Applications use OGRE at the abstract level, thus ensuring that
they automatically operate on all platforms and rendering subsystems that OGRE provides

without any need for platform or API specific code.



2 The Core Objects

Introduction
This tutorial gives you a quick summary of the core objects that you will use in OGRE and
what they are used for.

A Word About Namespaces


OGRE uses a C++ feature called namespaces. This lets you put classes, enums, structures,
anything really within a ’namespace’ scope which is an easy way to prevent name clashes,
i.e. situations where you have 2 things called the same thing. Since OGRE is designed to
be used inside other applications, I wanted to be sure that name clashes would not be a
problem. Some people prefix their classes/types with a short code because some compilers
don’t support namespaces, but I chose to use them because they are the ’right’ way to do it.
Sorry if you have a non-compliant compiler, but hey, the C++ standard has been defined for
years, so compiler writers really have no excuse anymore. If your compiler doesn’t support
namespaces then it’s probably because it’s sh*t - get a better one. ;)

This means every class, type etc should be prefixed with ’Ogre::’, e.g. ’Ogre::Camera’,
’Ogre::Vector3’ etc which means if elsewhere in your application you have used a Vector3
type you won’t get name clashes. To avoid lots of extra typing you can add a ’using
namespace Ogre;’ statement to your code which means you don’t have to type the ’Ogre::’
prefix unless there is ambiguity (in the situation where you have another definition with
the same name).
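In practice that looks like this (a minimal sketch; the variable names are just examples):

```cpp
#include <OgreVector3.h>

// Fully qualified - safe even if your application defines its own Vector3:
Ogre::Vector3 offset(0.0f, 10.0f, 0.0f);

// Or pull the namespace in and drop the prefix:
using namespace Ogre;
Vector3 direction = Vector3::UNIT_Z;
```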

Overview from 10,000 feet


Shown below is a diagram of some of the core objects and where they ’sit’ in the
grand scheme of things. This is not all the classes by a long shot, just a few examples
of the more significant ones to give you an idea of how it slots together.

At the very top of the diagram is the Root object. This is your ’way in’ to the OGRE
system, and it’s where you tend to create the top-level objects that you need to deal with, like
scene managers, rendering systems and render windows, loading plugins, all the fundamental
stuff. If you don’t know where to start, Root is it for almost everything, although often it
will just give you another object which will actually do the detail work, since Root itself is
more of an organiser and facilitator object.
The majority of the rest of OGRE’s classes fall into one of 3 roles:
Scene Management
This is about the contents of your scene, how it’s structured, how it’s viewed
from cameras, etc. Objects in this area are responsible for giving you a natural
declarative interface to the world you’re building; i.e. you don’t tell OGRE "set
these render states and then render 3 polygons", you tell it "I want an object
here, here and here, with these materials on them, rendered from this view",
and let it get on with it.
Resource Management
All rendering needs resources, whether it’s geometry, textures, fonts, whatever.
It’s important to manage the loading, re-use and unloading of these things
carefully, so that’s what classes in this area do.
Rendering
Finally, there’s getting the visuals on the screen - this is about the lower-level
end of the rendering pipeline, the specific rendering system API objects like
buffers, render states and the like, and pushing it all down the pipeline. Classes
in the Scene Management subsystem use this to get their higher-level scene
information onto the screen.
You’ll notice that scattered around the edge are a number of plugins. OGRE is designed
to be extended, and plugins are the usual way to go about it. Many of the classes in OGRE
can be subclassed and extended, whether it’s changing the scene organisation through a
custom SceneManager, adding a new render system implementation (e.g. Direct3D or
OpenGL), or providing a way to load resources from another source (say from a web location
or a database). Again this is just a small smattering of the kinds of things plugins can do,
but as you can see they can plug in to almost any aspect of the system. This way, OGRE
isn’t just a solution for one narrowly defined problem, it can extend to pretty much anything
you need it to do.

2.1 The Root object


The ’Root’ object is the entry point to the OGRE system. This object MUST be the first
one to be created, and the last one to be destroyed. In the example applications I chose
to make an instance of Root a member of my application object which ensured that it was
created as soon as my application object was, and deleted when the application object was
deleted.

The root object lets you configure the system, for example through the showConfigDialog()
method, an extremely handy method which performs all render system options
detection and shows a dialog for the user to customise resolution, colour depth, full screen
options etc. It also sets the options the user selects so that you can initialise the system
directly afterwards.

The root object is also your method for obtaining pointers to other objects in the system,
such as the SceneManager, RenderSystem and various other resource managers. See below
for details.

Finally, if you run OGRE in continuous rendering mode, i.e. you want to always refresh
all the rendering targets as fast as possible (the norm for games and demos, but not for
windowed utilities), the root object has a method called startRendering, which when called
will enter a continuous rendering loop which will only end when all rendering windows are
closed, or any FrameListener objects indicate that they want to stop the cycle (see below
for details of FrameListener objects).
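Putting those pieces together, a minimal application skeleton might look like this (a sketch only; the plugin configuration file name and window title are assumptions, not requirements):

```cpp
#include <Ogre.h>

int main()
{
    // Root must be created first and destroyed last.
    Ogre::Root* root = new Ogre::Root("plugins.cfg");

    // Let the user pick a render system, resolution, colour depth etc.
    if (root->showConfigDialog())
    {
        // Initialise the system, auto-creating a render window.
        root->initialise(true, "My OGRE Application");

        // ... create a SceneManager, camera and scene here ...

        // Continuous rendering loop; returns when all windows are
        // closed or a FrameListener asks to stop.
        root->startRendering();
    }

    delete root;
    return 0;
}
```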

2.2 The RenderSystem object


The RenderSystem object is actually an abstract class which defines the interface to the
underlying 3D API. It is responsible for sending rendering operations to the API and

setting all the various rendering options. This class is abstract because all the implementation
is rendering API specific - there are API-specific subclasses for each rendering
API (e.g. D3DRenderSystem for Direct3D). After the system has been initialised through
Root::initialise, the RenderSystem object for the selected rendering API is available via the
Root::getRenderSystem() method.

However, a typical application should not normally need to manipulate the RenderSystem
object directly - everything you need for rendering objects and customising settings
should be available on the SceneManager, Material and other scene-oriented classes. It’s
only if you want to create multiple rendering windows (completely separate windows in this
case, not multiple viewports like a split-screen effect which is done via the RenderWindow
class) or access other advanced features that you need access to the RenderSystem object.

For this reason I will not discuss the RenderSystem object further in these tutorials. You
can assume the SceneManager handles the calls to the RenderSystem at the appropriate
times.

2.3 The SceneManager object


Apart from the Root object, this is probably the most critical part of the system from the
application’s point of view. Certainly it will be the object which is most used by the appli-
cation. The SceneManager is in charge of the contents of the scene which is to be rendered
by the engine. It is responsible for organising the contents using whatever technique it
deems best, for creating and managing all the cameras, movable objects (entities), lights
and materials (surface properties of objects), and for managing the ’world geometry’ which
is the sprawling static geometry usually used to represent the immovable parts of a scene.

It is to the SceneManager that you go when you want to create a camera for the scene.
It’s also where you go to retrieve or to remove a light from the scene. There is no need for
your application to keep lists of objects, the SceneManager keeps a named set of all of the
scene objects for you to access, should you need them. Look in the main documentation
under the getCamera, getLight, getEntity etc methods.

The SceneManager also sends the scene to the RenderSystem object when it is time to
render the scene. You never have to call the SceneManager::renderScene method directly
though - it is called automatically whenever a rendering target is asked to update.

So most of your interaction with the SceneManager is during scene setup. You’re likely to
call a great number of methods (perhaps driven by some input file containing the scene data)

in order to set up your scene. You can also modify the contents of the scene dynamically
during the rendering cycle if you create your own FrameListener object (see later).

Because different scene types require very different algorithmic approaches to deciding
which objects get sent to the RenderSystem in order to attain good rendering performance,
the SceneManager class is designed to be subclassed for different scene types. The default
SceneManager object will render a scene, but it does little or no scene organisation and
you should not expect the results to be high performance in the case of large scenes. The
intention is that specialisations will be created for each type of scene such that under
the surface the subclass will optimise the scene organisation for best performance given
assumptions which can be made for that scene type. An example is the BspSceneManager
which optimises rendering for large indoor levels based on a Binary Space Partition (BSP)
tree.

The application using OGRE does not have to know which subclasses are available.
The application simply calls Root::createSceneManager(..) passing as a parameter one of a
number of scene types (e.g. ST_GENERIC, ST_INTERIOR etc). OGRE will automatically
use the best SceneManager subclass available for that scene type, or default to the basic
SceneManager if a specialist one is not available. This allows the developers of OGRE to
add new scene specialisations later and thus optimise previously unoptimised scene types
without the user applications having to change any code.
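For example (a sketch; assumes ’root’ is an existing Ogre::Root, and the instance name is your own choice):

```cpp
// Ask Root for the best SceneManager for a generic scene type; OGRE
// substitutes a specialised subclass if a plugin provides one.
Ogre::SceneManager* sceneMgr =
    root->createSceneManager(Ogre::ST_GENERIC, "MainScene");
```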

2.4 The ResourceGroupManager Object


The ResourceGroupManager class is actually a ’hub’ for loading of reusable resources like
textures and meshes. It is the place that you define groups for your resources, so they may
be unloaded and reloaded when you want. Servicing it are a number of ResourceManagers
which manage the individual types of resource, like TextureManager or MeshManager. In
this context, resources are sets of data which must be loaded from somewhere to provide
OGRE with the data it needs.

ResourceManagers ensure that resources are only loaded once and shared throughout
the OGRE engine. They also manage the memory requirements of the resources they look
after. They can also search in a number of locations for the resources they need, including
multiple search paths and compressed archives (ZIP files).

Most of the time you won’t interact with resource managers directly. Resource managers
will be called by other parts of the OGRE system as required, for example when you request
for a texture to be added to a Material, the TextureManager will be called for you. If you
like, you can call the appropriate resource manager directly to preload resources (if for

example you want to prevent disk access later on) but most of the time it’s ok to let OGRE
decide when to do it.

One thing you will want to do is to tell the resource managers where to look for re-
sources. You do this via Root::getSingleton().addResourceLocation, which actually passes
the information on to ResourceGroupManager.

Because there is only ever 1 instance of each resource manager in the engine, if you do
want to get a reference to a resource manager use the following syntax:
TextureManager::getSingleton().someMethod()
MeshManager::getSingleton().someMethod()
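A typical setup sequence might look like this (a sketch; the paths and the use of the ’General’ group are assumptions):

```cpp
// 'root' is an existing Ogre::Root. addResourceLocation forwards
// the information on to the ResourceGroupManager.
root->addResourceLocation("media/models", "FileSystem", "General");
root->addResourceLocation("media/textures.zip", "Zip", "General");

// Parse scripts and create resource handles for the group.
Ogre::ResourceGroupManager::getSingleton()
    .initialiseResourceGroup("General");
```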

2.5 The Mesh Object


A Mesh object represents a discrete model, a set of geometry which is self-contained and
is typically fairly small on a world scale. Mesh objects are assumed to represent movable
objects and are not used for the sprawling level geometry typically used to create backgrounds.

Mesh objects are a type of resource, and are managed by the MeshManager resource
manager. They are typically loaded from OGRE’s custom object format, the ’.mesh’ for-
mat. Mesh files are typically created by exporting from a modelling tool See Section 4.1
[Exporters], page 158 and can be manipulated through various Chapter 4 [Mesh Tools],
page 158

You can also create Mesh objects manually by calling the MeshManager::createManual
method. This way you can define the geometry yourself, but this is outside the scope of
this manual.

Mesh objects are the basis for the individual movable objects in the world, which are
called Section 2.6 [Entities], page 11.

Mesh objects can also be animated (see Section 8.1 [Skeletal Animation], page 197).

2.6 Entities
An entity is an instance of a movable object in the scene. It could be a car, a person, a
dog, a shuriken, whatever. The only assumption is that it does not necessarily have a fixed
position in the world.

Entities are based on discrete meshes, i.e. collections of geometry which are self-contained
and typically fairly small on a world scale, which are represented by the Mesh object.
Multiple entities can be based on the same mesh, since often you want to create multiple
copies of the same type of object in a scene.

You create an entity by calling the SceneManager::createEntity method, giving it a
name and specifying the name of the mesh object which it will be based on (e.g.
’muscleboundhero.mesh’). The SceneManager will ensure that the mesh is loaded by calling the
MeshManager resource manager for you. Only one copy of the Mesh will be loaded.

Entities are not deemed to be a part of the scene until you attach them to a SceneNode
(see the section below). By attaching entities to SceneNodes, you can create complex hier-
archical relationships between the positions and orientations of entities. You then modify
the positions of the nodes to indirectly affect the entity positions.
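For example (a sketch; assumes ’sceneMgr’ is an existing SceneManager, and the mesh and node names are hypothetical):

```cpp
// Create an entity from a mesh; the Mesh is loaded on demand.
Ogre::Entity* hero =
    sceneMgr->createEntity("Hero", "muscleboundhero.mesh");

// The entity only becomes part of the scene once attached to a node.
Ogre::SceneNode* node =
    sceneMgr->getRootSceneNode()->createChildSceneNode("HeroNode");
node->attachObject(hero);

// Moving the node indirectly moves the entity.
node->setPosition(0.0f, 0.0f, -50.0f);
```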

When a Mesh is loaded, it automatically comes with a number of materials defined. It
is possible to have more than one material attached to a mesh - different parts of the mesh
may use different materials. Any entity created from the mesh will automatically use the
default materials. However, you can change this on a per-entity basis if you like so you can
create a number of entities based on the same mesh but with different textures etc.

To understand how this works, you have to know that all Mesh objects are actually
composed of SubMesh objects, each of which represents a part of the mesh using one
Material. If a Mesh uses only one Material, it will only have one SubMesh.

When an Entity is created based on this Mesh, it is composed of (possibly) multiple
SubEntity objects, each matching 1 for 1 with the SubMesh objects from the original Mesh.
You can access the SubEntity objects using the Entity::getSubEntity method. Once you
have a reference to a SubEntity, you can change the material it uses by calling its
setMaterialName method. In this way you can make an Entity deviate from the default materials
and thus create an individual-looking version of it.
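For example (a sketch; assumes ’hero’ is an existing Entity, and the material name is hypothetical):

```cpp
// Override the material on the first SubEntity only; the rest of
// the entity keeps the mesh's default materials.
Ogre::SubEntity* sub = hero->getSubEntity(0);
sub->setMaterialName("Custom/RedArmour");
```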

2.7 Materials
The Material object controls how objects in the scene are rendered. It specifies what basic
surface properties objects have such as reflectance of colours, shininess etc, how many
texture layers are present, what images are on them and how they are blended together,
what special effects are applied such as environment mapping, what culling mode is used,
how the textures are filtered etc.

Materials can either be set up programmatically, by calling SceneManager::createMaterial
and tweaking the settings, or by specifying it in a ’script’ which is
loaded at runtime. See Section 3.1 [Material Scripts], page 16 for more info.

Basically everything about the appearance of an object apart from its shape is controlled
by the Material class.

The SceneManager class manages the master list of materials available to the scene.
The list can be added to by the application by calling SceneManager::createMaterial, or
by loading a Mesh (which will in turn load material properties). Whenever materials are
added to the SceneManager, they start off with a default set of properties; these are defined
by OGRE as the following:

• ambient reflectance = ColourValue::White (full)
• diffuse reflectance = ColourValue::White (full)
• specular reflectance = ColourValue::Black (none)
• emissive = ColourValue::Black (none)
• shininess = 0 (not shiny)
• No texture layers (& hence no textures)
• SourceBlendFactor = SBF_ONE, DestBlendFactor = SBF_ZERO (opaque)
• Depth buffer checking on
• Depth buffer writing on
• Depth buffer comparison function = CMPF_LESS_EQUAL
• Culling mode = CULL_CLOCKWISE
• Ambient lighting in scene = ColourValue(0.5, 0.5, 0.5) (mid-grey)
• Dynamic lighting enabled
• Gouraud shading mode
• Solid polygon mode
• Bilinear texture filtering
You can alter these settings by calling SceneManager::getDefaultMaterialSettings() and
making the required changes to the Material which is returned.
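A sketch of tweaking these defaults, assuming an initialised scene manager (the return type follows the API described above; the example values are arbitrary):

```cpp
// Sketch: adjust the defaults that newly created materials start with
Ogre::Material* defaults = sceneMgr->getDefaultMaterialSettings();
defaults->setAmbient(0.2f, 0.2f, 0.2f);  // darker default ambient reflectance
defaults->setCullingMode(Ogre::CULL_ANTICLOCKWISE);  // e.g. for meshes with reversed winding
```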

Entities automatically have Materials associated with them if they use a Mesh object,
since the Mesh object typically sets up its required materials on loading. You can also
customise the material used by an entity as described in Section 2.6 [Entities], page 11.
Just create a new Material, set it up how you like (you can copy an existing material into
it using a standard assignment statement if you like) and point the SubEntity entries at it
using SubEntity::setMaterialName().

2.8 Overlays
Overlays allow you to render 2D and 3D elements on top of the normal scene contents to
create effects like heads-up displays (HUDs), menu systems, status panels etc. The frame
rate statistics panel which comes as standard with OGRE is an example of an overlay.
Overlays can contain 2D or 3D elements. 2D elements are used for HUDs, and 3D elements
can be used to create cockpits or any other 3D object which you wish to be rendered on
top of the rest of the scene.

You can create overlays either through the OverlayManager::create method, or you
can define them in an .overlay script. In reality the latter is likely to be the most practical
because it is easier to tweak (without the need to recompile the code). Note that you can
define as many overlays as you like: they all start off life hidden, and you display them by
calling their 'show()' method. You can also show multiple overlays at once, and their Z
order is determined by the Overlay::setZOrder() method.
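In code, creating and displaying an overlay might look like this (a sketch; the overlay name and Z order value are invented):

```cpp
// Sketch: overlays are created through the OverlayManager singleton
Ogre::Overlay* overlay = Ogre::OverlayManager::getSingleton().create("MyHUD");
overlay->setZOrder(500);  // higher Z order renders on top of lower ones
overlay->show();          // overlays start off hidden
```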

Creating 2D Elements
The OverlayElement class abstracts the details of 2D elements which are added to overlays.
All items which can be added to overlays are derived from this class. It is possible (and
encouraged) for users of OGRE to define their own custom subclasses of OverlayElement in
order to provide their own user controls. The key common features of all OverlayElements
are things like size, position, basic material name etc. Subclasses extend this behaviour to
include more complex properties and behaviour.

An important built-in subclass of OverlayElement is OverlayContainer. OverlayContainer
is the same as an OverlayElement, except that it can contain other OverlayElements,
grouping them together (allowing them to be moved together, for example) and providing
them with a local coordinate origin for easier alignment.

The third important class is OverlayManager. Whenever an application wishes to
create a 2D element to add to an overlay (or a container), it should call
OverlayManager::createOverlayElement. The type of element you wish to create is identified by a
string, the reason being that it allows plugins to register new types of OverlayElement for
you to create without you having to link specifically to those libraries. For example, to
create a panel (a plain rectangular area which can contain other OverlayElements) you would
call OverlayManager::getSingleton().createOverlayElement("Panel", "myNewPanel");

Adding 2D Elements to the Overlay


Only OverlayContainers can be added directly to an overlay. The reason is that each level
of container establishes the Z order of the elements contained within it, so if you nest
several containers, inner containers have a higher Z order than outer ones to ensure they are
displayed correctly. To add a container (such as a Panel) to the overlay, simply call
Overlay::add2D.

If you wish to add child elements to that container, call OverlayContainer::addChild.
Child elements can be OverlayElements or OverlayContainer instances themselves.
Remember that the position of a child element is relative to the top-left corner of its parent.
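Putting these pieces together (a sketch; the element names are invented and 'overlay' is assumed to be an Overlay* created earlier):

```cpp
Ogre::OverlayManager& mgr = Ogre::OverlayManager::getSingleton();

// A Panel is an OverlayContainer, so it may be added directly to the overlay
Ogre::OverlayContainer* panel = static_cast<Ogre::OverlayContainer*>(
    mgr.createOverlayElement("Panel", "myNewPanel"));

// A TextArea is a plain OverlayElement, so it must live inside a container;
// its position is relative to the panel's top-left corner
Ogre::OverlayElement* label = mgr.createOverlayElement("TextArea", "myLabel");
panel->addChild(label);

overlay->add2D(panel);
```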

A word about 2D coordinates


OGRE allows you to place and size elements based on 2 coordinate systems: relative and
pixel based.
Pixel Mode
This mode is useful when you want to specify an exact size for your overlay
items, and you don’t mind if those items get smaller on the screen if you increase
the screen resolution (in fact you might want this). In this mode the only way
to put something in the middle or at the right or bottom of the screen reliably
in any resolution is to use the aligning options, whilst in relative mode you can
do it just by using the right relative coordinates. This mode is very simple: the
top-left of the screen is (0,0) and the bottom-right of the screen depends on
the resolution. As mentioned above, you can use the aligning options to make
the horizontal and vertical coordinate origins the right, bottom or center of the
screen if you want to place pixel items in these locations without knowing the
resolution.
Relative Mode
This mode is useful when you want items in the overlay to be the same size on
the screen no matter what the resolution. In relative mode, the top-left of the
screen is (0,0) and the bottom-right is (1,1). So if you place an element at (0.5,
0.5), its top-left corner is placed exactly in the center of the screen, no matter
what resolution the application is running in. The same principle applies to
sizes; if you set the width of an element to 0.5, it covers half the width of the
screen. Note that because the aspect ratio of the screen is typically 1.3333 : 1
(width : height), an element with dimensions (0.25, 0.25) will not be square,
but it will take up exactly 1/16th of the screen in area terms. If you want
square-looking areas you will have to compensate using the typical aspect ratio,
e.g. use (0.1875, 0.25) instead.
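The compensation can be checked with simple arithmetic: at a 4:3 resolution such as 1024x768 (values chosen purely for illustration), a relative size of (0.1875, 0.25) maps to the same number of pixels on each axis:

```cpp
#include <cassert>

// Convert a relative overlay dimension (0..1) to pixels for one screen axis
int relativeToPixels(double rel, int screenDim) {
    return static_cast<int>(rel * screenDim);
}
// At 1024x768: 0.1875 * 1024 == 192 and 0.25 * 768 == 192,
// so the element comes out square on screen.
```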

Transforming Overlays
Another nice feature of overlays is being able to rotate, scroll and scale them as a whole.
You can use this for zooming in / out menu systems, dropping them in from off screen and
other nice effects. See the Overlay::scroll, Overlay::rotate and Overlay::scale methods for
more information.
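For instance (a sketch; 'overlay' is assumed to be a valid Overlay* obtained earlier, and the values are arbitrary):

```cpp
overlay->rotate(Ogre::Degree(30));  // rotate the whole overlay
overlay->scroll(-0.25f, 0.1f);      // offset it in relative coordinates
overlay->setScale(0.5f, 0.5f);      // zoom the whole overlay to half size
```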

Scripting overlays
Overlays can also be defined in scripts. See Section 3.4 [Overlay Scripts], page 144 for
details.

GUI systems
Overlays are only really designed for non-interactive screen elements, although you can
use them as a crude GUI. For a far more complete GUI solution, we recommend CEGui
(http://www.cegui.org.uk), as demonstrated in the sample Demo Gui.

3 Scripts

OGRE drives many of its features through scripts in order to make it easier to set up.
The scripts are simply plain text files which can be edited in any standard text editor, and
modifications take effect on your OGRE-based applications immediately, without any
need to recompile. This makes prototyping a lot faster. Here are the items that OGRE lets
you script:
• Section 3.1 [Material Scripts], page 16
• Section 3.2 [Compositor Scripts], page 106
• Section 3.3 [Particle Scripts], page 121
• Section 3.4 [Overlay Scripts], page 144
• Section 3.5 [Font Definition Scripts], page 155

3.1 Material Scripts


Material scripts offer you the ability to define complex materials in a script which can be
reused easily. Whilst you could set up all materials for a scene in code using the methods
of the Material and TextureLayer classes, in practice it’s a bit unwieldy. Instead you can
store material definitions in text files which can then be loaded whenever required.

Loading scripts
Material scripts are loaded when resource groups are initialised: OGRE looks in all re-
source locations associated with the group (see Root::addResourceLocation) for files with
the '.material' extension and parses them. If you want to parse files manually, use
MaterialSerializer::parseScript.

It’s important to realise that materials are not loaded completely by this parsing process:
only the definition is loaded, no textures or other resources are loaded. This is because it is
common to have a large library of materials, but only use a relatively small subset of them
in any one scene. To load every material completely in every script would therefore cause
unnecessary memory overhead. You can access a ’deferred load’ Material in the normal
way (MaterialManager::getSingleton().getByName()), but you must call the ’load’ method
before trying to use it. Ogre does this for you when using the normal material assignment
methods of entities etc.
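In code, the deferred load looks like this (a sketch; the material name is invented):

```cpp
// Sketch: the script parse only created the definition; load it explicitly
Ogre::MaterialPtr mat =
    Ogre::MaterialManager::getSingleton().getByName("Examples/MyMaterial");
if (!mat.isNull())
    mat->load();  // now textures and other resources are actually loaded
```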

Another important factor is that material names must be unique throughout ALL scripts
loaded by the system, since materials are always identified by name.

Format
Several materials may be defined in a single script. The script format is pseudo-C++, with
sections delimited by curly braces ('{', '}'), and comments indicated by starting a line with
'//' (note: no nested-form comments are allowed). The general format is shown in the
example below (note that to start with, we only consider fixed-function materials which
don't use vertex, geometry or fragment programs; these are covered later):

// This is a comment
material walls/funkywall1
{
    // first, preferred technique
    technique
    {
        // first pass
        pass
        {
            ambient 0.5 0.5 0.5
            diffuse 1.0 1.0 1.0

            // Texture unit 0
            texture_unit
            {
                texture wibbly.jpg
                scroll_anim 0.1 0.0
                wave_xform scale sine 0.0 0.7 0.0 1.0
            }
            // Texture unit 1 (this is a multitexture pass)
            texture_unit
            {
                texture wobbly.png
                rotate_anim 0.25
                colour_op add
            }
        }
    }

    // Second technique, can be used as a fallback or LOD level
    technique
    {
        // .. and so on
    }
}
Every material in the script must be given a name, which is the line 'material <blah>'
before the first opening '{'. This name must be globally unique. It can include path characters
(as in the example) to logically divide up your materials, and also to avoid duplicate names,
but the engine does not treat the name as hierarchical, just as a string. If you include
spaces in the name, it must be enclosed in double quotes.

NOTE: ’:’ is the delimiter for specifying material copy in the script so it can’t be used
as part of the material name.

A material can inherit from a previously defined material by using a colon ':' after the
material name, followed by the name of the reference material to inherit from. You can in
fact even inherit just parts of a material from others; all this is covered in Section 3.1.11
[Script Inheritence], page 96. You can also use variables in your script which can be
replaced in inheriting versions; see Section 3.1.13 [Script Variables], page 104.
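As a minimal sketch of the inheritance syntax (both material names are invented):

```
// Hypothetical parent material
material Base/Shiny
{
    technique
    {
        pass
        {
            specular 1.0 1.0 1.0 32
        }
    }
}

// Inherits everything from Base/Shiny; override or add only what differs
material Base/ShinyRed : Base/Shiny
{
}
```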

A material can be made up of many techniques (See Section 3.1.1 [Techniques], page 21);
a technique is one way of achieving the effect you are looking for. You can supply more than
one technique in order to provide fallback approaches where a card does not have the ability
to render the preferred technique, or where you wish to define lower level-of-detail versions
of the material in order to conserve rendering power when objects are more distant.

Each technique can be made up of many passes (See Section 3.1.2 [Passes], page 24); that
is, a complete render of the object can be performed multiple times with different settings
in order to produce composite effects. Ogre may also split the passes you have defined
into many passes at runtime, if you define a pass which uses too many texture units for
the card you are currently running on (note that it can only do this if you are not using a
fragment program). Each pass has a number of top-level attributes such as 'ambient' to set
the amount and colour of the ambient light reflected by the material. Some of these options
do not apply if you are using vertex programs, See Section 3.1.2 [Passes], page 24 for more
details.

Within each pass, there can be zero or many texture units in use (See Section 3.1.3
[Texture Units], page 45). These define the texture to be used, and optionally some blending
operations (which use multitexturing) and texture effects.

You can also reference vertex and fragment programs (or vertex and pixel shaders, if
you want to use that terminology) in a pass with a given set of parameters. Programs
themselves are declared in separate .program scripts (See Section 3.1.4 [Declaring
Vertex/Geometry/Fragment Programs], page 63) and are used as described in Section 3.1.9
[Using Vertex/Geometry/Fragment Programs in a Pass], page 79.

Top-level material attributes


The outermost section of a material definition does not have a lot of attributes of its own
(most of the configurable parameters are within the child sections). However, it does have
some, and here they are:

lod_distances (deprecated)

This option is deprecated in favour of [lod_values], page 19.

lod_strategy
Sets the name of the LOD strategy to use. Defaults to ’Distance’ which means LOD changes
based on distance from the camera. Also supported is ’PixelCount’ which changes LOD
based on an estimate of the screen-space pixels affected.

Format: lod_strategy <name>

Default: lod_strategy Distance

lod_values
This attribute defines the values used to control the LOD transition for this material. By
setting this attribute, you indicate that you want this material to alter the Technique that
it uses based on some metric, such as the distance from the camera, or the approximate
screen space coverage. The exact meaning of these values is determined by the option you
select for [lod_strategy], page 19 - it is a list of distances for the 'Distance' strategy, and
a list of pixel counts for the 'PixelCount' strategy, for example. You must give it a list of
values, in order from highest LOD value to lowest LOD value, each one indicating the point
at which the material will switch to the next LOD. Implicitly, all materials activate LOD
index 0 for values less than the first entry, so you do not have to specify '0' at the start of
the list. You must ensure that there is at least one Technique with a [lod_index], page 22
value for each value in the list (so if you specify 3 values, you must have techniques for LOD
indexes 0, 1, 2 and 3). Note you must always have at least one Technique at lod_index 0.

Format: lod_values <value0> <value1> <value2> ...

Default: none

Example:
lod_strategy Distance
lod_values 300.0 600.5 1200

The above example would cause the material to use the best Technique at lod index 0
up to a distance of 300 world units, the best Technique at lod index 1 from 300 up to 600,
lod index 2 from 600 to 1200, and lod index 3 from 1200 upwards.

receive_shadows
This attribute controls whether objects using this material can have shadows cast upon
them.

Format: receive_shadows <on|off>

Default: on

Whether or not an object receives a shadow is the combination of a number of factors;
See Chapter 7 [Shadows], page 180 for full details. However, this allows you to make a
material opt out of receiving shadows if required. Note that transparent materials never
receive shadows, so this option only has an effect on solid materials.

transparency_casts_shadows

This attribute controls whether transparent materials can cast certain kinds of shadow.

Format: transparency_casts_shadows <on|off>

Default: off

Whether or not an object casts a shadow is the combination of a number of factors; See
Chapter 7 [Shadows], page 180 for full details. However, this allows you to make a transparent
material cast shadows when it would otherwise not. For example, when using texture
shadows, transparent materials are normally not rendered into the shadow texture because
they should not block light. This flag overrides that.

set_texture_alias

This attribute associates a texture alias with a texture name.

Format: set_texture_alias <alias name> <texture name>

This attribute can be used to set the textures used in texture unit states that were
inherited from another material (See Section 3.1.12 [Texture Aliases], page 100).

3.1.1 Techniques
A "technique" section in your material script encapsulates a single method of rendering an
object. The simplest of material definitions only contains a single technique; however, since
PC hardware varies quite greatly in its capabilities, you can only do this if you are sure
that every card for which you intend to target your application will support the capabilities
which your technique requires. In addition, it can be useful to define simpler ways to render
a material if you wish to use material LOD, such that more distant objects use a simpler,
less performance-hungry technique.

When a material is used for the first time, it is ’compiled’. That involves scanning the
techniques which have been defined, and marking which of them are supportable using the
current rendering API and graphics card. If no techniques are supportable, your material
will render as blank white. The compilation examines a number of things, such as:
• The number of texture unit entries in each pass
Note that if the number of texture unit entries exceeds the number of texture units in
the current graphics card, the technique may still be supportable so long as a fragment
program is not being used. In this case, Ogre will split the pass which has too many
entries into multiple passes for the less capable card, and the multitexture blend will
be turned into a multipass blend (See [colour_op_multipass_fallback], page 58).
• Whether vertex, geometry or fragment programs are used, and if so which syntax they
use (e.g. vs_1_1, ps_2_x, arbfp1 etc.)
• Other effects like cube mapping and dot3 blending
• Whether the vendor or device name of the current graphics card matches some user-specified
rules

In a material script, techniques must be listed in order of preference, i.e. the earlier tech-
niques are preferred over the later techniques. This normally means you will list your most
advanced, most demanding techniques first in the script, and list fallbacks afterwards.

To help clearly identify what each technique is used for, the technique can be named,
but this is optional. Techniques not named within the script will take on a name that is the
technique index number; for example, the first technique in a material is index 0, so its name
would be "0" if it was not given a name in the script. The technique name must be unique
within the material, or else the final technique will be the resulting merge of all techniques
with the same name in the material, and a warning message is posted in the Ogre.log if this
occurs. Named techniques can help when inheriting a material and modifying an existing
technique (See Section 3.1.11 [Script Inheritence], page 96).

Format: technique name

Techniques have only a small number of attributes of their own:



• [scheme], page 22
• [lod_index], page 22 (and also see [lod_distances], page 19 in the parent material)
• [shadow_caster_material], page 23
• [shadow_receiver_material], page 23
• [gpu_vendor_rule], page 23
• [gpu_device_rule], page 23

scheme
Sets the ’scheme’ this Technique belongs to. Material schemes are used to control top-
level switching from one set of techniques to another. For example, you might use this
to define ’high’, ’medium’ and ’low’ complexity levels on materials to allow a user to pick
a performance / quality ratio. Another possibility is that you have a fully HDR-enabled
pipeline for top machines, rendering all objects using unclamped shaders, and a simpler
pipeline for others; this can be implemented using schemes. The active scheme is typically
controlled at a viewport level, and the active one defaults to ’Default’.

Format: scheme <name>


Example: scheme hdr
Default: scheme Default

lod_index
Sets the level-of-detail (LOD) index this Technique belongs to.

Format: lod_index <number>


NB Valid values are 0 (highest level of detail) to 65535, although you are unlikely to need
anywhere near that many. You should not leave gaps in the LOD indexes between Techniques.

Example: lod_index 1

All techniques must belong to a LOD index; by default they all belong to index 0, i.e.
the highest LOD. Increasing indexes denote lower levels of detail. You can (and often will)
assign more than one technique to the same LOD index; this means that OGRE
will pick the best technique of the ones listed at the same LOD index. For readability, it is
advised that you list your techniques in order of LOD, then in order of preference, although
the latter is the only prerequisite (OGRE determines which one is 'best' by which one is
listed first). You must always have at least one Technique at lod_index 0.

The distance at which a LOD level is applied is determined by the lod_distances attribute
of the containing material, See [lod_distances], page 19 for details.

Default: lod_index 0

Techniques also contain one or more passes (and there must be at least one), See
Section 3.1.2 [Passes], page 24.

shadow_caster_material

When using texture-based shadows (See Section 7.2 [Texture-based Shadows], page 185)
you can specify an alternate material to use when rendering the object using this material
into the shadow texture. This is like a more advanced version of using
shadow_caster_vertex_program; however, note that for the moment you are expected to render
the shadow in one pass, i.e. only the first pass is respected.

shadow_receiver_material

When using texture-based shadows (See Section 7.2 [Texture-based Shadows], page 185)
you can specify an alternate material to use when performing the receiver shadow pass.
Note that this explicit 'receiver' pass is only done when you're not using [Integrated
Texture Shadows], page 189 - i.e. the shadow rendering is done separately (either as a
modulative pass, or a masked light pass). This is like a more advanced version of using
shadow_receiver_vertex_program and shadow_receiver_fragment_program; however, note
that for the moment you are expected to render the shadow in one pass, i.e. only the first
pass is respected.

gpu_vendor_rule and gpu_device_rule

Although Ogre does a good job of detecting the capabilities of graphics cards and setting
the supportability of techniques from that, occasionally card-specific behaviour exists which
is not necessarily detectable, and you may want to ensure that your materials go down a
particular path to either use or avoid that behaviour. This is what these rules are for:
you can specify matching rules so that a technique will be considered supportable only on
cards from a particular vendor, or which match a device name pattern, or will be considered
supported only if they don't fulfil such matches.

The format of the rules is as follows:

gpu_vendor_rule <include|exclude> <vendor name>

gpu_device_rule <include|exclude> <device pattern> [case sensitive]

An 'include' rule means that the technique will only be supported if one of the include rules
is matched (if no include rules are provided, anything will pass). An 'exclude' rule means
that the technique is considered unsupported if any of the exclude rules are matched. You
can provide as many rules as you like, although <vendor name> and <device pattern> must
obviously be unique. The valid list of <vendor name> values is currently 'nvidia', 'ati',
'intel', 's3', 'matrox' and '3dlabs'. <device pattern> can be any string, and you can use
wildcards ('*') if you need to match variants. Here's an example:

gpu_vendor_rule include nvidia
gpu_vendor_rule include intel
gpu_device_rule exclude *950*

These rules, if all included in one technique, will mean that the technique will only be
considered supported on graphics cards made by NVIDIA and Intel, and so long as the
device name doesn’t have ’950’ in it.

Note that these rules can only mark a technique ’unsupported’ when it would otherwise
be considered ’supported’ judging by the hardware capabilities. Even if a technique passes
these rules, it is still subject to the usual hardware support tests.

3.1.2 Passes
A pass is a single render of the geometry in question; a single call to the rendering API
with a certain set of rendering properties. A technique can have between one and 16 passes,
although clearly the more passes you use, the more expensive the technique will be to render.

To help clearly identify what each pass is used for, the pass can be named, but this is optional.
Passes not named within the script will take on a name that is the pass index number; for
example, the first pass in a technique is index 0, so its name would be "0" if it was not given
a name in the script. The pass name must be unique within the technique, or else the final
pass will be the resulting merge of all passes with the same name in the technique, and a
warning message is posted in the Ogre.log if this occurs. Named passes can help when
inheriting a material and modifying an existing pass (See Section 3.1.11 [Script Inheritence], page 96).

Passes have a set of global attributes (described below), zero or more nested texture unit
entries (See Section 3.1.3 [Texture Units], page 45), and optionally a reference to a vertex and
/ or a fragment program (See Section 3.1.9 [Using Vertex/Geometry/Fragment Programs
in a Pass], page 79).

Here are the attributes you can use in a ’pass’ section of a .material script:
• [ambient], page 25
• [diffuse], page 26
• [specular], page 26
• [emissive], page 27
• [scene_blend], page 28
• [separate_scene_blend], page 29
• [scene_blend_op], page 30
• [separate_scene_blend_op], page 30
• [depth_check], page 30
• [depth_write], page 31
• [depth_func], page 31
• [depth_bias], page 32
• [iteration_depth_bias], page 32
• [alpha_rejection], page 32
• [alpha_to_coverage], page 33
• [light_scissor], page 33
• [light_clip_planes], page 34
• [illumination_stage], page 35
• [transparent_sorting], page 35
• [normalise_normals], page 35
• [cull_hardware], page 36
• [cull_software], page 36
• [lighting], page 37
• [shading], page 37
• [polygon_mode], page 38
• [polygon_mode_overrideable], page 38
• [fog_override], page 39
• [colour_write], page 39
• [max_lights], page 40
• [start_light], page 40
• [iteration], page 41
• [point_size], page 44
• [point_sprites], page 44
• [point_size_attenuation], page 45
• [point_size_min], page 45
• [point_size_max], page 45

Attribute Descriptions
ambient
Sets the ambient colour reflectance properties of this pass. This attribute has no effect if an
asm, CG, or HLSL shader program is used. With GLSL, the shader can read the OpenGL
material state.

Format: ambient (<red> <green> <blue> [<alpha>] | vertexcolour)


NB valid colour values are between 0.0 and 1.0.

Example: ambient 0.0 0.8 0.0

The base colour of a pass is determined by how much red, green and blue light it reflects
at each vertex. This property determines how much ambient light (directionless global light)
is reflected. It is also possible to make the ambient reflectance track the vertex colour as
defined in the mesh by using the keyword vertexcolour instead of the colour values. The
default is full white, meaning objects are completely globally illuminated. Reduce this if
you want to see diffuse or specular light effects, or change the blend of colours to make the
object have a base colour other than white. This setting has no effect if dynamic lighting is
disabled using the 'lighting off' attribute, or if any texture layer has a 'colour_op replace'
attribute.

Default: ambient 1.0 1.0 1.0 1.0

diffuse
Sets the diffuse colour reflectance properties of this pass. This attribute has no effect if an
asm, CG, or HLSL shader program is used. With GLSL, the shader can read the OpenGL
material state.

Format: diffuse (<red> <green> <blue> [<alpha>] | vertexcolour)


NB valid colour values are between 0.0 and 1.0.

Example: diffuse 1.0 0.5 0.5

The base colour of a pass is determined by how much red, green and blue light it reflects
at each vertex. This property determines how much diffuse light (light from instances of
the Light class in the scene) is reflected. It is also possible to make the diffuse reflectance
track the vertex colour as defined in the mesh by using the keyword vertexcolour instead
of the colour values. The default is full white, meaning objects reflect the maximum white
light they can from Light objects. This setting has no effect if dynamic lighting is disabled
using the 'lighting off' attribute, or if any texture layer has a 'colour_op replace' attribute.

Default: diffuse 1.0 1.0 1.0 1.0



specular
Sets the specular colour reflectance properties of this pass. This attribute has no effect if an
asm, CG, or HLSL shader program is used. With GLSL, the shader can read the OpenGL
material state.

Format: specular (<red> <green> <blue> [<alpha>] | vertexcolour) <shininess>


NB valid colour values are between 0.0 and 1.0. Shininess can be any value greater than 0.

Example: specular 1.0 1.0 1.0 12.5

The base colour of a pass is determined by how much red, green and blue light it reflects
at each vertex. This property determines how much specular light (highlights from instances
of the Light class in the scene) is reflected. It is also possible to make the specular reflectance
track the vertex colour as defined in the mesh by using the keyword vertexcolour instead
of the colour values. The default is to reflect no specular light. The colour of the specular
highlights is determined by the colour parameters, and the size of the highlights by the
separate shininess parameter. The higher the value of the shininess parameter, the sharper
the highlight, i.e. the smaller the radius. Beware of using shininess values in the range of 0 to
1, since this causes the specular colour to be applied to the whole surface that has the
material applied to it. When the viewing angle to the surface changes, ugly flickering will
also occur when shininess is in the range of 0 to 1. Shininess values between 1 and 128 work
best in both DirectX and OpenGL renderers. This setting has no effect if dynamic lighting
is disabled using the 'lighting off' attribute, or if any texture layer has a 'colour_op replace'
attribute.

Default: specular 0.0 0.0 0.0 0.0 0.0

emissive
Sets the amount of self-illumination an object has. This attribute has no effect if an asm, CG,
or HLSL shader program is used. With GLSL, the shader can read the OpenGL material
state.

Format: emissive (<red> <green> <blue> [<alpha>] | vertexcolour)


NB valid colour values are between 0.0 and 1.0.

Example: emissive 1.0 0.0 0.0

If an object is self-illuminating, it does not need external sources to light it, ambient
or otherwise. It's like the object has its own personal ambient light. Despite what the name
suggests, this object doesn't act as a light source for other objects in the scene (if you want
it to, you have to create a light which is centered on the object). It is also possible to make
the emissive colour track the vertex colour as defined in the mesh by using the keyword
vertexcolour instead of the colour values. This setting has no effect if dynamic lighting is
disabled using the 'lighting off' attribute, or if any texture layer has a 'colour_op replace'
attribute.

Default: emissive 0.0 0.0 0.0 0.0

scene_blend
Sets the kind of blending this pass has with the existing contents of the scene. Whereas the
texture blending operations seen in the texture unit entries are concerned with blending
between texture layers, this blending is about combining the output of this pass as a whole
with the existing contents of the rendering target. This blending therefore allows object
transparency and other special effects. There are two formats, one using predefined blend
types, the other allowing a roll-your-own approach using source and destination factors.

Format1: scene_blend <add|modulate|alpha_blend|colour_blend>

Example: scene_blend add

This is the simpler form, where the most commonly used blending modes are enumerated
using a single parameter. Valid <blend type> parameters are:

add
The colour of the rendering output is added to the scene. Good for explosions,
flares, lights, ghosts etc. Equivalent to 'scene_blend one one'.

modulate
The colour of the rendering output is multiplied with the scene contents. Generally
colours and darkens the scene; good for smoked glass, semi-transparent
objects etc. Equivalent to 'scene_blend dest_colour zero'.

colour_blend
Colour the scene based on the brightness of the input colours, but don't darken.
Equivalent to 'scene_blend src_colour one_minus_src_colour'.

alpha_blend
The alpha value of the rendering output is used as a mask. Equivalent to
'scene_blend src_alpha one_minus_src_alpha'.

Format2: scene blend <src factor> <dest factor>

Example: scene blend one one minus dest alpha

This version of the method allows complete control over the blending operation, by
specifying the source and destination blending factors. The resulting colour which is written
to the rendering target is (texture * sourceFactor) + (scene pixel * destFactor). Valid values
for both parameters are:
one Constant value of 1.0
zero Constant value of 0.0
dest colour
The existing pixel colour
src colour The texture pixel (texel) colour
one minus dest colour
1 - (dest colour)
one minus src colour
1 - (src colour)
dest alpha The existing pixel alpha value
src alpha The texel alpha value
one minus dest alpha
1 - (dest alpha)
one minus src alpha
1 - (src alpha)

Default: scene blend one zero (opaque)

Also see [separate scene blend], page 29.
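As a sketch, an additive pass for something like a flare might look as follows (the material and texture names here are illustrative, not from the OGRE samples):

```
material Example/Flare
{
	technique
	{
		pass
		{
			scene_blend add
			depth_write off
			texture_unit
			{
				texture flare.png
			}
		}
	}
}
```

Turning depth writing off is typical for additive effects like this, so the flare blends over geometry rather than occluding it.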

separate scene blend


This option operates in exactly the same way as [scene blend], page 28, except that it allows
you to specify the operations to perform between the rendered pixel and the frame buffer
separately for colour and alpha components. By nature this option is only useful when
rendering to targets which have an alpha channel which you’ll use for later processing, such
as a render texture.

Format1: separate scene blend <simple colour blend> <simple alpha blend>

Example: separate scene blend add modulate

This example would add colour components but multiply alpha components. The blend
modes available are as in [scene blend], page 28. The more advanced form is also available:

Format2: separate scene blend <colour src factor> <colour dest factor> <alpha src factor> <alpha dest factor>

Example: separate scene blend one one minus dest alpha one one

Again the options available in the second format are the same as those in the second
format of [scene blend], page 28.

scene blend op
This directive changes the operation which is applied between the two components of the
scene blending equation, which by default is ’add’ (sourceFactor * source + destFactor *
dest). You may change this to ’add’, ’subtract’, ’reverse subtract’, ’min’ or ’max’.

Format: scene blend op <add|subtract|reverse subtract|min|max>

Default: scene blend op add

separate scene blend op


This directive is as scene blend op, except that you can set the operation for colour and
alpha separately.
Format: separate scene blend op <colourOp> <alphaOp>

Default: separate scene blend op add add

depth check
Sets whether or not this pass renders with depth-buffer checking.

Format: depth check <on|off>



If depth-buffer checking is on, whenever a pixel is about to be written to the frame
buffer the depth buffer is checked to see if the pixel is in front of all other pixels written at
that point. If not, the pixel is not written. If depth checking is off, pixels are written no
matter what has been rendered before. Also see depth func for more advanced depth check
configuration.

Default: depth check on

depth write
Sets whether or not this pass renders with depth-buffer writing.

Format: depth write <on|off>

If depth-buffer writing is on, whenever a pixel is written to the frame buffer the depth
buffer is updated with the depth value of that new pixel, thus affecting future rendering
operations if future pixels are behind this one. If depth writing is off, pixels are written
without updating the depth buffer. Depth writing should normally be on but can be turned
off when rendering static backgrounds or when rendering a collection of transparent objects
at the end of a scene so that they overlap each other correctly.

Default: depth write on
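To illustrate the interplay with scene blending, here is a hedged sketch of a transparent pass that reads the depth buffer but does not write to it (names invented for illustration):

```
material Example/Glass
{
	technique
	{
		pass
		{
			scene_blend alpha_blend
			depth_check on
			depth_write off
			texture_unit
			{
				texture glass.png
			}
		}
	}
}
```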

depth func
Sets the function used to compare depth values when depth checking is on.

Format: depth func <func>

If depth checking is enabled (see depth check) a comparison occurs between the depth
value of the pixel to be written and the current contents of the buffer. This comparison is
normally less equal, i.e. the pixel is written if it is closer (or at the same distance) than the
current contents. The possible functions are:
always fail
Never writes a pixel to the render target
always pass
Always writes a pixel to the render target
less Write if (new Z < existing Z)

less equal Write if (new Z <= existing Z)

equal Write if (new Z == existing Z)

not equal Write if (new Z != existing Z)

greater equal
Write if (new Z >= existing Z)

greater Write if (new Z > existing Z)

Default: depth func less equal

depth bias
Sets the bias applied to the depth value of this pass. Can be used to make coplanar polygons
appear on top of others e.g. for decals.

Format: depth bias <constant bias> [<slopescale bias>]

The final depth bias value is constant bias * minObservableDepth + maxSlope *
slopescale bias. Slope scale biasing is relative to the angle of the polygon to the camera,
which makes for a more appropriate bias value, but this is ignored on some older hardware.
Constant biasing is expressed as a factor of the minimum depth value, so a value of 1 will
nudge the depth by one ’notch’ if you will. Also see [iteration depth bias], page 32.
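A hypothetical decal material might use depth bias to lift itself above the surface it is applied to (the texture name is made up for this sketch):

```
material Example/BulletHole
{
	technique
	{
		pass
		{
			// constant bias 1, slope-scale bias 1
			depth_bias 1 1
			scene_blend alpha_blend
			depth_write off
			texture_unit
			{
				texture bullet_hole.png
			}
		}
	}
}
```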

iteration depth bias


Sets an additional bias derived from the number of times a given pass has been iterated.
Operates just like [depth bias], page 32 except that it applies an additional bias factor
to the base depth bias value, multiplying the provided value by the number of times this
pass has been iterated before, through one of the [iteration], page 41 variants. So the
first time the pass will get the depth bias value, the second time it will get depth bias +
iteration depth bias, the third time it will get depth bias + iteration depth bias * 2, and
so on. The default is zero.

Format: iteration depth bias <bias per iteration>



alpha rejection
Sets the way the pass will use alpha to totally reject pixels from the pipeline.

Format: alpha rejection <function> <value>

Example: alpha rejection greater equal 128

The function parameter can be any of the options listed in the material depth function
attribute. The value parameter can theoretically be any value between 0 and 255, but is
best limited to 0 or 128 for hardware compatibility.

Default: alpha rejection always pass
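For instance, a hedged sketch of a foliage material that rejects mostly-transparent texels instead of blending them (texture name assumed):

```
material Example/Leaves
{
	technique
	{
		pass
		{
			alpha_rejection greater_equal 128
			texture_unit
			{
				texture leaves.png
			}
		}
	}
}
```

Because rejected pixels never reach the frame buffer, this avoids the depth-sorting problems that come with alpha blending.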

alpha to coverage
Sets whether this pass will use ’alpha to coverage’, a way to multisample alpha texture
edges so they blend more seamlessly with the background. This facility is typically only
available on cards from around 2006 onwards, but it is safe to enable it anyway - Ogre will
just ignore it if the hardware does not support it. The common use for alpha to coverage
is foliage rendering and chain-link fence style textures.

Format: alpha to coverage <on|off>

Default: alpha to coverage off

light scissor
Sets whether when rendering this pass, rendering will be limited to a screen-space scissor
rectangle representing the coverage of the light(s) being used in this pass, derived from their
attenuation ranges.

Format: light scissor <on|off>

Default: light scissor off



This option is usually only useful if this pass is an additive lighting pass, and is at least
the second one in the technique; i.e. areas which are not affected by the current light(s) will
never need to be rendered. If there is more than one light being passed to the pass, then
the scissor is defined to be the rectangle which covers all lights in screen-space. Directional
lights are ignored since they are infinite.

This option does not need to be specified if you are using a standard additive
shadow mode, i.e. SHADOWTYPE STENCIL ADDITIVE or
SHADOWTYPE TEXTURE ADDITIVE, since it is the default behaviour to use a scissor for each
additive shadow pass. However, if you’re not using shadows, or you’re using [Integrated
Texture Shadows], page 189 where passes are specified in a custom manner, then this
could be of use to you.

light clip planes


Sets whether, when rendering this pass, triangle setup will be limited to the clipping volume
covered by the light. Directional lights are ignored, point lights clip to a cube the size of
the attenuation range of the light, and spotlights clip to a pyramid bounding the spotlight
angle and attenuation range.

Format: light clip planes <on|off>

Default: light clip planes off

This option will only function if there is a single non-directional light being used in this
pass. If there is more than one light, or only directional lights, then no clipping will occur.
If there are no lights at all then the objects won’t be rendered at all.

When using a standard additive shadow mode, i.e. SHADOWTYPE STENCIL ADDITIVE
or SHADOWTYPE TEXTURE ADDITIVE, you have the option of enabling clipping for
all light passes by calling SceneManager::setShadowUseLightClipPlanes regardless of this
pass setting, since rendering is done lightwise anyway. This is off by default since using
clip planes is not always faster - it depends on how much of the scene the light volumes
cover. Generally the smaller your lights are the more chance you’ll see a benefit rather
than a penalty from clipping. If you’re not using shadows, or you’re using [Integrated
Texture Shadows], page 189 where passes are specified in a custom manner, then specify
the option per-pass using this attribute.

A specific note about OpenGL: user clip planes are completely ignored when you use
an ARB vertex program. This means light clip planes won’t help much if you use ARB
vertex programs on GL, although OGRE will perform some optimisation of its own, in
that if it sees that the clip volume is completely off-screen, it won’t perform a render at
all. When using GLSL, user clipping can be used but you have to use glClipVertex in your
shader, see the GLSL documentation for more information. In Direct3D user clip planes
are always respected.

illumination stage
When using an additive lighting mode (SHADOWTYPE STENCIL ADDITIVE or
SHADOWTYPE TEXTURE ADDITIVE), the scene is rendered in 3 discrete stages, ambient (or
pre-lighting), per-light (once per light, with shadowing) and decal (or post-lighting). Usually
OGRE figures out how to categorise your passes automatically, but there are some effects
you cannot achieve without manually controlling the illumination. For example specular
effects are muted by the typical sequence because all textures are saved until the ’decal’
stage, which dulls the specular highlights. Instead, you could do texturing within the per-light
stage if it’s possible for your material and thus add the specular on after the decal texturing,
and have no post-light rendering.

If you assign an illumination stage to a pass you have to assign it to all passes in the
technique otherwise it will be ignored. Also note that whilst you can have more than one
pass in each group, they cannot alternate, i.e. all ambient passes will be before all per-light
passes, which will also be before all decal passes. Within their categories the passes will
retain their ordering though.

Format: illumination stage <ambient|per light|decal>

Default: none (autodetect)

normalise normals
Sets whether or not this pass renders with all vertex normals being automatically re-
normalised.

Format: normalise normals <on|off>

Scaling objects causes normals to also change magnitude, which can throw off your
lighting calculations. By default, the SceneManager detects this and will automatically re-
normalise normals for any scaled object, but this has a cost. If you’d prefer to control this
manually, call SceneManager::setNormaliseNormalsOnScale(false) and then use this option
on materials which are sensitive to normals being resized.

Default: normalise normals off



transparent sorting
Sets if transparent textures should be sorted by depth or not.

Format: transparent sorting <on|off|force>

By default all transparent materials are sorted such that renderables furthest away from
the camera are rendered first. This is usually the desired behaviour, but in certain cases this
depth sorting may be unnecessary or undesirable - for example, if you need to ensure
the rendering order does not change from one frame to the next. In such cases you can set
the value to ’off’ to prevent sorting.

You can also use the keyword ’force’ to force transparent sorting on, regardless of other
circumstances. Usually sorting is only used when the pass is also transparent, and has a
depth write or read which indicates it cannot reliably render without sorting. By using
’force’, you tell OGRE to sort this pass no matter what other circumstances are present.

Default: transparent sorting on

cull hardware
Sets the hardware culling mode for this pass.

Format: cull hardware <clockwise|anticlockwise|none>

A typical way for the hardware rendering engine to cull triangles is based on the ’vertex
winding’ of triangles. Vertex winding refers to the direction in which the vertices are
passed or indexed to in the rendering operation as viewed from the camera, and will either
be clockwise or anticlockwise (that’s ’counterclockwise’ for you Americans out there ;).
If the option ’cull hardware clockwise’ is set, all triangles whose vertices are viewed in
clockwise order from the camera will be culled by the hardware. ’anticlockwise’ is the
reverse (obviously), and ’none’ turns off hardware culling so all triangles are rendered (useful
for creating 2-sided passes).

Default: cull hardware clockwise


NB this is the same as OpenGL’s default but the opposite of Direct3D’s default (because
Ogre uses a right-handed coordinate system like OpenGL).
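A minimal sketch of a two-sided pass, which disables hardware culling (and software culling, covered by its own attribute) so both faces of each triangle are rendered; the names are illustrative:

```
material Example/Banner
{
	technique
	{
		pass
		{
			cull_hardware none
			cull_software none
			texture_unit
			{
				texture banner.png
			}
		}
	}
}
```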

cull software
Sets the software culling mode for this pass.

Format: cull software <back|front|none>

In some situations the engine will also cull geometry in software before sending it to the
hardware renderer. This setting only takes effect on SceneManagers that use it (since it is
best used on large groups of planar world geometry rather than on movable geometry, where
it would be expensive), but if used can cull geometry before it is sent to the hardware.
In this case the culling is based on whether the ’back’ or ’front’ of the triangle is facing
the camera - this definition is based on the face normal (a vector which sticks out of the
front side of the polygon perpendicular to the face). Since Ogre expects face normals
to be on anticlockwise side of the face, ’cull software back’ is the software equivalent of
’cull hardware clockwise’ setting, which is why they are both the default. The naming is
different to reflect the way the culling is done though, since most of the time face normals are
pre-calculated and they don’t have to be the way Ogre expects - you could set ’cull hardware
none’ and completely cull in software based on your own face normals, if you have the right
SceneManager which uses them.

Default: cull software back

lighting
Sets whether or not dynamic lighting is turned on for this pass. If lighting is turned
off, all objects rendered using the pass will be fully lit. This attribute has no effect if a
vertex program is used.

Format: lighting <on|off>

Turning dynamic lighting off makes any ambient, diffuse, specular, emissive and shading
properties for this pass redundant. When lighting is turned on, objects are lit according to
their vertex normals for diffuse and specular light, and globally for ambient and emissive.

Default: lighting on

shading
Sets the kind of shading which should be used for representing dynamic lighting for this
pass.

Format: shading <flat|gouraud|phong>

When dynamic lighting is turned on, the effect is to generate colour values at each vertex.
Whether these values are interpolated across the face (and how) depends on this setting.

flat No interpolation takes place. Each face is shaded with a single colour deter-
mined from the first vertex in the face.
gouraud Colour at each vertex is linearly interpolated across the face.
phong Vertex normals are interpolated across the face, and these are used to determine
colour at each pixel. Gives a more natural lighting effect but is more expensive
and works better at high levels of tessellation. Not supported on all hardware.
Default: shading gouraud

polygon mode
Sets how polygons should be rasterised, i.e. whether they should be filled in, or just drawn
as lines or points.

Format: polygon mode <solid|wireframe|points>

solid The normal situation - polygons are filled in.


wireframe Polygons are drawn in outline only.
points Only the points of each polygon are rendered.
Default: polygon mode solid

polygon mode overrideable


Sets whether or not the [polygon mode], page 38 set on this pass can be downgraded by
the camera, if the camera itself is set to a lower polygon mode. If set to false, this pass will
always be rendered at its own chosen polygon mode no matter what the camera says. The
default is true.

Format: polygon mode overrideable <true|false>

fog override
Tells the pass whether it should override the scene fog settings, and enforce its own. Very
useful for things that you don’t want to be affected by fog when the rest of the scene is
fogged, or vice versa. Note that this only affects fixed-function fog - the original scene
fog parameters are still sent to shaders which use the fog params parameter binding (this
allows you to turn off fixed function fog and calculate it in the shader instead; if you want
to disable shader fog you can do that through shader parameters anyway).

Format: fog override <override?> [<type> <colour> <density> <start> <end>]

Default: fog override false

If you specify ’true’ for the first parameter and you supply the rest of the parameters,
you are telling the pass to use these fog settings in preference to the scene settings, whatever
they might be. If you specify ’true’ but provide no further parameters, you are telling this
pass to never use fogging no matter what the scene says. Here is an explanation of the
parameters:

type none = No fog, equivalent of just using ’fog override true’


linear = Linear fog from the <start> and <end> distances
exp = Fog increases exponentially from the camera (fog = 1/e^(distance *
density)), use <density> param to control it
exp2 = Fog increases at the square of FOG EXP, i.e. even quicker (fog =
1/e^((distance * density)^2)), use <density> param to control it
colour Sequence of 3 floating point values from 0 to 1 indicating the red, green and
blue intensities
density The density parameter used in the ’exp’ or ’exp2’ fog types. Not used in linear
mode but param must still be there as a placeholder
start The start distance from the camera of linear fog. Must still be present in other
modes, even though it is not used.
end The end distance from the camera of linear fog. Must still be present in other
modes, even though it is not used.

Example: fog override true exp 1 1 1 0.002 100 10000



colour write
Sets whether or not this pass renders with colour writing.

Format: colour write <on|off>

If colour writing is off no visible pixels are written to the screen during this pass. You
might think this is useless, but if you render with colour writing off, and with very minimal
other settings, you can use this pass to initialise the depth buffer before subsequently ren-
dering other passes which fill in the colour data. This can give you significant performance
boosts on some newer cards, especially when using complex fragment programs, because if
the depth check fails then the fragment program is never run.

Default: colour write on
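As a sketch, a depth-only first pass followed by a shaded pass; the ’equal’ depth function means the expensive shading only runs where the nearest depth was already laid down (material, pass and texture names are invented):

```
material Example/DepthPrime
{
	technique
	{
		// Cheap pass: fills the depth buffer, writes no colour
		pass prime
		{
			colour_write off
		}
		// Expensive pass: shading only happens where depth matches
		pass shade
		{
			depth_func equal
			depth_write off
			texture_unit
			{
				texture detail.png
			}
		}
	}
}
```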

start light
Sets the first light which will be considered for use with this pass.

Format: start light <number>

You can use this attribute to offset the starting point of the lights for this pass. In other
words, if you set start light to 2 then the first light to be processed in that pass will be the
third actual light in the applicable list. You could use this option to use different passes to
process the first couple of lights versus the second couple of lights for example, or use it in
conjunction with the [iteration], page 41 option to start the iteration from a given point in
the list (e.g. doing the first 2 lights in the first pass, and then iterating every 2 lights from
then on perhaps).

Default: start light 0

max lights
Sets the maximum number of lights which will be considered for use with this pass.

Format: max lights <number>

The maximum number of lights which can be used when rendering fixed-function
materials is set by the rendering system, and is typically 8. When you are using the
programmable pipeline (See Section 3.1.9 [Using Vertex/Geometry/Fragment Programs in
a Pass], page 79) this limit is dependent on the program you are running, or, if you use
’iteration once per light’ or a variant (See [iteration], page 41), it is effectively only bounded
by the number of passes you are willing to use. If you are not using pass iteration, the light
limit applies once for this pass. If you are using pass iteration, the light limit applies across
all iterations of this pass - for example if you have 12 lights in range with an ’iteration
once per light’ setup but your max lights is set to 4 for that pass, the pass will only iterate
4 times.

Default: max lights 8
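A hedged illustration of splitting light processing across passes using start light, max lights and iteration together (the second pass blends additively so its lighting contribution accumulates):

```
material Example/SplitLights
{
	technique
	{
		// First pass handles the first two lights
		pass
		{
			max_lights 2
		}
		// Second pass iterates over the remaining lights, two at a time
		pass
		{
			start_light 2
			iteration 1 per_n_lights 2
			scene_blend add
		}
	}
}
```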

iteration
Sets whether or not this pass is iterated, i.e. issued more than once.

Format 1: iteration <once | once per light> [lightType]

Format 2: iteration <number> [<per light> [lightType]]

Format 3: iteration <number> [<per n lights> <num lights> [lightType]]

Examples:
iteration once
The pass is only executed once which is the default behaviour.
iteration once per light point
The pass is executed once for each point light.
iteration 5 The render state for the pass will be setup and then the draw call will execute
5 times.
iteration 5 per light point
The render state for the pass will be setup and then the draw call will execute
5 times. This will be done for each point light.
iteration 1 per n lights 2 point
The render state for the pass will be setup and the draw call executed once for
every 2 lights.

By default, passes are only issued once. However, if you use the programmable pipeline,
or you wish to exceed the normal limits on the number of lights which are supported, you
might want to use the once per light option. In this case, only light index 0 is ever used, and
the pass is issued multiple times, each time with a different light in light index 0. Clearly
this will make the pass more expensive, but it may be the only way to achieve certain effects
such as per-pixel lighting effects which take into account 1..n lights.

Using a number instead of "once" instructs the pass to iterate more than once after the
render state is setup. The render state is not changed after the initial setup so repeated
draw calls are very fast and ideal for passes using programmable shaders that must iterate
more than once with the same render state i.e. shaders that do fur, motion blur, special
filtering.

If you use once per light, you should also add an ambient pass to the technique before
this pass, otherwise when no lights are in range of this object it will not get rendered at
all; this is important even when you have no ambient light in the scene, because you would
still want the object’s silhouette to appear.

The lightType parameter to the attribute only applies if you use once per light,
per light, or per n lights and restricts the pass to being run for lights of a single type
(either ’point’, ’directional’ or ’spot’). In the example, the pass will be run once per point
light. This can be useful because when you’re writing a vertex / fragment program it is
a lot easier if you can assume the kind of lights you’ll be dealing with. However at least
point and directional lights can be dealt with in one way.

Default: iteration once

Example: Simple Fur shader material script that uses a second pass with 10 iterations
to grow the fur:
// GLSL simple Fur
vertex_program GLSLDemo/FurVS glsl
{
source fur.vert
default_params
{
param_named_auto lightPosition light_position_object_space 0
param_named_auto eyePosition camera_position_object_space
param_named_auto passNumber pass_number
param_named_auto multiPassNumber pass_iteration_number
param_named furLength float 0.15
}
}

fragment_program GLSLDemo/FurFS glsl
{
source fur.frag
default_params
{
param_named Ka float 0.2
param_named Kd float 0.5
param_named Ks float 0.0
param_named furTU int 0
}
}

material Fur
{
technique GLSL
{
pass base_coat
{
ambient 0.7 0.7 0.7
diffuse 0.5 0.8 0.5
specular 1.0 1.0 1.0 1.5

vertex_program_ref GLSLDemo/FurVS
{
}

fragment_program_ref GLSLDemo/FurFS
{
}

texture_unit
{
texture Fur.tga
tex_coord_set 0
filtering trilinear
}
}

pass grow_fur
{
ambient 0.7 0.7 0.7
diffuse 0.8 1.0 0.8
specular 1.0 1.0 1.0 64
depth_write off
scene_blend src_alpha one
iteration 10

vertex_program_ref GLSLDemo/FurVS
{
}

fragment_program_ref GLSLDemo/FurFS

{
}

texture_unit
{
texture Fur.tga
tex_coord_set 0
filtering trilinear
}
}
}
}
Note: use gpu program auto parameters [pass number], page 91 and
[pass iteration number], page 92 to tell the vertex, geometry or fragment program
the pass number and iteration number.

point size
This setting allows you to change the size of points when rendering a point list, or a list of
point sprites. The interpretation of this command depends on the [point size attenuation],
page 45 option - if it is off (the default), the point size is in screen pixels, if it is on, it
expressed as normalised screen coordinates (1.0 is the height of the screen) when the point
is at the origin.

NOTE: Some drivers have an upper limit on the size of points they support - this can
even vary between APIs on the same card! Don’t rely on point sizes that cause the points
to get very large on screen, since they may get clamped on some cards. Upper sizes can
range from 64 to 256 pixels.

Format: point size <size>

Default: point size 1.0

point sprites
This setting specifies whether or not hardware point sprite rendering is enabled for this
pass. Enabling it means that a point list is rendered as a list of quads rather than a list of
dots. It is very useful to use this option if you’re using a BillboardSet and only need to use
point oriented billboards which are all of the same size. You can also use it for any other
point list render.

Format: point sprites <on|off>

Default: point sprites off

point size attenuation


Defines whether point size is attenuated with view space distance, and in what fashion. This
option is especially useful when you’re using point sprites (See [point sprites], page 44) since
it defines how they reduce in size as they get further away from the camera. You can also
disable this option to make point sprites a constant screen size (like points), or enable it for
points so they change size with distance.

You only have to provide the final 3 parameters if you turn attenuation on. The formula
for attenuation is that the size of the point is multiplied by 1 / (constant + linear * dist +
quadratic * dist^2); therefore turning it off is equivalent to (constant = 1, linear = 0, quadratic
= 0) and standard perspective attenuation is (constant = 0, linear = 1, quadratic = 0).
The latter is assumed if you leave out the final 3 parameters when you specify ’on’.

Note that the resulting attenuated size is clamped to the minimum and maximum point
size, see the next section.

Format: point size attenuation <on|off> [constant linear quadratic]

Default: point size attenuation off

point size min


Sets the minimum point size after attenuation ([point size attenuation], page 45). For
details on the size metrics, See [point size], page 44.

Format: point size min <size>

Default: point size min 0

point size max


Sets the maximum point size after attenuation ([point size attenuation], page 45). For
details on the size metrics, See [point size], page 44. A value of 0 means the maximum is
set to the same as the max size reported by the current card.

Format: point size max <size>

Default: point size max 0
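Bringing the point attributes together, a speculative spark material using point sprites with distance attenuation (texture name assumed):

```
material Example/Sparks
{
	technique
	{
		pass
		{
			point_sprites on
			point_size 2
			point_size_attenuation on
			point_size_min 1
			point_size_max 64
			scene_blend add
			depth_write off
			texture_unit
			{
				texture spark.png
			}
		}
	}
}
```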

3.1.3 Texture Units


Here are the attributes you can use in a ’texture unit’ section of a .material script:

Available Texture Layer Attributes


• [texture alias], page 46
• [texture], page 47
• [anim texture], page 50
• [cubic texture], page 50
• [tex coord set], page 52
• [tex address mode], page 53
• [tex border colour], page 53
• [filtering], page 54
• [max anisotropy], page 55
• [mipmap bias], page 55
• [colour op], page 55
• [colour op ex], page 56
• [colour op multipass fallback], page 58
• [alpha op ex], page 59
• [env map], page 59
• [scroll], page 60
• [scroll anim], page 60
• [rotate], page 60
• [rotate anim], page 61
• [scale], page 61
• [wave xform], page 61
• [transform], page 62
• [binding type], page 51
• [content type], page 51
You can also use a nested ’texture source’ section in order to use a special add-in as a
source of texture data, See Chapter 6 [External Texture Sources], page 176 for details.

Attribute Descriptions
texture alias
Sets the alias name for this texture unit.

Format: texture alias <name>

Example: texture alias NormalMap



Setting the texture alias name is useful if this material is to be inherited by other
materials and only the textures will be changed in the new material. (See Section 3.1.12
[Texture Aliases], page 100)

Default: If a texture unit has a name then the texture alias defaults to the texture unit
name.
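A brief sketch of the inheritance workflow this enables: a derived material overrides just the aliased texture via set texture alias (material and texture names invented for illustration):

```
material BaseWall
{
	technique
	{
		pass
		{
			texture_unit
			{
				texture_alias DiffuseMap
				texture grey_wall.png
			}
		}
	}
}

// Inherits everything from BaseWall, swapping only the aliased texture
material RedWall : BaseWall
{
	set_texture_alias DiffuseMap red_wall.png
}
```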

texture
Sets the name of the static texture image this layer will use.

Format: texture <texturename> [<type>] [unlimited | numMipMaps] [alpha] [<PixelFormat>] [gamma]

Example: texture funkywall.jpg

This setting is mutually exclusive with the anim texture attribute. Note that the texture
file cannot include spaces. Those of you Windows users who like spaces in filenames, please
get over it and use underscores instead.

The ’type’ parameter allows you to specify the type of texture to create - the default is
’2d’, but you can override this; here’s the full list:
1d A 1-dimensional texture; that is, a texture which is only 1 pixel high. These
kinds of textures can be useful when you need to encode a function in a texture
and use it as a simple lookup, perhaps in a fragment program. It is impor-
tant that you use this setting when you use a fragment program which uses
1-dimensional texture coordinates, since GL requires you to use a texture type
that matches (D3D will let you get away with it, but you ought to plan for
cross-compatibility). Your texture widths should still be a power of 2 for best
compatibility and performance.
2d The default type which is assumed if you omit it, your texture has a width and
a height, both of which should preferably be powers of 2, and if you can, make
them square because this will look best on the most hardware. These can be
addressed with 2D texture coordinates.
3d A 3 dimensional texture i.e. volume texture. Your texture has a width and a height,
both of which should be powers of 2, and a depth. These can be addressed
with 3d texture coordinates i.e. through a pixel shader.
cubic This texture is made up of 6 2D textures which are pasted around the inside of
a cube. It can be addressed with 3D texture coordinates and is useful for cubic
reflection maps and normal maps.
The ’numMipMaps’ option allows you to specify the number of mipmaps to generate for
this texture. The default is ’unlimited’ which means mips down to 1x1 size are generated.
Chapter 3: Scripts 48

You can specify a fixed number (even 0) if you like instead. Note that if you use the same
texture in many material scripts, the number of mipmaps generated will conform to the
number specified in the first texture unit used to load the texture - so be consistent with
your usage.

The ’alpha’ option allows you to specify that a single channel (luminance) texture should
be loaded as alpha, rather than the default which is to load it into the red channel. This
can be helpful if you want to use alpha-only textures in the fixed function pipeline.
Default: none

The <PixelFormat> option allows you to specify the desired pixel format of the texture
to create, which may be different to the pixel format of the texture file being loaded. Bear
in mind that the final pixel format will be constrained by hardware capabilities so you may
not get exactly what you ask for. The available options are:
PF_L8 8-bit pixel format, all bits luminance.
PF_L16 16-bit pixel format, all bits luminance.
PF_A8 8-bit pixel format, all bits alpha.
PF_A4L4 8-bit pixel format, 4 bits alpha, 4 bits luminance.
PF_BYTE_LA
2-byte pixel format, 1 byte luminance, 1 byte alpha.
PF_R5G6B5
16-bit pixel format, 5 bits red, 6 bits green, 5 bits blue.
PF_B5G6R5
16-bit pixel format, 5 bits blue, 6 bits green, 5 bits red.
PF_R3G3B2
8-bit pixel format, 3 bits red, 3 bits green, 2 bits blue.
PF_A4R4G4B4
16-bit pixel format, 4 bits for alpha, red, green and blue.
PF_A1R5G5B5
16-bit pixel format, 1 bit for alpha, 5 bits for red, green and blue.
PF_R8G8B8
24-bit pixel format, 8 bits for red, green and blue.
PF_B8G8R8
24-bit pixel format, 8 bits for blue, green and red.
PF_A8R8G8B8
32-bit pixel format, 8 bits for alpha, red, green and blue.
PF_A8B8G8R8
32-bit pixel format, 8 bits for alpha, blue, green and red.
PF_B8G8R8A8
32-bit pixel format, 8 bits for blue, green, red and alpha.
PF_R8G8B8A8
32-bit pixel format, 8 bits for red, green, blue and alpha.
PF_X8R8G8B8
32-bit pixel format, 8 bits for red, 8 bits for green, 8 bits for blue, like
PF_A8R8G8B8 but the alpha is discarded.
PF_X8B8G8R8
32-bit pixel format, 8 bits for blue, 8 bits for green, 8 bits for red, like
PF_A8B8G8R8 but the alpha is discarded.
PF_A2R10G10B10
32-bit pixel format, 2 bits for alpha, 10 bits for red, green and blue.
PF_A2B10G10R10
32-bit pixel format, 2 bits for alpha, 10 bits for blue, green and red.
PF_FLOAT16_R
16-bit pixel format, 16 bits (float) for red.
PF_FLOAT16_RGB
48-bit pixel format, 16 bits (float) each for red, green and blue.
PF_FLOAT16_RGBA
64-bit pixel format, 16 bits (float) each for red, green, blue and alpha.
PF_FLOAT32_R
32-bit pixel format, 32 bits (float) for red.
PF_FLOAT32_RGB
96-bit pixel format, 32 bits (float) each for red, green and blue.
PF_FLOAT32_RGBA
128-bit pixel format, 32 bits (float) each for red, green, blue and alpha.
PF_SHORT_RGBA
64-bit pixel format, 16 bits for red, green, blue and alpha.
The ’gamma’ option informs the renderer that you want the graphics hardware to
perform gamma correction on the texture values as they are sampled for rendering. This is
only applicable for textures which have 8-bit colour channels (e.g. PF_R8G8B8). Often,
8-bit per channel textures will be stored in gamma space in order to increase the precision
of the darker colours (http://en.wikipedia.org/wiki/Gamma_correction), but this can
throw off blending and filtering calculations since they assume linear space colour values.
For the best quality shading, you may want to enable gamma correction so that the
hardware converts the texture values to linear space for you automatically when sampling the
texture; the calculations in the pipeline can then be done in a reliable linear colour space.
When rendering to a final 8-bit per channel display, you’ll also want to convert back to
gamma space, which can be done in your shader (by raising to the power 1/2.2) or by
enabling gamma correction on the texture being rendered to or on the render window. Note
that the ’gamma’ option on textures is applied on loading the texture, so it must be specified
consistently if you use this texture in multiple places.
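Putting several of these options together, a texture unit that requests a specific pixel format and hardware gamma correction might look like the following sketch (the material and texture names are purely illustrative):

```
material Examples/GammaWall
{
    technique
    {
        pass
        {
            texture_unit
            {
                // 2d is the default type; 'unlimited' generates mips down to 1x1.
                // The hardware may substitute a different pixel format if
                // PF_X8R8G8B8 is not supported.
                texture stonewall.jpg 2d unlimited PF_X8R8G8B8 gamma
            }
        }
    }
}
```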

anim_texture
Sets the images to be used in an animated texture layer. In this case an animated texture
layer means one which has multiple frames, each of which is a separate image file. There
are 2 formats, one for implicitly determined image names, one for explicitly named images.

Format1 (short): anim_texture <base_name> <num_frames> <duration>

Example: anim_texture flame.jpg 5 2.5

This sets up an animated texture layer made up of 5 frames named flame_0.jpg,
flame_1.jpg, flame_2.jpg etc, with an animation length of 2.5 seconds (2fps). If duration is
set to 0, then no automatic transition takes place and frames must be changed manually
in code.

Format2 (long): anim_texture <frame1> <frame2> ... <duration>

Example: anim_texture flamestart.jpg flamemore.png flameagain.jpg moreflame.jpg lastflame.tga 2.5

This sets up the same duration animation but from 5 separately named image files. The
first format is more concise, but the second is provided if you cannot make your images
conform to the naming standard required for it.

Default: none

cubic_texture
Sets the images used in a cubic texture, i.e. one made up of 6 individual images making
up the faces of a cube. These kinds of textures are used for reflection maps (if hardware
supports cubic reflection maps) or skyboxes. There are 2 formats, a brief format expecting
image names of a particular format and a more flexible but longer format for arbitrarily
named textures.

Format1 (short): cubic_texture <base_name> <combinedUVW|separateUV>

The base name in this format is something like ’skybox.jpg’, and the system will expect
you to provide skybox_fr.jpg, skybox_bk.jpg, skybox_up.jpg, skybox_dn.jpg, skybox_lf.jpg,
and skybox_rt.jpg for the individual faces.

Format2 (long): cubic_texture <front> <back> <left> <right> <up> <down> separateUV

In this case each face is specified explicitly, in case you don’t want to conform to the
image naming standards above. You can only use this for the separateUV version, since the
combinedUVW version requires a single texture name to be assigned to the combined 3D
texture (see below).

In both cases the final parameter means the following:


combinedUVW
The 6 textures are combined into a single ’cubic’ texture map which is then ad-
dressed using 3D texture coordinates with U, V and W components. Necessary
for reflection maps since you never know which face of the box you are going
to need. Note that not all cards support cubic environment mapping.
separateUV
The 6 textures are kept separate but are all referenced by this single texture
layer. One texture at a time is active (they are actually stored as 6 frames),
and they are addressed using standard 2D UV coordinates. This type is good
for skyboxes since only one face is rendered at one time and this has more
guaranteed hardware support on older cards.

Default: none
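As a sketch of both formats in use, a skybox layer might use the short format with separateUV, while a reflection map uses combinedUVW (the texture names are illustrative; the short format expects the suffixed face files described above):

```
texture_unit
{
    // expects skybox_fr.jpg, skybox_bk.jpg, skybox_up.jpg,
    // skybox_dn.jpg, skybox_lf.jpg and skybox_rt.jpg
    cubic_texture skybox.jpg separateUV
}

texture_unit
{
    // one combined cubic map, addressed with 3D coordinates;
    // suitable for use with env_map cubic_reflection
    cubic_texture reflection.jpg combinedUVW
}
```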

binding_type
Tells this texture unit to bind to either the fragment processing unit or the vertex processing
unit (for Section 3.1.10 [Vertex Texture Fetch], page 95).

Format: binding_type <vertex|fragment>

Default: binding_type fragment
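For example, a texture unit supplying a displacement map to a vertex program could be bound to the vertex processing unit like this (the texture name is illustrative, and the hardware must support vertex texture fetch):

```
texture_unit
{
    texture displacement.png
    binding_type vertex
}
```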

content_type
Tells this texture unit where it should get its content from. The default is to get texture
content from a named texture, as defined with the [texture], page 47, [cubic_texture],
page 50, or [anim_texture], page 50 attributes. However you can also pull texture information
from other automated sources. The options are:

named The default option, this derives texture content from a texture name, loaded
by ordinary means from a file or having been manually created with a given
name.

shadow This option allows you to pull in a shadow texture, and is only valid when
you use texture shadows and one of the ’custom sequence’ shadowing types
(See Chapter 7 [Shadows], page 180). The shadow texture in question will be
from the ’n’th closest light that casts shadows, unless you use light-based pass
iteration or the light start option which may start the light index higher. When
you use this option in multiple texture units within the same pass, each one
references the next shadow texture. The shadow texture index is reset in the
next pass, in case you want to take into account the same shadow textures again
in another pass (e.g. a separate specular / gloss pass). By using this option,
the correct light frustum projection is set up for you for use in fixed-function rendering;
if you use shaders, just reference the texture_viewproj_matrix auto parameter
in your shader.

compositor
This option allows you to reference a texture from a compositor, and is only
valid when the pass is rendered within a compositor sequence. This can be
either in a render_scene directive inside a compositor script, or in a general pass
in a viewport that has a compositor attached. Note that this is a reference only,
meaning that it does not change the render order. You must make sure that
the order is reasonable for what you are trying to achieve (for example, texture
pooling might cause the referenced texture to be overwritten by something else
by the time it is referenced).

The extra parameters for the content type are only required for this type:

The first is the name of the compositor being referenced. (Required)

The second is the name of the texture to reference in the compositor. (Re-
quired)

The third is the index of the texture to take, in case of an MRT. (Optional)

Format: content_type <named|shadow|compositor> [<Referenced Compositor Name>]
[<Referenced Texture Name>] [<Referenced MRT Index>]

Default: content_type named

Example: content_type compositor DepthCompositor OutputTexture


tex_coord_set

Sets which texture coordinate set is to be used for this texture layer. A mesh can define
multiple sets of texture coordinates; this sets which one this material uses.

Format: tex_coord_set <set_num>

Example: tex_coord_set 2

Default: tex_coord_set 0

tex_address_mode

Defines what happens when texture coordinates exceed 1.0 for this texture layer. You can
use the simple format to specify the addressing mode for all 3 potential texture coordinates
at once, or you can use the 2/3 parameter extended format to specify a different mode per
texture coordinate.

Simple Format: tex_address_mode <uvw_mode>

Extended Format: tex_address_mode <u_mode> <v_mode> [<w_mode>]
wrap Any value beyond 1.0 wraps back to 0.0. Texture is repeated.
clamp Values beyond 1.0 are clamped to 1.0. Texture ’streaks’ beyond 1.0 since last
line of pixels is used across the rest of the address space. Useful for textures
which need exact coverage from 0.0 to 1.0 without the ’fuzzy edge’ wrap gives
when combined with filtering.
mirror Texture flips at every boundary, meaning the texture is mirrored every 1.0 u or v.
border Values outside the range [0.0, 1.0] are set to the border colour; you might also
want to set the [tex_border_colour], page 53 attribute.

Default: tex_address_mode wrap
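As a small sketch, a decal texture that should not repeat could clamp u and v while leaving w at its default (texture name illustrative):

```
texture_unit
{
    texture decal.png
    // extended format: u mode, v mode (w omitted)
    tex_address_mode clamp clamp
}
```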

tex_border_colour

Sets the border colour of the border texture address mode (see [tex_address_mode], page 53).

Format: tex_border_colour <red> <green> <blue> [<alpha>]

NB valid colour values are between 0.0 and 1.0.

Example: tex_border_colour 0.0 1.0 0.3

Default: tex_border_colour 0.0 0.0 0.0 1.0

filtering
Sets the type of texture filtering used when magnifying or minifying a texture. There are
2 formats to this attribute, the simple format where you simply specify the name of a
predefined set of filtering options, and the complex format, where you individually set the
minification, magnification, and mip filters yourself.

Simple Format
Format: filtering <none|bilinear|trilinear|anisotropic>
Default: filtering bilinear

With this format, you only need to provide a single parameter which is one of the following:
none No filtering or mipmapping is used. This is equivalent to the complex format
’filtering point point none’.
bilinear 2x2 box filtering is performed when magnifying or reducing a texture, and a
mipmap is picked from the list but no filtering is done between the levels of
the mipmaps. This is equivalent to the complex format ’filtering linear linear
point’.
trilinear 2x2 box filtering is performed when magnifying and reducing a texture, and
the closest 2 mipmaps are filtered together. This is equivalent to the complex
format ’filtering linear linear linear’.
anisotropic
This is the same as ’trilinear’, except the filtering algorithm takes account of
the slope of the triangle in relation to the camera rather than simply doing
a 2x2 pixel filter in all cases. This makes triangles at acute angles look less
fuzzy. Equivalent to the complex format ’filtering anisotropic anisotropic
linear’. Note that in order for this to make any difference, you must also set
the [max_anisotropy], page 55 attribute.

Complex Format
Format: filtering <minification> <magnification> <mip>
Default: filtering linear linear point

This format gives you complete control over the minification, magnification, and mip filters.
Each parameter can be one of the following:
none Nothing - only a valid option for the ’mip’ filter, since this turns mipmapping
off completely. The lowest setting for min and mag is ’point’.
point Pick the closest pixel in min or mag modes. In mip mode, this picks the closest
matching mipmap.
linear Filter a 2x2 box of pixels around the closest one. In the ’mip’ filter this enables
filtering between mipmap levels.
anisotropic
Only valid for min and mag modes; makes the filter compensate for camera-
space slope of the triangles. Note that in order for this to make any difference,
you must also set the [max_anisotropy], page 55 attribute.

max_anisotropy
Sets the maximum degree of anisotropy that the renderer will try to compensate for when
filtering textures. The degree of anisotropy is the ratio between the height of the texture
segment visible in a screen space region versus the width - so for example a floor plane,
which stretches on into the distance and thus the vertical texture coordinates change much
faster than the horizontal ones, has a higher anisotropy than a wall which is facing you head
on (which has an anisotropy of 1 if your line of sight is perfectly perpendicular to it). You
should set the max anisotropy value to something greater than 1 to begin compensating;
higher values can compensate for more acute angles. The maximum value is determined by
the hardware, but it is usually 8 or 16.

In order for this to be used, you have to set the minification and/or the magnification
[filtering], page 54 option on this texture to anisotropic.
Format: max_anisotropy <value>
Default: max_anisotropy 1
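A typical combination for a large ground plane might look like this sketch (texture name illustrative):

```
texture_unit
{
    texture floor_tiles.jpg
    // anisotropic min/mag filtering with linear mip filtering...
    filtering anisotropic anisotropic linear
    // ...which only has an effect when max_anisotropy is above 1
    max_anisotropy 8
}
```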

mipmap_bias
Sets the bias value applied to the mipmapping calculation, thus allowing you to alter the
decision of which level of detail of the texture to use at any distance. The bias value is
applied after the regular distance calculation, and adjusts the mipmap level by 1 level for
each unit of bias. Negative bias values force larger mip levels to be used, positive bias values
force smaller mip levels to be used. The bias is a floating point value so you can use values
in between whole numbers for fine tuning.

In order for this option to be used, your hardware has to support mipmap biasing (exposed
through the render system capabilities), and your minification [filtering], page 54 has to be
set to point or linear.
Format: mipmap_bias <value>
Default: mipmap_bias 0

colour_op
Determines how the colour of this texture layer is combined with the one below it (or the
lighting effect on the geometry if this is the first layer).
Format: colour_op <replace|add|modulate|alpha_blend>

This method is the simplest way to blend texture layers, because it requires only one
parameter, gives you the most common blending types, and automatically sets up 2 blending
methods: one for if single-pass multitexturing hardware is available, and another for if
it is not and the blending must be achieved through multiple rendering passes. It is,
however, quite limited and does not expose the more flexible multitexturing operations,
simply because these can’t be automatically supported in multipass fallback mode. If you want
to use the fancier options, use [colour_op_ex], page 56, but you’ll either have to be sure that
enough multitexturing units will be available, or you should explicitly set a fallback using
[colour_op_multipass_fallback], page 58.

replace Replace all colour with texture with no adjustment.

add Add colour components together.
modulate Multiply colour components together.
alpha_blend
Blend based on texture alpha.

Default: colour_op modulate

colour_op_ex
This is an extended version of the [colour_op], page 55 attribute which allows extremely
detailed control over the blending applied between this and earlier layers. Multitexturing
hardware can apply more complex blending operations than multipass blending, but you
are limited to the number of texture units which are available in hardware.

Format: colour_op_ex <operation> <source1> <source2> [<manual_factor>]
[<manual_colour1>] [<manual_colour2>]

Example: colour_op_ex add_signed src_manual src_current 0.5

See the IMPORTANT note below about the issues between multipass and multitexturing
that using this method can create. Texture colour operations determine how the final colour
of the surface appears when rendered. Texture units are used to combine colour values from
various sources (e.g. the diffuse colour of the surface from lighting calculations, combined
with the colour of the texture). This method allows you to specify the ’operation’ to be
used, i.e. the calculation such as adds or multiplies, and which values to use as arguments,
such as a fixed value or a value from a previous calculation.

Operation options
source1 Use source1 without modification
source2 Use source2 without modification
modulate Multiply source1 and source2 together.
modulate_x2
Multiply source1 and source2 together, then by 2 (brightening).
modulate_x4
Multiply source1 and source2 together, then by 4 (brightening).
add Add source1 and source2 together.
add_signed
Add source1 and source2, then subtract 0.5.
add_smooth
Add source1 and source2, then subtract their product.
subtract Subtract source2 from source1.
blend_diffuse_alpha
Use interpolated alpha value from vertices to scale source1, then
add source2 scaled by (1-alpha).
blend_texture_alpha
As blend_diffuse_alpha but use alpha from the texture.
blend_current_alpha
As blend_diffuse_alpha but use the current alpha from previous stages
(same as blend_diffuse_alpha for the first layer).
blend_manual
As blend_diffuse_alpha but use a constant manual alpha value
specified in <manual_factor>.
dotproduct
The dot product of source1 and source2.
blend_diffuse_colour
Use interpolated colour value from vertices to scale source1, then
add source2 scaled by (1-colour).
Source1 and source2 options
src_current
The colour as built up from previous stages.
src_texture
The colour derived from the texture assigned to this layer.
src_diffuse
The interpolated diffuse colour from the vertices (same as
src_current for the first layer).
src_specular
The interpolated specular colour from the vertices.
src_manual
The manual colour specified at the end of the command.

For example, ’modulate’ takes the colour results of the previous layer, and multiplies them
with the new texture being applied. Bear in mind that colours are RGB values from 0.0-1.0,
so multiplying them together will result in values in the same range, ’tinted’ by the multiply.
Note however that a straight multiply normally has the effect of darkening the textures -
for this reason there are brightening operations like modulate_x2. Note that because of the
limitations of some underlying APIs (Direct3D included) the ’texture’ argument can only
be used as the first argument, not the second.

Note that the trailing manual parameters are only required if you decide to pass values
manually into the operation. Hence you only need to fill these in if you use the
’blend_manual’ operation or the ’src_manual’ source.

IMPORTANT: Ogre tries to use multitexturing hardware to blend texture layers
together. However, if it runs out of texturing units (e.g. 2 on a GeForce2, 4 on a GeForce3) it
has to fall back on multipass rendering, i.e. rendering the same object multiple times with
different textures. This is both less efficient, and there is a smaller range of blending
operations which can be performed. For this reason, if you use this method you really should
set the colour_op_multipass_fallback attribute to specify which effect you want to fall back
on if sufficient hardware is not available (the default is just ’modulate’, which is unlikely to
be what you want if you’re doing swanky blending here). If you wish to avoid having to do
this, use the simpler colour_op attribute, which allows less flexible blending options but sets
up the multipass fallback automatically, since it only allows operations which have direct
multipass equivalents.

Default: none (colour_op modulate)

colour_op_multipass_fallback

Sets the multipass fallback operation for this layer, for use if you used colour_op_ex and
not enough multitexturing hardware is available.

Format: colour_op_multipass_fallback <src_factor> <dest_factor>

Example: colour_op_multipass_fallback one one_minus_dest_alpha

Because some of the effects you can create using colour op ex are only supported under
multitexturing hardware, if the hardware is lacking the system must fallback on multipass
rendering, which unfortunately doesn’t support as many effects. This attribute is for you
to specify the fallback operation which most suits you.

The parameters are the same as in the scene_blend attribute; this is because multipass
rendering IS effectively scene blending: each layer is rendered on top of the last using
the same mechanism as making an object transparent, it’s just being rendered in the same
place repeatedly to get the multitexture effect. If you use the simpler (and less flexible)
colour_op attribute you don’t need to call this, as the system sets up the fallback for you.

alpha_op_ex
Behaves in exactly the same way as [colour_op_ex], page 56 except that it determines
how alpha values are combined between texture layers rather than colour values. The only
difference is that the 2 manual colours at the end of colour_op_ex are just single
floating-point values in alpha_op_ex.
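For instance, to scale the alpha coming from the texture by a constant factor you might write the following sketch (texture name illustrative; note the manual operand is a single float here, not a colour):

```
texture_unit
{
    texture glow.png
    // multiply the texture's alpha by a manual value of 0.5
    alpha_op_ex modulate src_manual src_texture 0.5
}
```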

env_map
Turns on/off a texture coordinate effect that makes this layer an environment map.

Format: env_map <off|spherical|planar|cubic_reflection|cubic_normal>

Environment maps make an object look reflective by using automatic texture coordinate
generation depending on the relationship between the object’s vertices or normals and the
eye.

spherical A spherical environment map. Requires a single texture which is either a fish-
eye lens view of the reflected scene, or some other texture which looks good as a
spherical map (a texture of glossy highlights is popular especially in car sims).
This effect is based on the relationship between the eye direction and the vertex
normals of the object, so works best when there are a lot of gradually changing
normals, i.e. curved objects.
planar Similar to the spherical environment map, but the effect is based on the po-
sition of the vertices in the viewport rather than vertex normals. This effect
is therefore useful for planar geometry (where a spherical env map would not
look good because the normals are all the same) or objects without normals.
cubic_reflection
A more advanced form of reflection mapping which uses a group of 6 textures
making up the inside of a cube, each of which is a view of the scene down one
axis. Works extremely well in all cases but has a higher technical requirement
from the card than spherical mapping. Requires that you bind a [cubic_texture],
page 50 to this texture unit and use the ’combinedUVW’ option.
cubic_normal
Generates 3D texture coordinates containing the camera space normal vector
from the normal information held in the vertex data. Again, full use of this
feature requires a [cubic_texture], page 50 with the ’combinedUVW’ option.

Default: env_map off
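As an example, the classic glossy-highlight effect on a curved model can be sketched with a spherical map (texture name illustrative):

```
texture_unit
{
    // a texture of glossy highlights used as the spherical map
    texture spheremap.png
    env_map spherical
    // usually added over the layer below rather than replacing it
    colour_op add
}
```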

scroll
Sets a fixed scroll offset for the texture.

Format: scroll <x> <y>

This method offsets the texture in this layer by a fixed amount. Useful for small
adjustments without altering texture coordinates in models. However, if you wish to have an
animated scroll effect, see the [scroll_anim], page 60 attribute.

scroll_anim
Sets up an animated scroll for the texture layer. Useful for creating fixed-speed scrolling
effects on a texture layer (for varying scroll speeds, see [wave_xform], page 61).

Format: scroll_anim <xspeed> <yspeed>

rotate
Rotates a texture to a fixed angle. This attribute changes the rotational orientation of a
texture to a fixed angle, useful for fixed adjustments. If you wish to animate the rotation,
see [rotate_anim], page 61.

Format: rotate <angle>


The parameter is an anti-clockwise angle in degrees.

rotate_anim
Sets up an animated rotation effect on this layer. Useful for creating fixed-speed rotation
animations (for varying speeds, see [wave_xform], page 61).

Format: rotate_anim <revs_per_second>

The parameter is a number of anti-clockwise revolutions per second.

scale
Adjusts the scaling factor applied to this texture layer. Useful for adjusting the size of
textures without making changes to geometry. This is a fixed scaling factor; if you wish to
animate this, see [wave_xform], page 61.

Format: scale <x_scale> <y_scale>

Valid scale values are greater than 0, with a scale factor of 2 making the texture twice
as big in that dimension etc.

wave_xform
Sets up a transformation animation based on a wave function. Useful for more advanced
texture layer transform effects. You can add multiple instances of this attribute to a single
texture layer if you wish.

Format: wave_xform <xform_type> <wave_type> <base> <frequency> <phase> <amplitude>

Example: wave_xform scale_x sine 1.0 0.2 0.0 5.0

xform_type
scroll_x Animate the x scroll value
scroll_y Animate the y scroll value
rotate Animate the rotate value
scale_x Animate the x scale value
scale_y Animate the y scale value
wave_type
sine A typical sine wave which smoothly loops between min and max
values
triangle An angled wave which increases & decreases at constant speed,
changing instantly at the extremes
square Max for half the wavelength, min for the rest, with an instant
transition between
sawtooth Gradual steady increase from min to max over the period with an
instant return to min at the end.
inverse_sawtooth
Gradual steady decrease from max to min over the period, with an
instant return to max at the end.
base The base value: the minimum if amplitude > 0, the maximum if amplitude < 0
frequency The number of wave iterations per second, i.e. speed
phase Offset of the wave start
amplitude The size of the wave

The range of the output of the wave will be [base, base + amplitude]. So the example above
scales the texture in the x direction between 1 (normal size) and 6 along a sine wave at one
cycle every 5 seconds (0.2 waves per second).

transform
This attribute allows you to specify a static 4x4 transformation matrix for the texture unit,
thus replacing the individual scroll, rotate and scale attributes mentioned above.

Format: transform m00 m01 m02 m03 m10 m11 m12 m13 m20 m21 m22 m23 m30 m31
m32 m33

The indexes of the 4x4 matrix value above are expressed as m<row><col>.
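For reference, the identity matrix (equivalent to specifying no scroll, rotate or scale at all) would be written as follows (texture name illustrative):

```
texture_unit
{
    texture pattern.png
    // rows of the 4x4 matrix in order m00 m01 m02 m03, m10 ... m33
    transform 1 0 0 0  0 1 0 0  0 0 1 0  0 0 0 1
}
```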

3.1.4 Declaring Vertex/Geometry/Fragment Programs


In order to use a vertex, geometry or fragment program in your materials (See Section 3.1.9
[Using Vertex/Geometry/Fragment Programs in a Pass], page 79), you first have to define
them. A single program definition can be used by any number of materials, the only
prerequisite is that a program must be defined before being referenced in the pass section
of a material.

The definition of a program can either be embedded in the .material script itself (in
which case it must precede any references to it in the script), or if you wish to use the same
program across multiple .material files, you can define it in an external .program script.
You define the program in exactly the same way whether you use a .program script or a
.material script, the only difference is that all .program scripts are guaranteed to have been
parsed before all .material scripts, so you can guarantee that your program has been defined
before any .material script that might use it. Just like .material scripts, .program scripts
will be read from any location which is on your resource path, and you can define many
programs in a single script.

Vertex, geometry and fragment programs can be low-level (i.e. assembler code written
to the specification of a given low-level syntax such as vs_1_1 or arbfp1) or high-level, such
as DirectX9 HLSL, the OpenGL Shading Language (GLSL), or nVidia’s Cg language (See [High-level
Programs], page 67). High level languages give you a number of advantages, such as being
able to write more intuitive code, and possibly being able to target multiple architectures in
a single program (for example, the same Cg program might be able to be used in both D3D
and GL, whilst the equivalent low-level programs would require separate techniques, each
targeting a different API). High-level programs also allow you to use named parameters
instead of simply indexed ones, although parameters are not defined here, they are used in
the Pass.

Here is an example of a definition of a low-level vertex program:


vertex_program myVertexProgram asm
{
source myVertexProgram.asm
syntax vs_1_1
}
As you can see, that’s very simple, and defining a fragment or geometry program is
exactly the same, just with vertex_program replaced with fragment_program or
geometry_program, respectively. You give the program a name in the header, followed by
the word ’asm’ to indicate that this is a low-level program. Inside the braces, you specify where
the source is going to come from (and this is loaded from any of the resource locations as
with other media), and also indicate the syntax being used. You might wonder why the
syntax specification is required when many of the assembler syntaxes have a header iden-
tifying them anyway - well the reason is that the engine needs to know what syntax the
program is in before reading it, because during compilation of the material, we want to skip
programs which use an unsupportable syntax quickly, without loading the program first.

The currently supported syntaxes are:

vs_1_1     This is one of the DirectX vertex shader assembler syntaxes.
           Supported on cards from: ATI Radeon 8500, nVidia GeForce 3

vs_2_0     Another one of the DirectX vertex shader assembler syntaxes.
           Supported on cards from: ATI Radeon 9600, nVidia GeForce FX 5 series

vs_2_x     Another one of the DirectX vertex shader assembler syntaxes.
           Supported on cards from: ATI Radeon X series, nVidia GeForce 6 series

vs_3_0     Another one of the DirectX vertex shader assembler syntaxes.
           Supported on cards from: ATI Radeon HD 2000+, nVidia GeForce 6 series

arbvp1     This is the OpenGL standard assembler format for vertex programs. It’s
           roughly equivalent to DirectX vs_1_1.

vp20       This is an nVidia-specific OpenGL vertex shader syntax which is a superset of
           vs_1_1. ATI Radeon HD 2000+ also supports it.

vp30       Another nVidia-specific OpenGL vertex shader syntax. It is a superset of
           vs_2_0, which is supported on nVidia GeForce FX 5 series and higher. ATI
           Radeon HD 2000+ also supports it.

vp40       Another nVidia-specific OpenGL vertex shader syntax. It is a superset of
           vs_3_0, which is supported on nVidia GeForce 6 series and higher.

ps_1_1, ps_1_2, ps_1_3
           DirectX pixel shader (i.e. fragment program) assembler syntax.
           Supported on cards from: ATI Radeon 8500, nVidia GeForce 3
           NOTE: for ATI 8500, 9000, 9100 and 9200 hardware, this profile can also be
           used in OpenGL. The ATI 8500 to 9200 do not support arbfp1, but do support
           the atifs extension in OpenGL, which is very similar in function to ps_1_4
           in DirectX. Ogre has a built-in ps_1_x to atifs compiler that is
           automatically invoked when ps_1_x is used in OpenGL on ATI hardware.

ps_1_4     DirectX pixel shader (i.e. fragment program) assembler syntax.
           Supported on cards from: ATI Radeon 8500, nVidia GeForce FX 5 series
           NOTE: as above, for ATI 8500 to 9200 hardware this profile can also be used
           in OpenGL via Ogre’s built-in atifs compiler.

ps_2_0     DirectX pixel shader (i.e. fragment program) assembler syntax.
           Supported cards: ATI Radeon 9600, nVidia GeForce FX 5 series

ps_2_x     DirectX pixel shader (i.e. fragment program) assembler syntax. This is
           basically ps_2_0 with a higher number of instructions.
           Supported cards: ATI Radeon X series, nVidia GeForce 6 series

ps_3_0     DirectX pixel shader (i.e. fragment program) assembler syntax.
           Supported cards: ATI Radeon HD 2000+, nVidia GeForce 6 series

ps_3_x     DirectX pixel shader (i.e. fragment program) assembler syntax.
           Supported cards: nVidia GeForce 7 series

arbfp1     This is the OpenGL standard assembler format for fragment programs. It’s
           roughly equivalent to ps_2_0, which means that not all cards that support
           basic pixel shaders under DirectX support arbfp1 (for example, neither the
           GeForce3 nor the GeForce4 supports arbfp1, but they do support ps_1_1).

fp20       This is an nVidia-specific OpenGL fragment shader syntax which is a superset
           of ps_1_3. It allows you to use the ’nvparse’ format for basic fragment
           programs. It actually uses NV_texture_shader and NV_register_combiners to
           provide functionality equivalent to DirectX’s ps_1_1 under GL, but only
           for nVidia cards. However, since ATI cards adopted arbfp1 a little earlier
           than nVidia, it is mainly nVidia cards like the GeForce3 and GeForce4 that
           this will be useful for. You can find more information about nvparse at
           http://developer.nvidia.com/object/nvparse.html.

fp30       Another nVidia-specific OpenGL fragment shader syntax. It is a superset of
           ps_2_0, which is supported on nVidia GeForce FX 5 series and higher. ATI
           Radeon HD 2000+ also supports it.

fp40       Another nVidia-specific OpenGL fragment shader syntax. It is a superset of
           ps_3_0, which is supported on nVidia GeForce 6 series and higher.

gpu_gp, gp4_gp
           An nVidia-specific OpenGL geometry shader syntax.
           Supported cards: nVidia GeForce 8 series

You can get a definitive list of the syntaxes supported by the current card by calling
GpuProgramManager::getSingleton().getSupportedSyntax().
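As a hedged C++ sketch (a fragment, not a complete program; the method names here follow the OGRE 1.7 GpuProgramManager API), you might query this at startup like so:

```
// Query the full set of syntax codes the current render system and
// card accept; SyntaxCodes is a set of strings such as "vs_1_1".
const Ogre::GpuProgramManager::SyntaxCodes& codes =
    Ogre::GpuProgramManager::getSingleton().getSupportedSyntax();

// Convenience test for a single syntax code:
bool vs11Supported =
    Ogre::GpuProgramManager::getSingleton().isSyntaxSupported("vs_1_1");
```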

Specifying Named Constants for Assembler Shaders


Assembler shaders don’t have named constants (also called uniform parameters) because
the language does not support them. However, if for example you decided to precompile
your shaders from a high-level language down to assembler for performance or obscurity, you
might still want to use the named parameters. Well, you actually can: GpuNamedConstants,
which contains the named parameter mappings, has a ’save’ method which you can use to
write this data to disk, where you can reference it later using the ’manual_named_constants’
directive inside your assembler program declaration, e.g.
vertex_program myVertexProgram asm
{
source myVertexProgram.asm
syntax vs_1_1
manual_named_constants myVertexProgram.constants
}
In this case myVertexProgram.constants has been created by calling
highLevelGpuProgram->getNamedConstants().save("myVertexProgram.constants");
sometime earlier as preparation, from the original high-level program. Once you’ve used
this directive, you can use named parameters here even though the assembler program
itself has no knowledge of them.

Default Program Parameters


While defining a vertex, geometry or fragment program, you can also specify the default
parameters to be used for materials which use it, unless they specifically override them.
You do this by including a nested ’default_params’ section, like so:
vertex_program Ogre/CelShadingVP cg
{
source Example_CelShading.cg
entry_point main_vp
profiles vs_1_1 arbvp1

default_params
{
param_named_auto lightPosition light_position_object_space 0
param_named_auto eyePosition camera_position_object_space
param_named_auto worldViewProj worldviewproj_matrix
param_named shininess float 10
}
}
The syntax of the parameter definition is exactly the same as when you define parameters
when using programs, See [Program Parameter Specification], page 80. Defining default
parameters allows you to avoid rebinding common parameters repeatedly (clearly in the
above example, all but ’shininess’ are unlikely to change between uses of the program)
which makes your material declarations shorter.

Declaring Shared Parameters


Often, not every parameter you want to pass to a shader is unique to that program, and
perhaps you want to give the same value to a number of different programs, and a number
of different materials using that program. Shared parameter sets allow you to define a
’holding area’ for shared parameters that can then be referenced when you need them in
particular shaders, while keeping the definition of that value in one place. To define a set
of shared parameters, you do this:
shared_params YourSharedParamsName
{
shared_param_named mySharedParam1 float4 0.1 0.2 0.3 0.4
...
}
As you can see, you need to use the keyword ’shared_params’ and follow it with the
name that you will use to identify these shared parameters. Inside the curly braces, you
can define one parameter per line, in a way which is very similar to the [param_named],
page 92 syntax. The definition of these lines is:

Format: shared_param_named <param_name> <param_type> [<[array_size]>] [<initial_values>]

The param_name must be unique within the set, and the param_type can be any one of
float, float2, float3, float4, int, int2, int3, int4, matrix2x2, matrix2x3, matrix2x4, matrix3x2,
matrix3x3, matrix3x4, matrix4x2, matrix4x3 and matrix4x4. The array_size option allows
you to define arrays of param_type should you wish, and if present must be a number
enclosed in square brackets (and note, must be separated from the param_type with
whitespace). If you wish, you can also initialise the parameters by providing a list of values.
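For instance, an illustrative sketch (the set and parameter names here are invented) using both the array-size and initialisation options might look like:

```
shared_params MySceneSharedParams
{
    // a float4 with initial values supplied
    shared_param_named fogParams float4 0.0 1.0 0.002 0.0
    // an array of 8 matrix4x4 entries, left uninitialised;
    // note the whitespace before the bracketed array size
    shared_param_named instanceTransforms matrix4x4 [8]
}
```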

Once you have defined the shared parameters, you can reference them inside
default_params and params blocks using [shared_params_ref], page 93. You can also
obtain a reference to them in your code via GpuProgramManager::getSharedParameters,
and update the values for all instances using them.
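As a minimal sketch (the program name and source file are hypothetical), a program can pull in every parameter of the set defined above via its default parameters:

```
vertex_program mySharedParamUserVP glsl
{
    source mySharedParamUser.vert

    default_params
    {
        // pulls in all parameters from the named shared set
        shared_params_ref YourSharedParamsName
    }
}
```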

High-level Programs
Support for high-level vertex and fragment programs is provided through plugins; this is to
make sure that an application using OGRE can use as little or as much of the high-level
program functionality as it likes. OGRE currently supports 3 high-level program types,
Cg (Section 3.1.5 [Cg], page 69) (an API- and card-independent, high-level language which
lets you write programs for both OpenGL and DirectX for lots of cards), DirectX 9 High-
Level Shader Language (Section 3.1.6 [HLSL], page 70), and OpenGL Shading Language
(Section 3.1.7 [GLSL], page 71). HLSL can only be used with the DirectX rendersystem,
and GLSL can only be used with the GL rendersystem. Cg can be used with both, although
experience has shown that for more advanced programs, particularly fragment programs
which perform a lot of texture fetches, the rendersystem-specific shader language can
produce better code.

One way to support both HLSL and GLSL is to include separate techniques in the
material script, each one referencing separate programs. However, if the programs are
basically the same, with the same parameters, and the techniques are complex this can bloat
your material scripts with duplication fairly quickly. Instead, if the only difference is the
language of the vertex & fragment program you can use OGRE’s Section 3.1.8 [Unified High-
level Programs], page 76 to automatically pick a program suitable for your rendersystem
whilst using a single technique.
Skeletal Animation in Vertex Programs


You can implement skeletal animation in hardware by writing a vertex program which
uses the per-vertex blending indices and blending weights, together with an array of world
matrices (which will be provided for you by Ogre if you bind the automatic parameter
’world_matrix_array_3x4’). However, you need to communicate this support to Ogre so it
does not perform skeletal animation in software for you. You do this by adding the following
attribute to your vertex program definition:
includes_skeletal_animation true
When you do this, any skeletally animated entity which uses this material will forgo the
usual animation blend and will expect the vertex program to do it, for both vertex positions
and normals. Note that ALL submeshes must be assigned a material which implements
this, and that if you combine skeletal animation with vertex animation (See Chapter 8
[Animation], page 197) then all techniques must be hardware accelerated for any to be.
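As a sketch of a complete definition (the source file, entry point and parameter names are illustrative; ’world_matrix_array_3x4’ is the auto parameter mentioned above):

```
vertex_program myHardwareSkinningVP cg
{
    source hw_skinning.cg
    entry_point main_vp
    profiles vs_1_1 arbvp1

    // tell Ogre the program blends positions and normals itself
    includes_skeletal_animation true

    default_params
    {
        // array of per-bone 3x4 world matrices, kept up to date by Ogre
        param_named_auto worldMatrix3x4Array world_matrix_array_3x4
        param_named_auto viewProjMatrix viewproj_matrix
    }
}
```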

Morph Animation in Vertex Programs


You can implement morph animation in hardware by writing a vertex program which
linearly blends between the first and second position keyframes, passed as positions and the
first free texture coordinate set respectively, and by binding the animation parametric value
to a parameter (which tells you how far to interpolate between the two). However, you need
to communicate this support to Ogre so it does not perform morph animation in software
for you. You do this by adding the following attribute to your vertex program definition:
includes_morph_animation true
When you do this, any morph-animated entity which uses this material will forgo the
usual software morph and will expect the vertex program to do it. Note that if your model
includes both skeletal animation and morph animation, they must both be implemented in
the vertex program if either is to be hardware accelerated. Note that ALL submeshes must
be assigned a material which implements this, and that if you combine skeletal animation
with vertex animation (See Chapter 8 [Animation], page 197) then all techniques must be
hardware accelerated for any to be.
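As a sketch (names illustrative), a morph-capable program definition binding the parametric value described above might look like:

```
vertex_program myMorphAnimVP cg
{
    source morph_anim.cg
    entry_point main_vp
    profiles vs_1_1 arbvp1

    // the program itself lerps between the two position keyframes
    includes_morph_animation true

    default_params
    {
        // 0..1 interpolation factor supplied by Ogre
        param_named_auto morph_t animation_parametric
        param_named_auto worldViewProj worldviewproj_matrix
    }
}
```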

Pose Animation in Vertex Programs


You can implement pose animation (blending between multiple poses based on weight) in
a vertex program by pulling in the original vertex data (bound to position), and as many
pose offset buffers as you’ve defined in your ’includes_pose_animation’ declaration, which
will be in the first free texture unit upwards. You must also use the animation_parametric
parameter to define the starting point of the constants which will contain the pose weights;
they will start at the parameter you define and fill ’n’ constants, where ’n’ is the max
number of poses this shader can blend, i.e. the parameter to includes_pose_animation.
includes_pose_animation 4
Note that ALL submeshes must be assigned a material which implements this, and
that if you combine skeletal animation with vertex animation (See Chapter 8 [Animation],
page 197) then all techniques must be hardware accelerated for any to be.
Vertex texture fetching in vertex programs


If your vertex program makes use of Section 3.1.10 [Vertex Texture Fetch], page 95, you
should declare that with the ’uses_vertex_texture_fetch’ directive. This is enough to tell
Ogre that your program uses this feature and that hardware support for it should be checked.
uses_vertex_texture_fetch true

Adjacency information in Geometry Programs


Some geometry programs require adjacency information from the geometry: the geometry
shader then receives not only the primitive it operates on, but also has access to its
neighbours (in the case of lines or triangles). This directive tells Ogre to send the adjacency
information to the geometry shader.
uses_adjacency_information true

Vertex Programs With Shadows


When using shadows (See Chapter 7 [Shadows], page 180), the use of vertex programs can
add some additional complexities, because Ogre can only automatically deal with everything
when using the fixed-function pipeline. If you use vertex programs, and you are also using
shadows, you may need to make some adjustments.

If you use stencil shadows, then any vertex programs which do vertex deformation can
be a problem, because stencil shadows are calculated on the CPU, which does not have
access to the modified vertices. If the vertex program is doing standard skeletal animation,
this is ok (see section above) because Ogre knows how to replicate the effect in software,
but any other vertex deformation cannot be replicated, and you will either have to accept
that the shadow will not reflect this deformation, or you should turn off shadows for that
object.

If you use texture shadows, then vertex deformation is acceptable; however, when rendering
the object into a shadow texture (the shadow caster pass), the shadow has to be rendered
in a solid colour (linked to the ambient colour for modulative shadows, black for additive
shadows). You must therefore provide an alternative vertex program, so Ogre provides you
with a way of specifying one to use when rendering the caster, See [Shadows and Vertex
Programs], page 93.
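For instance, a pass could name a separate caster program like this (program names are hypothetical; see the referenced section for the full syntax):

```
pass
{
    vertex_program_ref myDeformingVP
    {
        param_named_auto worldViewProj worldviewproj_matrix
    }

    // used instead of the above when rendering this object
    // into a shadow texture
    shadow_caster_vertex_program_ref myDeformingCasterVP
    {
        param_named_auto worldViewProj worldviewproj_matrix
        param_named_auto ambient ambient_light_colour
    }
}
```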

3.1.5 Cg programs
In order to define Cg programs, you have to load Plugin_CgProgramManager.so/.dll
at startup, either through plugins.cfg or through your own plugin loading code. They are
very easy to define:
fragment_program myCgFragmentProgram cg
{
source myCgFragmentProgram.cg
entry_point main
profiles ps_2_0 arbfp1
}
There are a few differences between this and the assembler program - to begin with, we
declare that the fragment program is of type ’cg’ rather than ’asm’, which indicates that
it’s a high-level program using Cg. The ’source’ parameter is the same, except this time
it’s referencing a Cg source file instead of a file of assembler.

Here is where things start to change. Firstly, we need to define an ’entry_point’, which is
the name of a function in the Cg program which will be the first one called as part of the
fragment program. Unlike assembler programs, which just run top-to-bottom, Cg programs
can include multiple functions, and as such you must specify the one which starts the ball
rolling.

Next, instead of a fixed ’syntax’ parameter, you specify one or more ’profiles’; profiles are
how Cg compiles a program down to low-level assembler. The profiles have the same
names as the assembler syntax codes mentioned above; the main difference is that you
can list more than one, thus allowing the program to be compiled down to more low-level
syntaxes, so you can write a single high-level program which runs on both D3D and GL. You
are advised to just enter the simplest profiles under which your programs can be compiled,
in order to give them maximum compatibility. The ordering also matters; if a card supports
more than one of the listed syntaxes, then the one listed first will be used.

Lastly, there is a final option called ’compile_arguments’, where you can specify arguments
exactly as you would to the cgc command-line compiler, should you wish to.
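For example (the define here is purely illustrative), arguments are passed through verbatim to cgc:

```
fragment_program myCgFragmentProgram cg
{
    source myCgFragmentProgram.cg
    entry_point main
    profiles ps_2_0 arbfp1
    // passed straight to the cgc command line
    compile_arguments -DMY_OPTION=1
}
```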

3.1.6 DirectX9 HLSL


DirectX9 HLSL has a very similar language syntax to Cg but is tied to the DirectX API.
The only benefit over Cg is that it only requires the DirectX 9 render system plugin, not
any additional plugins. Declaring a DirectX9 HLSL program is very similar to Cg. Here’s
an example:
vertex_program myHLSLVertexProgram hlsl
{
source myHLSLVertexProgram.txt
entry_point main
target vs_2_0
}
As you can see, the main syntax is almost identical, except that instead of ’profiles’ with
a list of assembler formats, you have a ’target’ parameter which allows a single assembler
target to be specified - obviously this has to be a DirectX assembler format syntax code.

Important Matrix Ordering Note: One thing to bear in mind is that HLSL allows you
to use 2 different ways to multiply a vector by a matrix - mul(v,m) or mul(m,v). The
only difference between them is that the matrix is effectively transposed. You should use
mul(m,v) with the matrices passed in from Ogre - this agrees with the shaders produced
by tools like RenderMonkey, and is consistent with Cg too, but disagrees with the Dx9
SDK and FX Composer, which use mul(v,m) - you will have to switch the parameters to
mul() in those shaders.

Note that if you use the float3x4 / matrix3x4 type in your shader, bound to an OGRE
auto-definition (such as bone matrices), you should use the ’column_major_matrices false’
option (discussed below) in your program definition. This is because OGRE passes float3x4
as row-major to save constant space (3 float4s rather than 4 float4s with only the top 3
values used), and this option tells OGRE to pass all matrices like this, so that you can use
mul(m,v) consistently for all calculations. OGRE will also tell the shader to compile in
row-major form (you don’t have to set the /Zpr compile option or the #pragma
pack_matrix(row_major) directive; OGRE does this for you). Note that passing bones in
float4x3 form is not supported by OGRE, but you don’t need it given the above.

Advanced options

preprocessor_defines <defines>
This allows you to define symbols which can be used inside the HLSL shader
code to alter the behaviour (through #ifdef or #if clauses). Definitions are
separated by ’;’ or ’,’ and may optionally have a ’=’ operator within them to
specify a definition value. Those without an ’=’ will implicitly have a definition
of 1.
column_major_matrices <true|false>
The default for this option is ’true’, so that OGRE passes auto-bound matrices
in a form where mul(m,v) works. Setting this option to false does 2 things - it
transposes auto-bound 4x4 matrices and also sets the /Zpr (row-major) option
on the shader compilation. This means you can still use mul(m,v), but the
matrix layout is row-major instead. This is only useful if you need to use bone
matrices (float3x4) in a shader, since it saves a float4 constant for every bone
involved.
optimisation_level <opt>
Set the optimisation level, which can be one of ’default’, ’none’, ’0’, ’1’, ’2’, or
’3’. This corresponds to the /O parameter of fxc.exe, except that in ’default’
mode, optimisation is disabled in debug mode and set to 1 in release mode
(fxc.exe uses 1 all the time). Unsurprisingly, the default value is ’default’. You
may want to change this if you want to tweak the optimisation, for example
if your shader gets so complex that it will no longer compile without some
minimum level of optimisation.
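Putting these together, a hedged sketch of a definition using all three options (the file and symbol names are invented):

```
vertex_program myHLSLSkinningVP hlsl
{
    source myHLSLSkinning.txt
    entry_point main_vp
    target vs_2_0

    // symbols available to #ifdef / #if in the HLSL source
    preprocessor_defines USE_FOG=0,BONES_PER_VERTEX=2
    // bone matrices are float3x4, so pass matrices row-major
    column_major_matrices false
    optimisation_level 2
}
```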

3.1.7 OpenGL GLSL


OpenGL GLSL has a similar language syntax to HLSL but is tied to the OpenGL API. There
are a few benefits over Cg in that it only requires the OpenGL render system plugin, not
any additional plugins. Declaring an OpenGL GLSL program is similar to Cg but simpler.
Here’s an example:

vertex_program myGLSLVertexProgram glsl
{
source myGLSLVertexProgram.txt
}
In GLSL, no entry point needs to be defined since it is always ’main()’ and there is no
target definition since GLSL source is compiled into native GPU code and not intermediate
assembly.

GLSL supports the use of modular shaders. This means you can write GLSL external
functions that can be used in multiple shaders.
vertex_program myExternalGLSLFunction1 glsl
{
source myExternalGLSLfunction1.txt
}

vertex_program myExternalGLSLFunction2 glsl
{
source myExternalGLSLfunction2.txt
}

vertex_program myGLSLVertexProgram1 glsl
{
source myGLSLfunction.txt
attach myExternalGLSLFunction1 myExternalGLSLFunction2
}

vertex_program myGLSLVertexProgram2 glsl
{
source myGLSLfunction.txt
attach myExternalGLSLFunction1
}
External GLSL functions are attached to the program that needs them by using ’attach’
and including the names of all external programs required on the same line separated by
spaces. This can be done for both vertex and fragment programs.

GLSL Texture Samplers


To pass texture unit index values from the material script to texture samplers in GLSL,
use ’int’ type named parameters. See the example below.

Excerpt from the GLSL example.frag source:


varying vec2 UV;
uniform sampler2D diffuseMap;

void main(void)
{
gl_FragColor = texture2D(diffuseMap, UV);
}
In material script:
fragment_program myFragmentShader glsl
{
source example.frag
}

material exampleGLSLTexturing
{
technique
{
pass
{
fragment_program_ref myFragmentShader
{
param_named diffuseMap int 0
}

texture_unit
{
texture myTexture.jpg 2d
}
}
}
}
An index value of 0 refers to the first texture unit in the pass, an index value of 1 refers
to the second unit in the pass and so on.

Matrix parameters
Here are some examples of passing matrices to GLSL mat2, mat3, mat4 uniforms:
material exampleGLSLmatrixUniforms
{
technique matrix_passing
{
pass examples
{
vertex_program_ref myVertexShader
{
// mat4 uniform
param_named OcclusionMatrix matrix4x4 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0
// or
param_named ViewMatrix float16 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0
// mat3
param_named TextRotMatrix float9 1 0 0 0 1 0 0 0 1
}

fragment_program_ref myFragmentShader
{
// mat2 uniform
param_named skewMatrix float4 0.5 0 -0.5 1.0
}

}
}
}

Accessing OpenGL states in GLSL


GLSL can access most of the GL states directly, so you do not need to pass these states
through [param_named_auto], page 93 in the material script. This includes lights, material
state, and all the matrices used in the OpenGL state, i.e. the modelview matrix, worldview
projection matrix, etc.

Binding vertex attributes


GLSL natively supports automatic binding of the most common incoming per-vertex
attributes (e.g. gl_Vertex, gl_Normal, gl_MultiTexCoord0, etc). However, there are some
which are not automatically bound, and these must be declared in the shader using the
’attribute <type> <name>’ syntax, with the vertex data bound to them by Ogre.

In addition to the built-in attributes described in section 7.3 of the GLSL manual,
Ogre supports a number of automatically bound custom vertex attributes. There are some
drivers that do not behave correctly when mixing built-in vertex attributes like gl_Normal
and custom vertex attributes, so for maximum compatibility you may well wish to use all
custom attributes in shaders where you need at least one (e.g. for skeletal animation).
vertex     Binds VES_POSITION, declare as ’attribute vec4 vertex;’.
normal     Binds VES_NORMAL, declare as ’attribute vec3 normal;’.
colour     Binds VES_DIFFUSE, declare as ’attribute vec4 colour;’.
secondary_colour
           Binds VES_SPECULAR, declare as ’attribute vec4 secondary_colour;’.
uv0 - uv7  Binds VES_TEXTURE_COORDINATES, declare as ’attribute vec4 uv0;’. Note
           that uv6 and uv7 share attributes with tangent and binormal respectively, so
           cannot both be present.
tangent    Binds VES_TANGENT, declare as ’attribute vec3 tangent;’.
binormal   Binds VES_BINORMAL, declare as ’attribute vec3 binormal;’.
blendIndices
           Binds VES_BLEND_INDICES, declare as ’attribute vec4 blendIndices;’.
blendWeights
           Binds VES_BLEND_WEIGHTS, declare as ’attribute vec4 blendWeights;’.
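For example, a skinning shader that avoids mixing built-in and custom attributes might declare all of its inputs using the names above (GLSL 1.x ’attribute’ syntax; this is a fragment of a shader, not a complete one):

```
attribute vec4 vertex;        // VES_POSITION
attribute vec3 normal;        // VES_NORMAL
attribute vec4 uv0;           // first texture coordinate set
attribute vec4 blendIndices;  // bone indices for skinning
attribute vec4 blendWeights;  // bone weights for skinning
```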

Preprocessor definitions
GLSL supports using preprocessor definitions in your code - some are defined by the
implementation, but you can also define your own, say in order to use the same source code
for a few different variants of the same technique. In order to use this feature, include
preprocessor conditions in your GLSL code, of the kind #ifdef SYMBOL, #if SYMBOL==2, etc.
Then in your program definition, use the ’preprocessor_defines’ option, following it with a
string of definitions. Definitions are separated by ’;’ or ’,’ and may optionally have a ’=’
operator within them to specify a definition value. Those without an ’=’ will implicitly
have a definition of 1. For example:

// in your GLSL

#ifdef CLEVERTECHNIQUE
// some clever stuff here
#else
// normal technique
#endif

#if NUM_THINGS==2
// Some specific code
#else
// something else
#endif

// in your program definition


preprocessor_defines CLEVERTECHNIQUE,NUM_THINGS=2
This way you can use the same source code but still include small variations, each one
defined as a different Ogre program name but based on the same source code.

GLSL Geometry shader specification


GLSL allows the same shader to run on different types of geometry primitives. In order to
properly link the shaders together, you have to specify which primitives it will receive as
input, which primitives it will emit and how many vertices a single run of the shader can
generate. The GLSL geometry program definition requires three additional parameters :
input_operation_type
           The operation type of the geometry that the shader will receive. Can
           be ’point_list’, ’line_list’, ’line_strip’, ’triangle_list’, ’triangle_strip’ or
           ’triangle_fan’.
output_operation_type
           The operation type of the geometry that the shader will emit. Can be
           ’point_list’, ’line_strip’ or ’triangle_strip’.
max_output_vertices
           The maximum number of vertices that the shader can emit. There is an upper
           limit for this value; it is exposed in the render system capabilities.
For example:
geometry_program Ogre/GPTest/Swizzle_GP_GLSL glsl
{
source SwizzleGP.glsl
input_operation_type triangle_list
output_operation_type line_strip
max_output_vertices 6
}

3.1.8 Unified High-level Programs


As mentioned above, it can often be useful to write both HLSL and GLSL programs to
specifically target each platform, but if you do this via multiple material techniques this
can cause a bloated material definition when the only difference is the program language.
Well, there is another option. You can ’wrap’ multiple programs in a ’unified’ program
definition, which will automatically choose one of a series of ’delegate’ programs depending
on the rendersystem and hardware support.
vertex_program myVertexProgram unified
{
delegate realProgram1
delegate realProgram2
... etc
}
This works for both vertex and fragment programs, and you can list as many delegates
as you like - the first one to be supported by the current rendersystem & hardware will
be used as the real program. This is almost like a mini-technique system, but for a single
program and with a much tighter purpose. You can only use this where the programs take
all the same inputs, particularly textures and other pass / sampler state. Where the only
difference between the programs is the language (or possibly the target in HLSL - you can
include multiple HLSL programs with different targets in a single unified program too if you
want, or indeed any number of other high-level programs), this can become a very powerful
feature. For example, without this feature here’s how you’d have to define a programmable
material which supported HLSL and GLSL:
vertex_program myVertexProgramHLSL hlsl
{
source prog.hlsl
entry_point main_vp
target vs_2_0
}
fragment_program myFragmentProgramHLSL hlsl
{
source prog.hlsl
entry_point main_fp
target ps_2_0
}
vertex_program myVertexProgramGLSL glsl
{
source prog.vert
}
fragment_program myFragmentProgramGLSL glsl
{
source prog.frag
default_params
{
param_named tex int 0
}
}
material SupportHLSLandGLSLwithoutUnified
{
// HLSL technique
technique
{
pass
{
vertex_program_ref myVertexProgramHLSL
{
param_named_auto worldViewProj world_view_proj_matrix
param_named_auto lightColour light_diffuse_colour 0
param_named_auto lightSpecular light_specular_colour 0
param_named_auto lightAtten light_attenuation 0
}
fragment_program_ref myFragmentProgramHLSL
{
}
}
}
// GLSL technique
technique
{
pass
{
vertex_program_ref myVertexProgramGLSL
{
param_named_auto worldViewProj world_view_proj_matrix
param_named_auto lightColour light_diffuse_colour 0
param_named_auto lightSpecular light_specular_colour 0
param_named_auto lightAtten light_attenuation 0
}
fragment_program_ref myFragmentProgramGLSL
{
}
}
}
}
And that’s a really small example. Everything you added to the HLSL technique, you’d
have to duplicate in the GLSL technique too. So instead, here’s how you’d do it with unified
program definitions:
vertex_program myVertexProgramHLSL hlsl
{
source prog.hlsl
entry_point main_vp
target vs_2_0
}
fragment_program myFragmentProgramHLSL hlsl
{
source prog.hlsl
entry_point main_fp
target ps_2_0
}
vertex_program myVertexProgramGLSL glsl
{
source prog.vert
}
fragment_program myFragmentProgramGLSL glsl
{
source prog.frag
default_params
{
param_named tex int 0
}
}
// Unified definition
vertex_program myVertexProgram unified
{
delegate myVertexProgramGLSL
delegate myVertexProgramHLSL
}
fragment_program myFragmentProgram unified
{
delegate myFragmentProgramGLSL
delegate myFragmentProgramHLSL
}
material SupportHLSLandGLSLwithUnified
{
// single technique, works with both HLSL and GLSL delegates
technique
{
pass
{
vertex_program_ref myVertexProgram
{
param_named_auto worldViewProj world_view_proj_matrix
param_named_auto lightColour light_diffuse_colour 0
param_named_auto lightSpecular light_specular_colour 0
param_named_auto lightAtten light_attenuation 0
}
fragment_program_ref myFragmentProgram
{
}
}
}
}
At runtime, when myVertexProgram or myFragmentProgram is used, OGRE automatically
picks a real program to delegate to based on what’s supported by the current hardware /
rendersystem. If none of the delegates is supported, the entire technique referencing the
unified program is marked as unsupported and the next technique in the material is checked
for fallback, just like normal. As your materials get larger, and you find you need to support
HLSL and GLSL specifically (or need to write multiple interface-compatible versions of a
program for whatever other reason), unified programs can really help reduce duplication.

3.1.9 Using Vertex/Geometry/Fragment Programs in a Pass


Within a pass section of a material script, you can reference a vertex, geometry and/or a
fragment program which has been defined in a .program script (See Section 3.1.4 [Declaring
Vertex/Geometry/Fragment Programs], page 63). The programs are defined separately
from their usage in the pass, since the programs are very likely to be reused between
many separate materials, probably across many different .material scripts, so this approach
lets you define the program only once and use it many times.

As well as naming the program in question, you can also provide parameters to it. Here’s
a simple example:
vertex_program_ref myVertexProgram
{
param_indexed_auto 0 worldviewproj_matrix
param_indexed 4 float4 10.0 0 0 0
}
In this example, we bind a vertex program called ’myVertexProgram’ (which will be
defined elsewhere) to the pass, and give it 2 parameters. One is an ’auto’ parameter, meaning
we do not have to supply a value as such, just a recognised code (in this case it’s the
world/view/projection matrix, which is kept up to date automatically by Ogre). The second
parameter is a manually specified parameter, a 4-element float. The indexes are described
later.

The syntax for referencing a geometry or fragment program is identical to that for a
vertex program; the only difference is that 'geometry_program_ref' or 'fragment_program_ref'
is used instead of 'vertex_program_ref'.

For many situations vertex, geometry and fragment programs are associated with each
other in a pass, but this is not cast in stone. You could have a vertex program that can
be used by several different fragment programs. You can also mix the fixed pipeline
and the programmable pipeline (shaders) together. You could use the
non-programmable fixed-function vertex pipeline and then provide a fragment_program_ref
in a pass, i.e. there would be no vertex_program_ref section in the pass. The fragment
program referenced in the pass must meet the requirements defined by the related API
in order to read the outputs of the fixed-function vertex pipeline. You could also have just a
vertex program that outputs to the fixed-function fragment pipeline.
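
For instance, a pass along these lines (the program name is purely illustrative) uses the
fixed-function vertex pipeline while still referencing a fragment program; note there is no
vertex_program_ref:
pass
{
    fragment_program_ref myFragmentProgram
    {
        param_named_auto ambient ambient_light_colour
    }
}
The referenced fragment program must declare its inputs to match the fixed-function
vertex outputs, per the API requirements discussed in the following paragraph.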

The requirements to read from or write to the fixed-function pipeline are similar between
rendering APIs (DirectX and OpenGL), but how it's actually done in each type of shader
(vertex, geometry or fragment) depends on the shader language. For HLSL (DirectX
API) and associated asm, consult MSDN at http://msdn.microsoft.com/library/.
For GLSL (OpenGL), consult section 7.6 of the GLSL spec 1.1 available at
http://developer.3dlabs.com/documents/index.htm. The built-in varying variables
provided in GLSL allow your program to read/write the fixed-function pipeline varyings.
For Cg, consult the Language Profiles section in CgUsersManual.pdf that comes with the
Cg Toolkit available at http://developer.nvidia.com/object/cg_toolkit.html. For
HLSL and Cg it's the varying bindings that allow your shader programs to read/write
the fixed-function pipeline varyings.

Parameter specification
Parameters can be specified using one of the commands shown below. The same syntax is
used whether you are defining a parameter just for this particular use of the program, or
when specifying the [Default Program Parameters], page 66. Parameters set in the specific
use of the program override the defaults.
• [param_indexed], page 80
• [param_indexed_auto], page 81
• [param_named], page 92
• [param_named_auto], page 93
• [shared_params_ref], page 93

param_indexed
This command sets the value of an indexed parameter.

format: param_indexed <index> <type> <value>

example: param_indexed 0 float4 10.0 0 0 0

The 'index' is simply a number representing the position in the parameter list at which
the value should be written, and you should derive this from your program definition. The
index is relative to the way constants are stored on the card, which is in 4-element blocks.
For example if you defined a float4 parameter at index 0, the next index would be 1. If you
defined a matrix4x4 at index 0, the next usable index would be 4, since a 4x4 matrix takes
up 4 indexes.

The value of ’type’ can be float4, matrix4x4, float<n>, int4, int<n>. Note that ’int’
parameters are only available on some more advanced program syntaxes, check the D3D or
GL vertex / fragment program documentation for full details. Typically the most useful
ones will be float4 and matrix4x4. Note that if you use a type which is not a multiple of
4, then the remaining values up to the multiple of 4 will be filled with zeroes for you (since
GPUs always use banks of 4 floats per constant even if only one is used).

’value’ is simply a space or tab-delimited list of values which can be converted into the
type you have specified.
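
As a sketch of how the 4-element blocks work (the program name is hypothetical), a
matrix4x4 placed at index 0 occupies indexes 0-3, so the next free index is 4:
vertex_program_ref myVertexProgram
{
    param_indexed 0 matrix4x4 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1
    param_indexed 4 float4 10.0 0 0 0
}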

param_indexed_auto

This command tells Ogre to automatically update a given parameter with a derived value.
This frees you from writing code to update program parameters every frame when they are
always changing.

format: param_indexed_auto <index> <value_code> <extra_params>

example: param_indexed_auto 0 worldviewproj_matrix

'index' has the same meaning as [param_indexed], page 80; note this time you do not
have to specify the size of the parameter because the engine knows this already. In the
example, the world/view/projection matrix is being used so this is implicitly a matrix4x4.

'value_code' is one of a list of recognised values:

world_matrix
    The current world matrix.
inverse_world_matrix
    The inverse of the current world matrix.
transpose_world_matrix
    The transpose of the world matrix.
inverse_transpose_world_matrix
    The inverse transpose of the world matrix.
world_matrix_array_3x4
    An array of world matrices, each represented as only a 3x4 matrix (3 rows of
    4 columns), usually for doing hardware skinning. You should make enough
    entries available in your vertex program for the number of bones in use, i.e.
    an array of numBones*3 float4's.
view_matrix
    The current view matrix.
inverse_view_matrix
    The inverse of the current view matrix.
transpose_view_matrix
    The transpose of the view matrix.
inverse_transpose_view_matrix
    The inverse transpose of the view matrix.
projection_matrix
    The current projection matrix.
inverse_projection_matrix
    The inverse of the projection matrix.
transpose_projection_matrix
    The transpose of the projection matrix.
inverse_transpose_projection_matrix
    The inverse transpose of the projection matrix.
worldview_matrix
    The current world and view matrices concatenated.
inverse_worldview_matrix
    The inverse of the current concatenated world and view matrices.
transpose_worldview_matrix
    The transpose of the world and view matrices.
inverse_transpose_worldview_matrix
    The inverse transpose of the current concatenated world and view matrices.

viewproj_matrix
    The current view and projection matrices concatenated.
inverse_viewproj_matrix
    The inverse of the view & projection matrices.
transpose_viewproj_matrix
    The transpose of the view & projection matrices.
inverse_transpose_viewproj_matrix
    The inverse transpose of the view & projection matrices.
worldviewproj_matrix
    The current world, view and projection matrices concatenated.
inverse_worldviewproj_matrix
    The inverse of the world, view and projection matrices.
transpose_worldviewproj_matrix
    The transpose of the world, view and projection matrices.
inverse_transpose_worldviewproj_matrix
    The inverse transpose of the world, view and projection matrices.
texture_matrix
    The transform matrix of a given texture unit, as it would usually be seen in the
    fixed-function pipeline. This requires an index in the 'extra_params' field, and
    relates to the 'nth' texture unit of the pass in question. NB if the given index
    exceeds the number of texture units available for this pass, then the parameter
    will be set to Matrix4::IDENTITY.
render_target_flipping
    The value used to adjust the transformed y position when the projection matrix
    transform is bypassed. It's -1 if the render target requires texture flipping, +1
    otherwise.
vertex_winding
    Indicates what vertex winding mode the render state is in at this point; +1 for
    standard, -1 for inverted (e.g. when processing reflections).
light_diffuse_colour
    The diffuse colour of a given light; this requires an index in the 'extra_params'
    field, and relates to the 'nth' closest light which could affect this object (i.e. 0
    refers to the closest light - note that directional lights are always first in the list
    and always present). NB if there are no lights this close, then the parameter
    will be set to black.
light_specular_colour
    The specular colour of a given light; this requires an index in the 'extra_params'
    field, and relates to the 'nth' closest light which could affect this object (i.e.
    0 refers to the closest light). NB if there are no lights this close, then the
    parameter will be set to black.
light_attenuation
    A float4 containing the 4 light attenuation variables for a given light. This
    requires an index in the 'extra_params' field, and relates to the 'nth' closest
    light which could affect this object (i.e. 0 refers to the closest light). NB
    if there are no lights this close, then the parameter will be set to all zeroes.
    The order of the parameters is range, constant attenuation, linear attenuation,
    quadric attenuation.
spotlight_params
    A float4 containing the 3 spotlight parameters and a control value. The order of
    the parameters is cos(inner_angle / 2), cos(outer_angle / 2), falloff, and the final
    w value is 1.0f. For non-spotlights the value is float4(1,0,0,1). This requires
    an index in the 'extra_params' field, and relates to the 'nth' closest light which
    could affect this object (i.e. 0 refers to the closest light). If there are fewer lights
    than this, the details are like a non-spotlight.
light_position
    The position of a given light in world space. This requires an index in the
    'extra_params' field, and relates to the 'nth' closest light which could affect this
    object (i.e. 0 refers to the closest light). NB if there are no lights this close,
    then the parameter will be set to all zeroes. Note that this property will work
    with all kinds of lights, even directional lights, since the parameter is set as
    a 4D vector. Point lights will be (pos.x, pos.y, pos.z, 1.0f) whilst directional
    lights will be (-dir.x, -dir.y, -dir.z, 0.0f). Operations like dot products will work
    consistently on both.
light_direction
    The direction of a given light in world space. This requires an index in the
    'extra_params' field, and relates to the 'nth' closest light which could affect this
    object (i.e. 0 refers to the closest light). NB if there are no lights this close,
    then the parameter will be set to all zeroes. DEPRECATED - this property
    only works on directional lights, and we recommend that you use light_position
    instead since that returns a generic 4D vector.
light_position_object_space
    The position of a given light in object space (i.e. when the object is at (0,0,0)).
    This requires an index in the 'extra_params' field, and relates to the 'nth' closest
    light which could affect this object (i.e. 0 refers to the closest light). NB if there
    are no lights this close, then the parameter will be set to all zeroes. Note that
    this property will work with all kinds of lights, even directional lights, since the
    parameter is set as a 4D vector. Point lights will be (pos.x, pos.y, pos.z, 1.0f)
    whilst directional lights will be (-dir.x, -dir.y, -dir.z, 0.0f). Operations like dot
    products will work consistently on both.
light_direction_object_space
    The direction of a given light in object space (i.e. when the object is at (0,0,0)).
    This requires an index in the 'extra_params' field, and relates to the 'nth' closest
    light which could affect this object (i.e. 0 refers to the closest light). NB
    if there are no lights this close, then the parameter will be set to all zeroes.
    DEPRECATED, except for spotlights - for directional lights we recommend
    that you use light_position_object_space instead since that returns a generic
    4D vector.

light_distance_object_space
    The distance of a given light from the centre of the object - this is a useful
    approximation to per-vertex distance calculations for relatively small objects.
    This requires an index in the 'extra_params' field, and relates to the 'nth' closest
    light which could affect this object (i.e. 0 refers to the closest light). NB if there
    are no lights this close, then the parameter will be set to all zeroes.
light_position_view_space
    The position of a given light in view space (i.e. when the camera is at (0,0,0)).
    This requires an index in the 'extra_params' field, and relates to the 'nth' closest
    light which could affect this object (i.e. 0 refers to the closest light). NB if there
    are no lights this close, then the parameter will be set to all zeroes. Note that
    this property will work with all kinds of lights, even directional lights, since the
    parameter is set as a 4D vector. Point lights will be (pos.x, pos.y, pos.z, 1.0f)
    whilst directional lights will be (-dir.x, -dir.y, -dir.z, 0.0f). Operations like dot
    products will work consistently on both.
light_direction_view_space
    The direction of a given light in view space (i.e. when the camera is at (0,0,0)).
    This requires an index in the 'extra_params' field, and relates to the 'nth'
    closest light which could affect this object (i.e. 0 refers to the closest light).
    NB if there are no lights this close, then the parameter will be set to all zeroes.
    DEPRECATED, except for spotlights - for directional lights we recommend
    that you use light_position_view_space instead since that returns a generic 4D
    vector.
light_power
    The 'power' scaling for a given light, useful in HDR rendering. This requires
    an index in the 'extra_params' field, and relates to the 'nth' closest light which
    could affect this object (i.e. 0 refers to the closest light).
light_diffuse_colour_power_scaled
    As light_diffuse_colour, except the RGB channels of the passed colour have been
    pre-scaled by the light's power scaling as given by light_power.
light_specular_colour_power_scaled
    As light_specular_colour, except the RGB channels of the passed colour have
    been pre-scaled by the light's power scaling as given by light_power.
light_number
    When rendering, there is generally a list of lights available for use by all of the
    passes for a given object, and those lights may or may not be referenced in
    one or more passes. Sometimes it can be useful to know where in that overall
    list a given light (as seen from a pass) is. For example if you use 'iteration
    once_per_light', the pass always sees the light as index 0, but in each iteration
    the actual light referenced is different. This binding lets you pass through the
    actual index of the light in that overall list. You just need to give it a parameter
    of the pass-relative light number and it will map it to the overall list index.

light_diffuse_colour_array
    As light_diffuse_colour, except that this populates an array of parameters with
    a number of lights, and the 'extra_params' field refers to the number of 'nth
    closest' lights to be processed. This parameter is not compatible with light-based
    pass iteration options but can be used for single-pass lighting.
light_specular_colour_array
    As light_specular_colour, except that this populates an array of parameters
    with a number of lights, and the 'extra_params' field refers to the number of
    'nth closest' lights to be processed. This parameter is not compatible with
    light-based pass iteration options but can be used for single-pass lighting.
light_diffuse_colour_power_scaled_array
    As light_diffuse_colour_power_scaled, except that this populates an array of
    parameters with a number of lights, and the 'extra_params' field refers to the
    number of 'nth closest' lights to be processed. This parameter is not compatible
    with light-based pass iteration options but can be used for single-pass lighting.
light_specular_colour_power_scaled_array
    As light_specular_colour_power_scaled, except that this populates an array of
    parameters with a number of lights, and the 'extra_params' field refers to the
    number of 'nth closest' lights to be processed. This parameter is not compatible
    with light-based pass iteration options but can be used for single-pass lighting.
light_attenuation_array
    As light_attenuation, except that this populates an array of parameters with
    a number of lights, and the 'extra_params' field refers to the number of 'nth
    closest' lights to be processed. This parameter is not compatible with light-based
    pass iteration options but can be used for single-pass lighting.
spotlight_params_array
    As spotlight_params, except that this populates an array of parameters with
    a number of lights, and the 'extra_params' field refers to the number of 'nth
    closest' lights to be processed. This parameter is not compatible with light-based
    pass iteration options but can be used for single-pass lighting.
light_position_array
    As light_position, except that this populates an array of parameters with a
    number of lights, and the 'extra_params' field refers to the number of 'nth
    closest' lights to be processed. This parameter is not compatible with light-based
    pass iteration options but can be used for single-pass lighting.
light_direction_array
    As light_direction, except that this populates an array of parameters with a
    number of lights, and the 'extra_params' field refers to the number of 'nth
    closest' lights to be processed. This parameter is not compatible with light-based
    pass iteration options but can be used for single-pass lighting.
light_position_object_space_array
    As light_position_object_space, except that this populates an array of parameters
    with a number of lights, and the 'extra_params' field refers to the number
    of 'nth closest' lights to be processed. This parameter is not compatible with
    light-based pass iteration options but can be used for single-pass lighting.
light_direction_object_space_array
    As light_direction_object_space, except that this populates an array of parameters
    with a number of lights, and the 'extra_params' field refers to the number
    of 'nth closest' lights to be processed. This parameter is not compatible with
    light-based pass iteration options but can be used for single-pass lighting.
light_distance_object_space_array
    As light_distance_object_space, except that this populates an array of parameters
    with a number of lights, and the 'extra_params' field refers to the number
    of 'nth closest' lights to be processed. This parameter is not compatible with
    light-based pass iteration options but can be used for single-pass lighting.
light_position_view_space_array
    As light_position_view_space, except that this populates an array of parameters
    with a number of lights, and the 'extra_params' field refers to the number of
    'nth closest' lights to be processed. This parameter is not compatible with
    light-based pass iteration options but can be used for single-pass lighting.
light_direction_view_space_array
    As light_direction_view_space, except that this populates an array of parameters
    with a number of lights, and the 'extra_params' field refers to the number
    of 'nth closest' lights to be processed. This parameter is not compatible with
    light-based pass iteration options but can be used for single-pass lighting.
light_power_array
    As light_power, except that this populates an array of parameters with a number
    of lights, and the 'extra_params' field refers to the number of 'nth closest'
    lights to be processed. This parameter is not compatible with light-based pass
    iteration options but can be used for single-pass lighting.
light_count
    The total number of lights active in this pass.
light_casts_shadows
    Sets an integer parameter to 1 if the given light casts shadows, 0 otherwise.
    Requires a light index parameter.
ambient_light_colour
    The colour of the ambient light currently set in the scene.
surface_ambient_colour
    The ambient colour reflectance properties of the pass (See [ambient], page 25).
    This gives you handy access to this fixed-function pipeline property.
surface_diffuse_colour
    The diffuse colour reflectance properties of the pass (See [diffuse], page 26).
    This gives you handy access to this fixed-function pipeline property.
surface_specular_colour
    The specular colour reflectance properties of the pass (See [specular], page 26).
    This gives you handy access to this fixed-function pipeline property.

surface_emissive_colour
    The amount of self-illumination of the pass (See [emissive], page 27). This
    gives you handy access to this fixed-function pipeline property.
surface_shininess
    The shininess of the pass, affecting the size of specular highlights (See [specular],
    page 26). This gives you a handy binding to this fixed-function pipeline property.
derived_ambient_light_colour
    The derived ambient light colour, with 'r', 'g' and 'b' components filled with the
    product of surface_ambient_colour and ambient_light_colour, respectively, and
    the 'a' component filled with the surface ambient alpha component.
derived_scene_colour
    The derived scene colour, with 'r', 'g' and 'b' components filled with the sum of
    derived_ambient_light_colour and surface_emissive_colour, respectively, and the
    'a' component filled with the surface diffuse alpha component.
derived_light_diffuse_colour
    The derived light diffuse colour, with 'r', 'g' and 'b' components filled with the
    product of surface_diffuse_colour, light_diffuse_colour and light_power, respectively,
    and the 'a' component filled with the surface diffuse alpha component.
    This requires an index in the 'extra_params' field, and relates to the 'nth'
    closest light which could affect this object (i.e. 0 refers to the closest light).
derived_light_specular_colour
    The derived light specular colour, with 'r', 'g' and 'b' components filled with the
    product of surface_specular_colour and light_specular_colour, respectively, and
    the 'a' component filled with the surface specular alpha component. This requires
    an index in the 'extra_params' field, and relates to the 'nth' closest light
    which could affect this object (i.e. 0 refers to the closest light).
derived_light_diffuse_colour_array
    As derived_light_diffuse_colour, except that this populates an array of parameters
    with a number of lights, and the 'extra_params' field refers to the number
    of 'nth closest' lights to be processed. This parameter is not compatible with
    light-based pass iteration options but can be used for single-pass lighting.
derived_light_specular_colour_array
    As derived_light_specular_colour, except that this populates an array of parameters
    with a number of lights, and the 'extra_params' field refers to the number
    of 'nth closest' lights to be processed. This parameter is not compatible with
    light-based pass iteration options but can be used for single-pass lighting.
fog_colour
    The colour of the fog currently set in the scene.
fog_params
    The parameters of the fog currently set in the scene. Packed as (exp density,
    linear start, linear end, 1.0 / (linear end - linear start)).
camera_position
    The current camera's position in world space.

camera_position_object_space
    The current camera's position in object space (i.e. when the object is at (0,0,0)).
lod_camera_position
    The current LOD camera position in world space. A LOD camera is a separate
    camera associated with the rendering camera which allows LOD calculations to
    be calculated separately. The classic example is basing the LOD of the shadow
    texture render on the position of the main camera, not the shadow camera.
lod_camera_position_object_space
    The current LOD camera position in object space (i.e. when the object is at
    (0,0,0)).
time
    The current time, factored by the optional parameter (or 1.0f if not supplied).
time_0_x
    Single float time value, which repeats itself based on the "cycle time" given as
    an 'extra_params' field.
costime_0_x
    Cosine of time_0_x.
sintime_0_x
    Sine of time_0_x.
tantime_0_x
    Tangent of time_0_x.
time_0_x_packed
    4-element vector of time_0_x, sintime_0_x, costime_0_x, tantime_0_x.
time_0_1
    As time_0_x but scaled to [0..1].
costime_0_1
    As costime_0_x but scaled to [0..1].
sintime_0_1
    As sintime_0_x but scaled to [0..1].
tantime_0_1
    As tantime_0_x but scaled to [0..1].
time_0_1_packed
    As time_0_x_packed but all values scaled to [0..1].
time_0_2pi
    As time_0_x but scaled to [0..2*Pi].
costime_0_2pi
    As costime_0_x but scaled to [0..2*Pi].
sintime_0_2pi
    As sintime_0_x but scaled to [0..2*Pi].
tantime_0_2pi
    As tantime_0_x but scaled to [0..2*Pi].
time_0_2pi_packed
    As time_0_x_packed but scaled to [0..2*Pi].

frame_time
    The current frame time, factored by the optional parameter (or 1.0f if not
    supplied).
fps
    The current frames per second.
viewport_width
    The current viewport width in pixels.
viewport_height
    The current viewport height in pixels.
inverse_viewport_width
    1.0 / the current viewport width in pixels.
inverse_viewport_height
    1.0 / the current viewport height in pixels.
viewport_size
    4-element vector of viewport_width, viewport_height, inverse_viewport_width,
    inverse_viewport_height.
texel_offsets
    Provides details of the rendersystem-specific texture coordinate offsets required
    to map texels onto pixels: float4(horizontalOffset, verticalOffset, horizontalOffset
    / viewport width, verticalOffset / viewport height).
view_direction
    View direction vector in object space.
view_side_vector
    View local X axis.
view_up_vector
    View local Y axis.
fov
    Vertical field of view, in radians.
near_clip_distance
    Near clip distance, in world units.
far_clip_distance
    Far clip distance, in world units (may be 0 for an infinite view projection).
texture_viewproj_matrix
    Applicable to vertex programs which have been specified as the 'shadow receiver'
    vertex program alternative, or where a texture unit is marked as 'content_type
    shadow'; this provides details of the view/projection matrix for the current
    shadow projector. The optional 'extra_params' entry specifies which light the
    projector refers to (for the case of 'content_type shadow' where more than one
    shadow texture may be present in a single pass), where 0 is the default and
    refers to the first light referenced in this pass.
texture_viewproj_matrix_array
    As texture_viewproj_matrix, except an array of matrices is passed, up to the
    number that you specify as the 'extra_params' value.

texture_worldviewproj_matrix
    As texture_viewproj_matrix except it also includes the world matrix.
texture_worldviewproj_matrix_array
    As texture_worldviewproj_matrix, except an array of matrices is passed, up to
    the number that you specify as the 'extra_params' value.
spotlight_viewproj_matrix
    Provides a view / projection matrix which matches the set up of a given spotlight
    (requires an 'extra_params' entry to indicate the light index, which must
    be a spotlight). Can be used to project a texture from a given spotlight.
spotlight_worldviewproj_matrix
    As spotlight_viewproj_matrix except it also includes the world matrix.
scene_depth_range
    Provides information about the depth range as viewed from the current camera
    being used to render. Provided as float4(minDepth, maxDepth, depthRange,
    1 / depthRange).
shadow_scene_depth_range
    Provides information about the depth range as viewed from the shadow camera
    relating to a selected light. Requires a light index parameter. Provided as
    float4(minDepth, maxDepth, depthRange, 1 / depthRange).
shadow_colour
    The shadow colour (for modulative shadows) as set via
    SceneManager::setShadowColour.
shadow_extrusion_distance
    The shadow extrusion distance as determined by the range of a non-directional
    light, or as set via SceneManager::setShadowDirectionalLightExtrusionDistance
    for directional lights.
texture_size
    Provides the texture size of the selected texture unit. Requires a texture unit
    index parameter. Provided as float4(width, height, depth, 1). For a 2D texture,
    depth is set to 1; for a 1D texture, height and depth are set to 1.
inverse_texture_size
    Provides the inverse texture size of the selected texture unit. Requires a texture
    unit index parameter. Provided as float4(1 / width, 1 / height, 1 / depth, 1).
    For a 2D texture, depth is set to 1; for a 1D texture, height and depth are set
    to 1.
packed_texture_size
    Provides the packed texture size of the selected texture unit. Requires a texture
    unit index parameter. Provided as float4(width, height, 1 / width, 1 / height).
    For a 3D texture, depth is ignored; for a 1D texture, height is set to 1.
pass_number
    Sets the active pass index number in a GPU parameter. The first pass in a
    technique has an index of 0, the second an index of 1, and so on. This is useful
    for multipass shaders (e.g. fur or blur shaders) that need to know which pass
    they are in. By setting up the auto parameter in a [Default Program Parameters],
    page 66 list in a program definition, there is no requirement to set the pass
    number parameter in each pass and lose track. (See [fur example], page 42)
pass_iteration_number
    Useful for GPU programs that need to know what the current pass iteration
    number is. The first iteration of a pass is numbered 0. The last iteration
    number is one less than what is set for the pass iteration number. If a pass has
    its iteration attribute set to 5, then the last iteration number (5th execution of
    the pass) is 4. (See [iteration], page 41)
animation_parametric
    Useful for hardware vertex animation. For morph animation, sets the parametric
    value (0..1) representing the distance between the first position keyframe
    (bound to positions) and the second position keyframe (bound to the first free
    texture coordinate) so that the vertex program can interpolate between them.
    For pose animation, indicates a group of up to 4 parametric weight values applying
    to a sequence of up to 4 poses (each one bound to x, y, z and w of
    the constant), one for each pose. The original positions are held in the usual
    position buffer, and the offsets to take those positions to the pose where weight
    == 1.0 are in the first 'n' free texture coordinates; 'n' being determined by the
    value passed to includes_pose_animation. If more than 4 simultaneous poses are
    required, then you'll need more than 1 shader constant to hold the parametric
    values, in which case you should use this binding more than once, referencing a
    different constant entry; the second one will contain the parametrics for poses
    5-8, the third for poses 9-12, and so on.
custom
    This allows you to map a custom parameter on an individual Renderable (see
    Renderable::setCustomParameter) to a parameter on a GPU program. It requires
    that you complete the 'extra_params' field with the index that was used
    in the Renderable::setCustomParameter call, and this will ensure that whenever
    this Renderable is used, it will have its custom parameter mapped in. It's
    very important that this parameter has been defined on all Renderables that
    are assigned the material that contains this automatic mapping, otherwise the
    process will fail.
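
Tying a few of these together, a hypothetical vertex program might bind the combined
transform plus some per-light values, using 'extra_params' to select the first light and the
first texture unit:
vertex_program_ref myVertexProgram
{
    param_indexed_auto 0 worldviewproj_matrix
    param_indexed_auto 4 light_position_object_space 0
    param_indexed_auto 5 light_diffuse_colour 0
    param_indexed_auto 6 texture_matrix 0
}
Remember that the matrix4x4 bound at index 6 occupies indexes 6-9, so any further
parameter would start at index 10.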

param_named
This is the same as param_indexed, but uses a named parameter instead of an index. This
can only be used with high-level programs which include parameter names; if you're using
an assembler program then you have no choice but to use indexes. Note that you can use
indexed parameters for high-level programs too, but it is less portable, since if you reorder
your parameters in the high-level program the indexes will change.

format: param_named <name> <type> <value>

example: param_named shininess float4 10.0 0 0 0

The type is required because the program is not compiled and loaded when the material
script is parsed, so at this stage we have no idea what types the parameters are. Programs
are only loaded and compiled when they are used, to save memory.

param_named_auto

This is the named equivalent of param_indexed_auto, for use with high-level programs.

Format: param_named_auto <name> <value_code> <extra_params>

Example: param_named_auto worldViewProj worldviewproj_matrix

The allowed value codes and the meaning of extra_params are detailed in
[param_indexed_auto], page 81.
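
For example, a hypothetical fragment program could bind named parameters to auto
values, with the trailing 0 acting as the 'extra_params' light index:
fragment_program_ref myFragmentProgram
{
    param_named_auto ambient ambient_light_colour
    param_named_auto lightDiffuse light_diffuse_colour 0
    param_named_auto lightAtten light_attenuation 0
}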

shared_params_ref

This option allows you to reference shared parameter sets as defined in [Declaring Shared
Parameters], page 66.

Format: shared_params_ref <shared_set_name>

Example: shared_params_ref mySharedParams

The only required parameter is a name, which must be the name of an already defined
shared parameter set. All named parameters which are present in the program and are
also present in the shared parameter set will be linked, and the shared parameters used as
if you had defined them locally. This is dependent on the definitions (type and array size)
matching between the shared set and the program.
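
As a sketch (the names are illustrative, and the declaration syntax is the one described in
[Declaring Shared Parameters], page 66), you might declare a shared set and then reference
it from a program reference in a pass:
shared_params mySharedParams
{
    shared_param_named globalAmbient float4 0.1 0.1 0.1 1.0
}

vertex_program_ref myVertexProgram
{
    shared_params_ref mySharedParams
}
Any parameter named 'globalAmbient' in myVertexProgram would then track the shared
value.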

Shadows and Vertex Programs


When using shadows (See Chapter 7 [Shadows], page 180), the use of vertex programs can
add some additional complexities, because Ogre can only automatically deal with everything
when using the fixed-function pipeline. If you use vertex programs, and you are also using
shadows, you may need to make some adjustments.

If you use stencil shadows, then any vertex programs which do vertex deformation can
be a problem, because stencil shadows are calculated on the CPU, which does not have
access to the modified vertices. If the vertex program is doing standard skeletal animation,
this is ok (see section above) because Ogre knows how to replicate the effect in software,
but any other vertex deformation cannot be replicated, and you will either have to accept
that the shadow will not reflect this deformation, or you should turn off shadows for that
object.

If you use texture shadows, then vertex deformation is acceptable; however, when ren-
dering the object into the shadow texture (the shadow caster pass), the shadow has to be
rendered in a solid colour (linked to the ambient colour). You must therefore provide an
alternative vertex program, so Ogre provides you with a way of specifying one to use when
rendering the caster. Basically you link an alternative vertex program, using exactly the
same syntax as the original vertex program link:
shadow_caster_vertex_program_ref myShadowCasterVertexProgram
{
param_indexed_auto 0 worldviewproj_matrix
param_indexed_auto 4 ambient_light_colour
}
When rendering a shadow caster, Ogre will automatically use the alternate program.
You can bind the same or different parameters to the program - the most important thing
is that you bind ambient_light_colour, since this determines the colour of the shadow in
modulative texture shadows. If you don’t supply an alternate program, Ogre will fall back
on a fixed-function material which will not reflect any vertex deformation you do in your
vertex program.

In addition, when rendering the shadow receivers with shadow textures, Ogre needs to
project the shadow texture. It does this automatically in fixed function mode, but if the
receivers use vertex programs, they need to have a shadow receiver program which does the
usual vertex deformation, but also generates projective texture coordinates. The additional
program is linked into the pass like this:
shadow_receiver_vertex_program_ref myShadowReceiverVertexProgram
{
param_indexed_auto 0 worldviewproj_matrix
param_indexed_auto 4 texture_viewproj_matrix
}
For the purposes of writing this alternate program, there is an automatic parameter
binding of 'texture_viewproj_matrix' which provides the program with texture projection
parameters. The vertex program should do its normal vertex processing, and generate
texture coordinates using this matrix and place them in texture coord sets 0 and 1, since
some shadow techniques use 2 texture units. The colour of the vertices output by this vertex
program must always be white, so as not to affect the final colour of the rendered shadow.

When using additive texture shadows, the shadow pass render is actually the lighting
render, so if you perform any fragment program lighting you also need to pull in a custom
fragment program. You use shadow_receiver_fragment_program_ref for this:
shadow_receiver_fragment_program_ref myShadowReceiverFragmentProgram
{
param_named_auto lightDiffuse light_diffuse_colour 0
}
You should pass the projected shadow coordinates from the custom vertex program. As
for textures, texture unit 0 will always be the shadow texture. Any other textures which
you bind in your pass will be carried across too, but will be moved up by 1 unit to make
room for the shadow texture. Therefore your shadow receiver fragment program is likely
to be the same as the bare lighting pass of your normal material, except that you insert an
extra texture sampler at index 0, which you will use to adjust the result by (modulating
diffuse and specular components).
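Putting the receiver programs together, a pass for a material used with additive texture
shadows might be laid out as sketched below (all program names are hypothetical); note
that because the shadow texture implicitly occupies unit 0 during the receiver render, the
material's own diffuse texture is sampled at index 1 by the receiver fragment program:

pass
{
vertex_program_ref myLightingVertexProgram
{
param_indexed_auto 0 worldviewproj_matrix
}
shadow_receiver_vertex_program_ref myShadowReceiverVertexProgram
{
param_indexed_auto 0 worldviewproj_matrix
param_indexed_auto 4 texture_viewproj_matrix
}
fragment_program_ref myLightingFragmentProgram
{
param_named_auto lightDiffuse light_diffuse_colour 0
}
shadow_receiver_fragment_program_ref myShadowReceiverFragmentProgram
{
param_named_auto lightDiffuse light_diffuse_colour 0
}

// shifted up to sampler index 1 during the shadow receiver render
texture_unit
{
texture mydiffuse.png
}
}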

3.1.10 Vertex Texture Fetch


Introduction
More recent generations of video card allow you to perform a read from a texture in the
vertex program rather than just the fragment program, as is traditional. This allows you
to, for example, read the contents of a texture and displace vertices based on the intensity
of the colour contained within.

Declaring the use of vertex texture fetching


Since hardware support for vertex texture fetching is not ubiquitous, you should use the
uses_vertex_texture_fetch directive (see [Vertex texture fetching in vertex programs]) when
declaring your vertex programs which use vertex textures, so that if it is not supported,
technique fallback can be enabled. This is not strictly necessary for DirectX-targeted
shaders, since vertex texture fetching is only supported in vs_3_0, which can be stated as
a required syntax in your shader definition, but for OpenGL (GLSL), there are cards which
support GLSL but not vertex textures, so you should be explicit about your need for them.
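As a sketch (the program and source file names are hypothetical), such a declaration
might look like:

vertex_program Example/HeightDisplaceVP glsl
{
source height_displace_vp.glsl
uses_vertex_texture_fetch true
}

If the hardware cannot fetch textures in the vertex unit, any technique referencing this
program is then marked unsupported and a fallback technique can be selected.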

Render system texture binding differences


Unfortunately the method for binding textures so that they are available to a vertex program
is not well standardised. As at the time of writing, Shader Model 3.0 (SM3.0) hardware
under DirectX9 includes 4 separate sampler bindings for the purposes of vertex textures.
OpenGL, on the other hand, is able to access vertex textures in GLSL (and in assembler
through NV_vertex_program3, although this is less popular), but the textures are shared
with the fragment pipeline. I expect DirectX to move to the GL model with the advent of
DirectX10, since a unified shader architecture implies sharing of texture resources between
the two stages. As it is right now though, we're stuck with an inconsistent situation.

To reflect this, you should use the [binding_type], page 51 attribute in a texture unit
to indicate which unit you are targeting with your texture - 'fragment' (the default) or
'vertex'. For render systems that don't have separate bindings, this actually does nothing.
But for those that do, it will ensure your texture gets bound to the right processing unit.
Note that whilst DirectX9 has separate bindings for the vertex and fragment pipelines,
binding a texture to the vertex processing unit still uses up a ’slot’ which is then not available
for use in the fragment pipeline. I didn’t manage to find this documented anywhere, but
the nVidia samples certainly avoid binding a texture to the same index on both vertex
and fragment units, and when I tried to do it, the texture did not appear correctly in the
fragment unit, whilst it did as soon as I moved it into the next unit.
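In a material, the texture unit supplying a vertex texture would therefore be flagged as
follows (the texture name is hypothetical):

texture_unit
{
texture heightmap.dds
binding_type vertex
}

Leaving binding_type at its default ('fragment') would mean the texture is never visible
to the vertex program on render systems with separate bindings.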

Texture format limitations


Again as at the time of writing, the types of texture you can use in a vertex program are
limited to 1- or 4-component, full precision floating point formats. In code that equates to
PF_FLOAT32_R or PF_FLOAT32_RGBA. No other formats are supported. In addition,
the textures must be regular 2D textures (no cube or volume maps), and mipmapping and
filtering are not supported, although you can perform filtering in your vertex program if you
wish by sampling multiple times.

Hardware limitations
As at the time of writing (early Q3 2006), ATI do not support texture fetch in their current
crop of cards (Radeon X1n00). nVidia do support it in both their 6n00 and 7n00 range.
ATI support an alternative called ’Render to Vertex Buffer’, but this is not standardised
at this time and is very much different in its implementation, so cannot be considered to
be a drop-in replacement. This is the case even though the Radeon X1n00 cards claim to
support vs_3_0 (which requires vertex texture fetch).

3.1.11 Script Inheritance


When creating new script objects that are only slight variations of another object, it's good
to avoid copying and pasting between scripts. Script inheritance lets you do this; in this
section we'll use material scripts as an example, but this applies to all scripts parsed with
the script compilers in Ogre 1.6 onwards.

For example, to make a new material that is based on one previously defined, add a
colon : after the new material name followed by the name of the material that is to be
copied.

Format: material <NewUniqueChildName> : <ReferenceParentMaterial>

The only caveat is that a parent material must have been defined/parsed prior to the
child material script being parsed. The easiest way to achieve this is to either place parents
at the beginning of the material script file, or to use the ’import’ directive (See Section 3.1.14
[Script Import Directive], page 105). Note that inheritance is actually a copy - after scripts
are loaded into Ogre, objects no longer maintain their copy inheritance structure. If a
parent material is modified through code at runtime, the changes have no effect on child
materials that were copied from it in the script.

Material copying within the script alleviates some drudgery from copy/paste, but having
the ability to identify specific techniques, passes, and texture units to modify makes material
copying easier. By associating a name with them, techniques, passes, and texture units can
be identified directly in the child material without having to lay out the preceding ones:
techniques and passes can take a name, and texture units can be numbered within the
material script. You can also use variables, see Section 3.1.13 [Script Variables], page 104.

Names become very useful in materials that copy from other materials. In order to
override values they must be in the correct technique, pass, texture unit etc. The script
could be laid out using the sequence of techniques, passes, and texture units in the child
material, but if only one parameter needs to change in, say, the 5th pass, then the four
passes prior to the fifth would have to be placed in the script:

Here is an example:
material test2 : test1
{
technique
{
pass
{
}

pass
{
}

pass
{
}

pass
{
}

pass
{
ambient 0.5 0.7 0.3 1.0
}
}
}
This method is tedious for materials that only have slight variations to their parent. An
easier way is to name the pass directly without listing the previous passes:

material test2 : test1


{
technique 0
{
pass 4
{
ambient 0.5 0.7 0.3 1.0
}
}
}
The parent pass name must be known and the pass must be in the correct technique in
order for this to work correctly. Specifying the technique name and the pass name is the
best method. If the parent technique/pass are not named, then use their index values as
their names, as done in the example.
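For instance, if test1 had given its technique and pass names, the child could identify
them by name instead of index (the names here are hypothetical):

material test3 : test1
{
technique shaderBased
{
pass shading
{
ambient 0.5 0.7 0.3 1.0
}
}
}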

Adding new Techniques, Passes, to copied materials:


If a new technique or pass needs to be added to a copied material then use a unique name
for the technique or pass that does not exist in the parent material. Using an index for the
name that is one greater than the last index in the parent will do the same thing. The new
technique/pass will be added to the end of the techniques/passes copied from the parent
material.

Note: if passes or techniques aren't given a name, they will take on a default name based
on their index. For example, the first pass has index 0, so its name will be 0.
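For example, to append a brand new pass to a copied material, give it a name that does
not exist in the parent; the hypothetical 'Glow' pass below is added after the passes copied
from test1:

material test4 : test1
{
technique 0
{
pass Glow
{
emissive 0 1 0 1
}
}
}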

Identifying Texture Units to override values


A specific texture unit state (TUS) can be given a unique name within a pass of a material
so that it can be identified later in cloned materials that need to override specified texture
unit states in the pass without declaring previous texture units. Using a unique name for
a Texture unit in a pass of a cloned material adds a new texture unit at the end of the
texture unit list for the pass.

material BumpMap2 : BumpMap1


{
technique ati8500
{
pass 0
{
texture_unit NormalMap
{
texture BumpyMetalNM.png
}
}
}
}

Advanced Script Inheritance


Starting with Ogre 1.6, script objects can now inherit from each other more generally.
The previous concept of inheritance, material copying, was restricted only to the top-level
material objects. Now, any level of object can take advantage of inheritance (for instance,
techniques, passes, and compositor targets).

material Test
{
technique
{
pass : ParentPass
{
}
}
}
Notice that the pass inherits from ParentPass. This allows for the creation of more
fine-grained inheritance hierarchies.

Along with the more generalized inheritance system comes an important new keyword:
"abstract." This keyword is used at a top-level object declaration (not inside any other
object) to denote that it is not something that the compiler should actually attempt to
compile, but rather that it is only for the purpose of inheritance. For example, a material
declared with the abstract keyword will never be turned into an actual usable material in
the material framework. Objects which cannot be at a top-level in the document (like a
pass) but that you would like to declare as such for inheritance purposes must be declared
with the abstract keyword.

abstract pass ParentPass


{
diffuse 1 0 0 1
}
That declares the ParentPass object which was inherited from in the above example.
Notice the abstract keyword which informs the compiler that it should not attempt to
actually turn this object into any sort of Ogre resource. If it did attempt to do so, then it
would obviously fail, since a pass all on its own like that is not valid.

The final matching option is based on wildcards. Using the ’*’ character, you can make
a powerful matching scheme and override multiple objects at once, even if you don’t know
exact names or positions of those objects in the inherited object.

abstract technique Overrider


{
pass *color*
{
diffuse 0 0 0 0
}
}
This technique, when included in a material, will override all passes matching the
wildcard "*color*" (color has to appear somewhere in the name) and turn their diffuse
properties black. Their position and exact name in the inherited technique do not matter;
this will match them.

3.1.12 Texture Aliases


Texture aliases are useful when only the textures used in texture units need to be specified
for a cloned material. In the source material, i.e. the original material to be cloned, each
texture unit can be given a texture alias name. The cloned material in the script can then
specify what textures should be used for each texture alias. Note that texture aliases are
a more specific version of Section 3.1.13 [Script Variables], page 104, which can be used to
easily set other values.

Using texture aliases within texture units:


Format:
texture_alias <name>

Default: <name> will default to the texture unit name, if one is set


texture_unit DiffuseTex
{
texture diffuse.jpg
}
Here the texture alias defaults to DiffuseTex.

Example: The base material to be cloned:

material TSNormalSpecMapping
{
technique GLSL
{
pass
{
ambient 0.1 0.1 0.1
diffuse 0.7 0.7 0.7
specular 0.7 0.7 0.7 128

vertex_program_ref GLSLDemo/OffsetMappingVS
{
param_named_auto lightPosition light_position_object_space 0
param_named_auto eyePosition camera_position_object_space
param_named textureScale float 1.0
}

fragment_program_ref GLSLDemo/TSNormalSpecMappingFS
{
param_named normalMap int 0
param_named diffuseMap int 1
param_named fxMap int 2
}

// Normal map
texture_unit NormalMap
{
texture defaultNM.png
tex_coord_set 0
filtering trilinear
}

// Base diffuse texture map


texture_unit DiffuseMap
{
texture defaultDiff.png
filtering trilinear
tex_coord_set 1
}

// spec map for shininess


texture_unit SpecMap
{
texture defaultSpec.png
filtering trilinear
tex_coord_set 2
}

}
technique HLSL_DX9
{
pass
{

vertex_program_ref FxMap_HLSL_VS
{
param_named_auto worldViewProj_matrix worldviewproj_matrix
param_named_auto lightPosition light_position_object_space 0
param_named_auto eyePosition camera_position_object_space
}

fragment_program_ref FxMap_HLSL_PS
{
param_named ambientColor float4 0.2 0.2 0.2 0.2
}

// Normal map
texture_unit
{
texture_alias NormalMap
texture defaultNM.png
tex_coord_set 0
filtering trilinear
}

// Base diffuse texture map


texture_unit
{
texture_alias DiffuseMap
texture defaultDiff.png
filtering trilinear
tex_coord_set 1
}

// spec map for shininess


texture_unit
{
texture_alias SpecMap
texture defaultSpec.png
filtering trilinear
tex_coord_set 2
}

}
}
Note that the GLSL and HLSL techniques use the same textures. For each texture usage
type a texture alias is given that describes what the texture is used for. So the first texture
unit in the GLSL technique has the same alias as the TUS in the HLSL technique, since it
is the same texture used. The same goes for the second and third texture units.
For demonstration purposes, the GLSL technique makes use of texture unit naming and
therefore the texture alias name does not have to be set, since it defaults to the texture unit
name. So why not use the default all the time, since it's less typing? For most situations
you can. It's when you clone a material and then want to change the alias that you must
use the texture_alias command in the script. You cannot change the name of a texture unit
in a cloned material, so texture_alias provides a facility to assign an alias name.

Now we want to clone the material but only want to change the textures used. We could
copy and paste the whole material, but if we decide to change the base material later then
we also have to update the copied material in the script. With set_texture_alias, copying
a material is very easy. set_texture_alias is specified at the top of the material definition,
and all techniques using the specified texture alias will be affected by it.

Format:
set_texture_alias <alias_name> <texture_name>

material fxTest : TSNormalSpecMapping


{
set_texture_alias NormalMap fxTestNMap.png
set_texture_alias DiffuseMap fxTestDiff.png
set_texture_alias SpecMap fxTestMap.png
}
The textures in both techniques in the child material will automatically get replaced
with the new ones we want to use.

The same process can be done in code, as long as you set up the texture alias names,
so there is no need to traverse technique/pass/TUS to change a texture. You just
call myMaterialPtr->applyTextureAliases(myAliasTextureNameList), which will update all
textures in all texture units that match the alias names in the map container reference you
passed as a parameter.

You don’t have to supply all the textures in the copied material.

material fxTest2 : fxTest


{
set_texture_alias DiffuseMap fxTest2Diff.png
set_texture_alias SpecMap fxTest2Map.png


}
Material fxTest2 only changes the diffuse and spec maps of material fxTest and uses the
same normal map.

Another example:
material fxTest3 : TSNormalSpecMapping
{
set_texture_alias DiffuseMap fxTest2Diff.png
}
fxTest3 will end up with the default textures for the normal map and spec map setup
in TSNormalSpecMapping material but will have a different diffuse map. So your base
material can define the default textures to use and then the child materials can override
specific textures.

3.1.13 Script Variables


A very powerful new feature in Ogre 1.6 is variables. Variables allow you to parameterize
data in materials so that they can become more generalized. This enables greater reuse of
scripts by targeting specific customization points. Using variables along with inheritance
allows for huge amounts of overrides and easy object reuse.

abstract pass ParentPass


{
diffuse $diffuse_colour
}

material Test
{
technique
{
pass : ParentPass
{
set $diffuse_colour "1 0 0 1"
}
}
}
The ParentPass object declares a variable called "$diffuse_colour" which is then
overridden in the Test material's pass. The "set" keyword is used to set the value of that
variable. The variable assignment follows lexical scoping rules, which means that the value
of "1 0 0 1" is only valid inside that pass definition. Variable assignment in outer scopes
carries over into inner scopes.

material Test
{
set $diffuse_colour "1 0 0 1"
technique
{
pass : ParentPass
{
}
}
}
The $diffuse_colour assignment carries down through the technique and into the pass.

3.1.14 Script Import Directive


Imports are a feature introduced to remove ambiguity from script dependencies. When
using scripts that inherit from each other but which are defined in separate files, errors
sometimes occur because the scripts are loaded in an incorrect order. Using imports removes
this issue. The script which is inheriting another can explicitly import its parent's definition,
which ensures that no errors occur because the parent's definition was not found.

import * from "parent.material"


material Child : Parent
{
}
The material "Parent" is defined in parent.material and the import ensures that those
definitions are found properly. You can also import specific targets from within a file.
import Parent from "parent.material"
If there were other definitions in the parent.material file, they would not be imported.

Note, however, that importing does not actually cause objects in the imported script to
be fully parsed & created; it just makes the definitions available for inheritance. This has a
specific ramification for vertex / fragment program definitions, which must be loaded before
any parameters can be specified. You should continue to put common program definitions
in .program files to ensure they are fully parsed before being referenced in multiple
.material files. The 'import' command just makes sure you can resolve dependencies
between equivalent script definitions (e.g. material to material).

3.2 Compositor Scripts


The compositor framework is a subsection of the OGRE API that allows you to easily
define full screen post-processing effects. Compositor scripts offer you the ability to define
compositor effects in a script which can be reused and modified easily, rather than having to
use the API to define them. You still need to use code to instantiate a compositor against
one of your visible viewports, but this is a much simpler process than actually defining the
compositor itself.

Compositor Fundamentals
Performing post-processing effects generally involves first rendering the scene to a texture,
either in addition to or instead of the main window. Once the scene is in a texture, you
can then pull the scene image into a fragment program and perform operations on it by
rendering it through a full screen quad. The target of this post-processing render can be the
main result (e.g. a window), or it can be another render texture so that you can perform
multi-stage convolutions on the image. You can even ’ping-pong’ the render back and forth
between a couple of render textures to perform convolutions which require many iterations,
without using a separate texture for each stage. Eventually you’ll want to render the result
to the final output, which you do with a full screen quad. This might replace the whole
window (thus the main window doesn’t need to render the scene itself), or it might be a
combinational effect.
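The ping-pong arrangement described above can be sketched as a compositor script (the
syntax is detailed in the following sections; the material names are hypothetical):

compositor PingPongBlur
{
technique
{
texture rt0 target_width target_height PF_A8R8G8B8
texture rt1 target_width target_height PF_A8R8G8B8

// render the scene (or previous compositor output) into rt0
target rt0
{
input previous
}

// first convolution iteration: rt0 -> rt1
target rt1
{
input none
pass render_quad
{
material Example/Blur
input 0 rt0
}
}

// second iteration back the other way: rt1 -> rt0
target rt0
{
input none
pass render_quad
{
material Example/Blur
input 0 rt1
}
}

// final combine to the output
target_output
{
input none
pass render_quad
{
material Example/Combine
input 0 rt0
}
}
}
}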

So that we can discuss how to implement these techniques efficiently, a number of defi-
nitions are required:

Compositor
Definition of a fullscreen effect that can be applied to a user viewport. This is
what you’re defining when writing compositor scripts as detailed in this section.
Compositor Instance
An instance of a compositor as applied to a single viewport. You create these
based on compositor definitions, See Section 3.2.4 [Applying a Compositor],
page 120.
Compositor Chain
It is possible to enable more than one compositor instance on a viewport at the
same time, with one compositor taking the results of the previous one as input.
This is known as a compositor chain. Every viewport which has at least one
compositor attached to it has a compositor chain. See Section 3.2.4 [Applying
a Compositor], page 120
Target This is a RenderTarget, i.e. the place where the result of a series of render
operations is sent. A target may be the final output (and this is implicit, you
don’t have to declare it), or it may be an intermediate render texture, which
you declare in your script with the [compositor texture], page 109. A target
which is not the output target has a defined size and pixel format which you
can control.
Output Target
As Target, but this is the single final result of all operations. The size and pixel
format of this target cannot be controlled by the compositor since it is defined
by the application using it, thus you don’t declare it in your script. However,
you do declare a Target Pass for it, see below.
Target Pass
A Target may be rendered to many times in the course of a composition effect.
In particular if you ’ping pong’ a convolution between a couple of textures, you
will have more than one Target Pass per Target. Target passes are declared in the script
using a 'target' or 'target_output' section (see Section 3.2.2 [Compositor Target Passes],
page 112), the latter being the final output target pass, of which there can be only one.
Pass Within a Target Pass, there are one or more individual Section 3.2.3 [Compos-
itor Passes], page 114, which perform a very specific action, such as rendering
the original scene (or pulling the result from the previous compositor in the
chain), rendering a fullscreen quad, or clearing one or more buffers. Typically
within a single target pass you will use either a 'render_scene' pass or a 'render_quad'
pass, not both. Clear can be used with either type.

Loading scripts
Compositor scripts are loaded when resource groups are initialised: OGRE looks in all
resource locations associated with the group (see Root::addResourceLocation) for files with
the ’.compositor’ extension and parses them. If you want to parse files manually, use
CompositorSerializer::parseScript.

Format
Several compositors may be defined in a single script. The script format is pseudo-C++,
with sections delimited by curly braces ('{', '}'), and comments indicated by starting a line
with '//' (note, no nested form comments allowed). The general format is shown in the
example below:

// This is a comment
// Black and white effect
compositor B&W
{
technique
{
// Temporary textures
texture rt0 target_width target_height PF_A8R8G8B8
target rt0
{
// Render output from previous compositor (or original scene)
input previous
}

target_output
{
// Start with clear output
input none
// Draw a fullscreen quad with the black and white image
pass render_quad
{
// Renders a fullscreen quad with a material
material Ogre/Compositor/BlackAndWhite
input 0 rt0
}
}
}
}
Every compositor in the script must be given a name, which is the line 'compositor
<name>' before the first opening '{'. This name must be globally unique. It can include path
characters (as in the example) to logically divide up your compositors, and also to avoid
duplicate names, but the engine does not treat the name as hierarchical, just as a string.
Names can include spaces but must be surrounded by double quotes, i.e. compositor "My
Name".

The major components of a compositor are the Section 3.2.1 [Compositor Techniques],
page 108, the Section 3.2.2 [Compositor Target Passes], page 112 and the Section 3.2.3
[Compositor Passes], page 114, which are covered in detail in the following sections.

3.2.1 Techniques
A compositor technique is much like a Section 3.1.1 [Techniques], page 21 in that it describes
one approach to achieving the effect you’re looking for. A compositor definition can have
more than one technique if you wish to provide some fallback should the hardware not
support the technique you’d prefer to use. Techniques are evaluated for hardware support
based on 2 things:

Material support
All Section 3.2.3 [Compositor Passes], page 114 that render a fullscreen quad
use a material; for the technique to be supported, all of the materials refer-
enced must have at least one supported material technique. If they don’t, the
compositor technique is marked as unsupported and won’t be used.

Texture format support


This one is slightly more complicated. When you request a
[compositor texture], page 109 in your technique, you request a pixel
format. Not all formats are natively supported by hardware, especially the
floating point formats. However, in this case the hardware will typically
downgrade the texture format requested to one that the hardware does
support - with compositor effects though, you might want to use a different
approach if this is the case. So, when evaluating techniques, the compositor
will first look for native support for the exact pixel format you’ve asked for,
and will skip onto the next technique if it is not supported, thus allowing you
to define other techniques with simpler pixel formats which use a different
approach. If it doesn’t find any techniques which are natively supported, it
tries again, this time allowing the hardware to downgrade the texture format
and thus should find at least some support for what you’ve asked for.

As with material techniques, compositor techniques are evaluated in the order you define
them in the script, so techniques declared first are preferred over those declared later.
Format: technique

Techniques can have the following nested elements:


• [compositor texture], page 109
• [compositor texture ref], page 111
• [compositor scheme], page 111
• [compositor logic], page 111
• Section 3.2.2 [Compositor Target Passes], page 112

texture
This declares a render texture for use in subsequent Section 3.2.2 [Compositor Target
Passes], page 112.

Format: texture <Name> <Width> <Height> <Pixel Format> [<MRT Pixel Format2>]
[<MRT Pixel FormatN>] [pooled] [gamma] [no_fsaa] [<scope>]

Here is a description of the parameters:

Name A name to give the render texture, which must be unique within this compos-
itor. This name is used to reference the texture in Section 3.2.2 [Compositor
Target Passes], page 112, when the texture is rendered to, and in Section 3.2.3
[Compositor Passes], page 114, when the texture is used as input to a material
rendering a fullscreen quad.
Width, Height
The dimensions of the render texture. You can either specify a fixed width
and height, or you can request that the texture is based on the physical
dimensions of the viewport to which the compositor is attached. The options
for the latter are 'target_width', 'target_height', 'target_width_scaled <factor>'
and 'target_height_scaled <factor>' - where 'factor' is the amount by which you
wish to multiply the size of the main target to derive the dimensions.
Pixel Format
The pixel format of the render texture. This affects how much memory it
will take, what colour channels will be available, and what precision you
will have within those channels. The available options are PF_A8R8G8B8,
PF_R8G8B8A8, PF_R8G8B8, PF_FLOAT16_RGBA, PF_FLOAT16_RGB,
PF_FLOAT16_R, PF_FLOAT32_RGBA, PF_FLOAT32_RGB, and
PF_FLOAT32_R.
pooled If present, this directive makes this texture ’pooled’ among compositor in-
stances, which can save some memory.
gamma If present, this directive means that sRGB gamma correction will be enabled
on writes to this texture. You should remember to include the opposite sRGB
conversion when you read this texture back in another material, such as a quad.
This option will be automatically enabled if you use a render_scene pass on this
texture and the viewport on which the compositor is based has sRGB write
support enabled.
no_fsaa If present, this directive disables the use of anti-aliasing on this texture. FSAA
is only used if this texture is subject to a render_scene pass and FSAA was
enabled on the original viewport on which this compositor is based; this option
allows you to override it and disable the FSAA if you wish.
scope If present, this directive sets the scope for the texture, for being accessed by other
compositors using the [compositor texture ref], page 111 directive. There are
three options: 'local_scope' (the default) means that only the
compositor defining the texture can access it. 'chain_scope' means that the
compositors after this compositor in the chain can reference its textures, and
'global_scope' means that the entire application can access the texture. This
directive also affects the creation of the textures (global textures are created
once and thus can't be used with the pooled directive, and can't rely on the
viewport size).
Example: texture rt0 512 512 PF_R8G8B8A8
Example: texture rt1 target_width target_height PF_FLOAT32_RGB

You can in fact repeat this element if you wish. If you do so, the render texture becomes
a Multiple Render Target (MRT), whereby the GPU writes to multiple textures at once.
It is imperative that if you use an MRT, the shaders that render to it render to ALL the
targets; not doing so can cause undefined results. It is also important to note that although
you can use different pixel formats for each target in an MRT, each one should have the
same total bit depth, since most cards do not support independent bit depths. If you try
to use this feature on cards that do not support the number of MRTs you've asked for, the
technique will be skipped (so you ought to write a fallback technique).

Example: texture mrt_output target_width target_height PF_FLOAT16_RGBA
PF_FLOAT16_RGBA chain_scope

texture_ref
This declares a reference of a texture from another compositor to be used in this compositor.

Format: texture_ref <Local Name> <Reference Compositor> <Reference Texture Name>
Here is a description of the parameters:
Local Name
A name to give the referenced texture, which must be unique within this com-
positor. This name is used to reference the texture in Section 3.2.2 [Compositor
Target Passes], page 112, when the texture is rendered to, and in Section 3.2.3
[Compositor Passes], page 114, when the texture is used as input to a material
rendering a fullscreen quad.
Reference Compositor
The name of the compositor that we are referencing a texture from
Reference Texture Name
The name of the texture in the compositor that we are referencing
Make sure that the texture being referenced is scoped accordingly (either chain or global
scope) and placed accordingly during chain creation (if referencing a chain-scoped texture,
the compositor must be present in the chain and placed before the compositor referencing
it).
Example: texture_ref GBuffer GBufferCompositor mrt_output

scheme
This gives a compositor technique a scheme name, allowing you to manually switch be-
tween different techniques for this compositor when instantiated on a viewport by calling
CompositorInstance::setScheme.

Format: material_scheme <Name>

compositor_logic
This connects a compositor to code that it requires in order to function correctly.
When an instance of this compositor is created, the compositor logic will be notified
and will have the chance to prepare the compositor's operation (for example, adding a
listener).

Format: compositor_logic <Name>


Registration of compositor logics is done by name through
CompositorManager::registerCompositorLogic.
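
As an illustrative sketch (the compositor name 'Example/HeatVision' and logic name
'HeatVision' are hypothetical), the directive sits inside a technique, and the matching
C++ registration must use the same name:

    compositor Example/HeatVision
    {
        technique
        {
            compositor_logic HeatVision

            target_output
            {
                input previous
            }
        }
    }

On the code side, something like
CompositorManager::getSingleton().registerCompositorLogic("HeatVision", &myLogic)
must run before the compositor is instantiated, where myLogic is an instance of a
CompositorLogic subclass of your own.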

3.2.2 Target Passes


A target pass is the action of rendering to a given target, either a render texture or the final
output. You can update the same render texture multiple times by adding more than one
target pass to your compositor script - this is very useful for ’ping pong’ renders between a
couple of render textures to perform complex convolutions that cannot be done in a single
render, such as blurring.

There are two types of target pass, the sort that updates a render texture:

Format: target <Name>

... and the sort that defines the final output render:

Format: target_output

The contents of both are identical; the only real difference is that you can only have a
single target_output entry, whilst you can have many target entries. Here are the attributes
you can use in a 'target' or 'target_output' section of a .compositor script:
• [compositor target input], page 112
• [only initial], page 113
• [visibility mask], page 113
• [compositor lod bias], page 113
• [material scheme], page 114
• [compositor shadows], page 113
• Section 3.2.3 [Compositor Passes], page 114
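
To make the 'ping pong' idea mentioned above concrete, here is a hedged sketch of a
two-texture blur; the compositor, texture and material names are hypothetical:

    compositor Example/PingPongBlur
    {
        technique
        {
            texture rt0 target_width target_height PF_A8R8G8B8
            texture rt1 target_width target_height PF_A8R8G8B8

            // Grab the scene as rendered so far
            target rt0
            {
                input previous
            }

            // Horizontal blur: rt0 -> rt1
            target rt1
            {
                input none
                pass render_quad
                {
                    material Example/BlurHorizontal
                    input 0 rt0
                }
            }

            // Vertical blur: rt1 -> rt0
            target rt0
            {
                input none
                pass render_quad
                {
                    material Example/BlurVertical
                    input 0 rt1
                }
            }

            // Copy the result to the viewport
            target_output
            {
                input none
                pass render_quad
                {
                    material Example/Copy
                    input 0 rt0
                }
            }
        }
    }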

Attribute Descriptions
input
Sets input mode of the target, which tells the target pass what is pulled in before any of its
own passes are rendered.

Format: input (none | previous)

Default: input none


none The target will have nothing as input, all the contents of the target must be
generated using its own passes. Note this does not mean the target will be
empty, just no data will be pulled in. For it to truly be blank you’d need a
’clear’ pass within this target.
previous The target will pull in the previous contents of the viewport. This will be either
the original scene if this is the first compositor in the chain, or it will be the
output from the previous compositor in the chain if the viewport has multiple
compositors enabled.

only_initial
If set to on, this target pass will only execute once, initially after the effect has been
enabled. This can be useful for performing one-off renders, after which the static contents
are used by the rest of the compositor.

Format: only_initial (on | off)

Default: only_initial off

visibility_mask
Sets the visibility mask for any render_scene passes performed in this target pass. This
is a bitmask (although it must be specified as decimal, not hex) and maps to
SceneManager::setVisibilityMask.

Format: visibility_mask <mask>

Default: visibility_mask 4294967295

lod_bias
Set the scene LOD bias for any render_scene passes performed in this target pass. The
default is 1.0; values below that mean lower quality, values above mean higher quality.

Format: lod_bias <lodbias>

Default: lod_bias 1.0

shadows
Sets whether shadows should be rendered during any render_scene pass performed in this
target pass. The default is 'on'.

Format: shadows (on | off)

Default: shadows on

material_scheme
If set, indicates the material scheme to use for any render_scene pass. Useful for performing
special-case rendering effects.

Format: material_scheme <scheme name>

Default: None

3.2.3 Compositor Passes


A pass is a single rendering action to be performed in a target pass.

Format: 'pass' (render_quad | clear | stencil | render_scene | render_custom) [custom
name]

There are five types of pass:


clear This kind of pass sets the contents of one or more buffers in the target to a
fixed value. So this could clear the colour buffer to a fixed colour, set the depth
buffer to a certain set of contents, fill the stencil buffer with a value, or any
combination of the above.
stencil This kind of pass configures stencil operations for the subsequent passes. It
can set the stencil compare function, operations and reference values for you to
perform your own stencil effects.
render_scene
This kind of pass performs a regular rendering of the scene. It will
use the [visibility mask], page 113, [compositor lod bias], page 113, and
[material scheme], page 114 from the parent target pass.
render_quad
This kind of pass renders a quad over the entire render target, using a given
material. You will undoubtedly want to pull in the results of other target passes
into this operation to perform fullscreen effects.
render_custom
This kind of pass is just a callback to user code for the composition
pass specified in the custom name (and registered via CompositorMan-
ager::registerCustomCompositionPass) and allows the user to create custom
render operations for more advanced effects. This is the only pass type that
requires the custom name parameter.
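
For example, a custom pass declaration might look like this; the name 'DeferredLighting'
is hypothetical and must match a pass registered via
CompositorManager::registerCustomCompositionPass:

    target_output
    {
        input none
        pass render_custom DeferredLighting
    }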
Here are the attributes you can use in a ’pass’ section of a .compositor script:

Available Pass Attributes


• [material], page 115
• [compositor pass input], page 115
• [compositor pass identifier], page 115
• [first render queue], page 116
• [last render queue], page 116
• [compositor pass material scheme], page 116
• [compositor clear], page 116
• [compositor stencil], page 117

material
For passes of type 'render_quad', sets the material used to render the quad. You
will want to use shaders in this material to perform fullscreen effects, and use the
[compositor pass input], page 115 attribute to map other texture targets into the texture
bindings needed by this material.

Format: material <Name>

input
For passes of type 'render_quad', this is how you map one or more local render textures
(See [compositor texture], page 109) into the material you’re using to render the fullscreen
quad. To bind more than one texture, repeat this attribute with different sampler indexes.

Format: input <sampler> <Name> [<MRTIndex>]

sampler The texture sampler to set, must be a number in the range [0,
OGRE_MAX_TEXTURE_LAYERS-1].
Name The name of the local render texture to bind, as declared in
[compositor texture], page 109 and rendered to in one or more
Section 3.2.2 [Compositor Target Passes], page 112.
MRTIndex
If the local texture that you’re referencing is a Multiple Render Target (MRT),
this identifies the surface from the MRT that you wish to reference (0 is the
first surface, 1 the second etc).
Example: input 0 rt0
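
For instance, assuming a local MRT declared under the hypothetical name 'mrt_output',
its first two surfaces could be bound to samplers 0 and 1 of the quad material:

    pass render_quad
    {
        material Example/DeferredShading
        input 0 mrt_output 0
        input 1 mrt_output 1
    }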

identifier
Associates a numeric identifier with the pass. This is useful for registering a listener with
the compositor (CompositorInstance::addListener), and being able to identify which pass it
is that’s being processed when you get events regarding it. Numbers between 0 and 2^32
are allowed.

Format: identifier <number>

Example: identifier 99945

Default: identifier 0

first_render_queue


For passes of type 'render_scene', this sets the first render queue id that is included in the
render. Defaults to the value of RENDER_QUEUE_SKIES_EARLY.

Format: first_render_queue <id>

Default: first_render_queue 0

last_render_queue


For passes of type 'render_scene', this sets the last render queue id that is included in the
render. Defaults to the value of RENDER_QUEUE_SKIES_LATE.

Format: last_render_queue <id>

Default: last_render_queue 95
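
As a sketch, a render_scene pass restricted to the main geometry queue only; this
assumes the stock queue ids, where RENDER_QUEUE_MAIN is 50 (check OgreRenderQueue.h
for your build):

    pass render_scene
    {
        first_render_queue 50
        last_render_queue 50
    }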

material_scheme
If set, indicates the material scheme to use for this pass only. Useful for performing special-
case rendering effects.

This overrides any scheme set at the target scope.

Format: material_scheme <scheme name>

Default: None

Clear Section
For passes of type ’clear’, this section defines the buffer clearing parameters.

Format: clear

Here are the attributes you can use in a ’clear’ section of a .compositor script:
• [compositor clear buffers], page 117
• [compositor clear colour value], page 117
• [compositor clear depth value], page 117
• [compositor clear stencil value], page 117

buffers
Sets the buffers cleared by this pass.

Format: buffers [colour] [depth] [stencil]

Default: buffers colour depth

colour_value
Set the colour used to fill the colour buffer by this pass, if the colour buffer is being
cleared ([compositor clear buffers], page 117).

Format: colour_value <red> <green> <blue> <alpha>

Default: colour_value 0 0 0 0

depth_value
Set the depth value used to fill the depth buffer by this pass, if the depth buffer is being
cleared ([compositor clear buffers], page 117).

Format: depth_value <depth>

Default: depth_value 1.0

stencil_value
Set the stencil value used to fill the stencil buffer by this pass, if the stencil buffer is
being cleared ([compositor clear buffers], page 117).

Format: stencil_value <value>

Default: stencil_value 0.0
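
Putting the clear attributes together, a pass that wipes all three buffers might look like
this (the values shown are just the documented defaults):

    pass clear
    {
        buffers colour depth stencil
        colour_value 0 0 0 0
        depth_value 1.0
        stencil_value 0.0
    }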


Stencil Section
For passes of type ’stencil’, this section defines the stencil operation parameters.

Format: stencil

Here are the attributes you can use in a ’stencil’ section of a .compositor script:
• [compositor stencil check], page 118
• [compositor stencil comp func], page 118
• [compositor stencil ref value], page 118
• [compositor stencil mask], page 119
• [compositor stencil fail op], page 119
• [compositor stencil depth fail op], page 119
• [compositor stencil pass op], page 120
• [compositor stencil two sided], page 120

check
Enables or disables the stencil check, thus enabling the use of the rest of the features
in this section. The rest of the options in this section do nothing if the stencil check is
off.

Format: check (on | off)

comp_func
Sets the function used to perform the following comparison:

(ref_value & mask) comp_func (Stencil Buffer Value & mask)

What happens as a result of this comparison will be one of 3 actions on the stencil
buffer, depending on whether the test fails, succeeds but with the depth buffer check
still failing, or succeeds with the depth buffer check passing too. You set these actions
in [compositor stencil fail op], page 119, [compositor stencil depth fail op],
page 119 and [compositor stencil pass op], page 120 respectively. If the stencil check
fails, no colour or depth are written to the frame buffer.

Format: comp_func (always_fail | always_pass | less | less_equal | not_equal |
greater_equal | greater)

Default: comp_func always_pass


ref_value
Sets the reference value used to compare with the stencil buffer as described in
[compositor stencil comp func], page 118.

Format: ref_value <value>

Default: ref_value 0.0

mask
Sets the mask used to compare with the stencil buffer as described in
[compositor stencil comp func], page 118.

Format: mask <value>

Default: mask 4294967295

fail_op
Sets what to do with the stencil buffer value if the stencil comparison
([compositor stencil comp func], page 118) fails.

Format: fail_op (keep | zero | replace | increment | decrement | increment_wrap |
decrement_wrap | invert)

Default: fail_op keep

These actions mean:


keep Leave the stencil buffer unchanged.
zero Set the stencil value to zero.
replace Set the stencil value to the reference value.
increment Add one to the stencil value, clamping at the maximum value.
decrement Subtract one from the stencil value, clamping at 0.
increment_wrap
Add one to the stencil value, wrapping back to 0 at the maximum.
decrement_wrap
Subtract one from the stencil value, wrapping to the maximum below 0.
invert Invert the stencil value.

depth_fail_op
Sets what to do with the stencil buffer value if the result of the stencil comparison
([compositor stencil comp func], page 118) passes but the depth comparison fails.

Format: depth_fail_op (keep | zero | replace | increment | decrement | increment_wrap
| decrement_wrap | invert)

Default: depth_fail_op keep

pass_op
Sets what to do with the stencil buffer value if both the stencil comparison
([compositor stencil comp func], page 118) and the depth comparison pass.

Format: pass_op (keep | zero | replace | increment | decrement | increment_wrap |
decrement_wrap | invert)

Default: pass_op keep

two_sided
Enables or disables two-sided stencil operations, which means the inverse of the oper-
ations applies to back-facing polygons.

Format: two_sided (on | off)

Default: two_sided off
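
As a combined sketch, a stencil pass that writes the reference value into the stencil
buffer wherever subsequent geometry passes both tests (the values are illustrative only):

    pass stencil
    {
        check on
        comp_func always_pass
        ref_value 1
        mask 4294967295
        fail_op keep
        depth_fail_op keep
        pass_op replace
        two_sided off
    }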

3.2.4 Applying a Compositor


Adding a compositor instance to a viewport is very simple. All you need to do is:

CompositorManager::getSingleton().addCompositor(viewport, compositorName);

Where viewport is a pointer to your viewport, and compositorName is the name of the
compositor to create an instance of. By doing this, a new instance of a compositor will
be added to a new compositor chain on that viewport. You can call the method multiple
times to add further compositors to the chain on this viewport. By default, each compositor
which is added is disabled, but you can change this state by calling:

CompositorManager::getSingleton().setCompositorEnabled(viewport, compositorName, enabledOrDisabled);


For more information on defining and using compositors, see Demo Compositor in the
Samples area, together with the Examples.compositor script in the media area.

3.3 Particle Scripts


Particle scripts allow you to define particle systems to be instantiated in your code without
having to hard-code the settings themselves in your source code, allowing a very quick
turnaround on any changes you make. Particle systems which are defined in scripts are
used as templates, and multiple actual systems can be created from them at runtime.

Loading scripts
Particle system scripts are loaded at initialisation time by the system: by default it looks in
all common resource locations (see Root::addResourceLocation) for files with the ’.particle’
extension and parses them. If you want to parse files with a different extension, use the Par-
ticleSystemManager::getSingleton().parseAllSources method with your own extension, or if
you want to parse an individual file, use ParticleSystemManager::getSingleton().parseScript.

Once scripts have been parsed, your code is free to instantiate systems based on them
using the SceneManager::createParticleSystem() method which can take both a name for
the new system, and the name of the template to base it on (this template name is in the
script).

Format
Several particle systems may be defined in a single script. The script format is pseudo-C++,
with sections delimited by curly braces ({}), and comments indicated by starting a line with
'//' (note: nested comments are not allowed). The general format is shown below in a
typical example:
// A sparkly purple fountain
particle_system Examples/PurpleFountain
{
material Examples/Flare2
particle_width 20
particle_height 20
cull_each false
quota 10000
billboard_type oriented_self

// Point emitter
emitter Point
{
angle 15
emission_rate 75
time_to_live 3
direction 0 1 0
velocity_min 250
velocity_max 300
colour_range_start 1 0 0
colour_range_end 0 0 1
}

// Gravity
affector LinearForce
{
force_vector 0 -100 0
force_application add
}

// Fader
affector ColourFader
{
red -0.25
green -0.25
blue -0.25
}
}

Every particle system in the script must be given a name, which is the line before the
first opening '{'; in the example this is 'Examples/PurpleFountain'. This name must be
globally unique. It can include path characters (as in the example) to logically divide up
your particle systems, and also to avoid duplicate names, but the engine does not treat the
name as hierarchical, just as a string.

A system can have top-level attributes set using the scripting commands available, such
as ’quota’ to set the maximum number of particles allowed in the system. Emitters (which
create particles) and affectors (which modify particles) are added as nested definitions within
the script. The parameters available in the emitter and affector sections are entirely depen-
dent on the type of emitter / affector.

For a detailed description of the core particle system attributes, see the list below:

Available Particle System Attributes


• [quota], page 123
• [particle material], page 123


• [particle width], page 124
• [particle height], page 124
• [cull each], page 124
• [billboard type], page 125
• [billboard origin], page 126
• [billboard rotation type], page 127
• [common direction], page 127
• [common up vector], page 128
• [particle renderer], page 124
• [particle sorted], page 125
• [particle localspace], page 125
• [particle point rendering], page 128
• [particle accurate facing], page 129
• [iteration interval], page 129
• [nonvisible update timeout], page 129
See also: Section 3.3.2 [Particle Emitters], page 130, Section 3.3.5 [Particle Affectors],
page 137

3.3.1 Particle System Attributes


This section describes the attributes which you can set on every particle system using scripts.
All attributes have default values so all settings are optional in your script.

quota
Sets the maximum number of particles this system is allowed to contain at one time. When
this limit is exhausted, the emitters will not be allowed to emit any more particles until
some are destroyed (e.g. through their time_to_live running out). Note that you will almost
always want to change this, since it defaults to a very low value (particle pools are only
ever increased in size, never decreased).

format: quota <max particles>


example: quota 10000
default: 10

material
Sets the name of the material which all particles in this system will use. All particles in a
system use the same material, although each particle can tint this material through the use
of its colour property.

format: material <material name>


example: material Examples/Flare
default: none (blank material)

particle_width
Sets the width of particles in world coordinates. Note that this property is absolute when
billboard_type (see below) is set to 'point' or 'perpendicular_self', but is scaled by the
length of the direction vector when billboard_type is 'oriented_common', 'oriented_self' or
'perpendicular_common'.

format: particle_width <width>


example: particle_width 20
default: 100

particle_height
Sets the height of particles in world coordinates. Note that this property is absolute when
billboard_type (see below) is set to 'point' or 'perpendicular_self', but is scaled by the
length of the direction vector when billboard_type is 'oriented_common', 'oriented_self' or
'perpendicular_common'.

format: particle_height <height>


example: particle_height 20
default: 100

cull_each
All particle systems are culled by the bounding box which contains all the particles in the
system. This is normally sufficient for fairly locally constrained particle systems where
most particles are either visible or not visible together. However, for those that spread
particles over a wider area (e.g. a rain system), you may want to cull each particle
individually to save time, since it is far more likely that only a subset of the particles
will be visible. You do this by setting the cull_each parameter to true.

format: cull_each <true|false>


example: cull_each true
default: false

renderer
Particle systems do not render themselves; they do it through ParticleRenderer classes.
Those classes are registered with a manager in order to provide particle systems with a
particular 'look'. OGRE comes configured with a default billboard-based renderer, but
more can be added through plugins. Particle renderers are registered with a unique name,
and you can use that name in this attribute to determine the renderer to use. The default
is 'billboard'.

Particle renderers can have attributes, which can be passed by setting them on the root
particle system.

format: renderer <renderer name>


default: billboard

sorted
By default, particles are not sorted. By setting this attribute to ’true’, the particles will be
sorted with respect to the camera, furthest first. This can make certain rendering effects
look better at a small sorting expense.

format: sorted <true|false>


default: false

local_space
By default, particles are emitted into world space, such that if you transform the node to
which the system is attached, it will not affect the particles (only the emitters). This tends
to give the normal expected behaviour, which is to model how real-world particles travel
independently from the objects they are emitted from. However, to create some effects you
may want the particles to remain attached to the local space the emitter is in and to follow
it directly. This option allows you to do that.

format: local_space <true|false>


default: false
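
For instance, a hedged sketch of a flame that should move rigidly with the node it is
attached to (the material name and emitter values are hypothetical):

    particle_system Examples/LocalFlame
    {
        material Examples/Flame
        particle_width 10
        particle_height 10
        local_space true

        emitter Point
        {
            emission_rate 30
            time_to_live 1
            direction 0 1 0
            velocity 20
        }
    }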

billboard_type
This is actually an attribute of the 'billboard' particle renderer (the default), and is an
example of passing attributes to a particle renderer by declaring them directly within the
system declaration. Particles using the default renderer are rendered using billboards, which
are rectangles formed by 2 triangles which rotate to face the given direction. However, there
is more than one way to orient a billboard. The classic approach is for the billboard to directly
face the camera: this is the default behaviour. However, this arrangement only looks good
for particles which represent something vaguely spherical, like a light flare. For more
linear effects like laser fire, you actually want the particle to have an orientation of its own.

format: billboard_type <point|oriented_common|oriented_self|perpendicular_common|perpendicular_self>

example: billboard_type oriented_self


default: point

The options for this parameter are:


point The default arrangement; this approximates spherical particles and the bill-
boards always fully face the camera.
oriented_common
Particles are oriented around a common, typically fixed direction vector (see
[common direction], page 127), which acts as their local Y axis. The billboard
rotates only around this axis, giving the particle some sense of direction. Good
for rainstorms, starfields etc. where the particles will be travelling in one direction
- this is slightly faster than oriented_self (see below).
oriented_self
Particles are oriented around their own direction vector, which acts as their
local Y axis. As the particle changes direction, the billboard reorients itself
to face this way. Good for laser fire, fireworks and other 'streaky' particles that
should look like they are travelling in their own direction.
perpendicular_common
Particles are perpendicular to a common, typically fixed direction vector (see
[common direction], page 127), which acts as their local Z axis, with their lo-
cal Y axis coplanar with the common direction and the common up vector (see
[common up vector], page 128). The billboard never rotates to face the cam-
era; you might use a double-sided material to ensure the particles are never culled
by back-face culling. Good for aureolas, rings etc. where the particles will be
perpendicular to the ground - this is slightly faster than perpendicular_self (see below).
perpendicular_self
Particles are perpendicular to their own direction vector, which acts as their
local Z axis, with their local Y axis coplanar with their own direction vector
and the common up vector (see [common up vector], page 128). The billboard
never rotates to face the camera; you might use a double-sided material to en-
sure the particles are never culled by back-face culling. Good for stacked rings
etc. where the particles will be perpendicular to their direction of travel.

billboard_origin
Specifies the point which acts as the origin for all billboard particles; this controls the
fine-tuning of where a billboard particle appears in relation to its position.

format: billboard_origin <top_left|top_center|top_right|center_left|center|center_right|bottom_left|bottom_center|bottom_right>

example: billboard_origin top_right


default: center

The options for this parameter are:

top_left The billboard origin is the top-left corner.


top_center The billboard origin is the center of the top edge.
top_right The billboard origin is the top-right corner.
center_left The billboard origin is the center of the left edge.
center The billboard origin is the center.
center_right
The billboard origin is the center of the right edge.
bottom_left
The billboard origin is the bottom-left corner.
bottom_center
The billboard origin is the center of the bottom edge.
bottom_right
The billboard origin is the bottom-right corner.

billboard_rotation_type


By default, billboard particles rotate their texture coordinates in accordance with the
particle rotation. But rotating the texture coordinates has some disadvantages: e.g. the
corners of the texture are lost after rotation, and the corners of the billboard are filled
with unwanted texture area when using a wrap address mode or sub-texture sampling.
This setting allows you to specify another rotation type.

format: billboard_rotation_type <vertex|texcoord>


example: billboard_rotation_type vertex
default: texcoord

The options for this parameter are:

vertex Billboard particles rotate their vertices around their facing direction in accor-
dance with the particle rotation. Rotating the vertices guarantees that the texture
corners exactly match the billboard corners, thus avoiding the disadvantages
mentioned above, but it takes more time to generate the vertices.
texcoord Billboard particles rotate their texture coordinates in accordance with the par-
ticle rotation. Rotating the texture coordinates is faster than rotating the
vertices, but has the disadvantages mentioned above.

common_direction
Only required if [billboard type], page 125 is set to oriented_common or
perpendicular_common, this vector is the common direction vector used to orient all
particles in the system.

format: common_direction <x> <y> <z>


example: common_direction 0 -1 0
default: 0 0 1

See also: Section 3.3.2 [Particle Emitters], page 130, Section 3.3.5 [Particle Affectors],
page 137

common_up_vector
Only required if [billboard type], page 125 is set to perpendicular_self or
perpendicular_common, this vector is the common up vector used to orient all particles
in the system.

format: common_up_vector <x> <y> <z>


example: common_up_vector 0 1 0
default: 0 1 0

See also: Section 3.3.2 [Particle Emitters], page 130, Section 3.3.5 [Particle Affectors],
page 137
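
Combining the last few attributes, here is a hedged sketch of flat, ground-aligned rings
(the material name and emitter values are hypothetical):

    particle_system Examples/GroundRings
    {
        material Examples/Ring
        billboard_type perpendicular_common
        common_direction 0 1 0    // particles lie flat, facing up
        common_up_vector 0 0 1    // fixes their in-plane orientation

        emitter Point
        {
            emission_rate 2
            time_to_live 4
            direction 0 1 0
            velocity 0
        }
    }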

point_rendering
This is actually an attribute of the 'billboard' particle renderer (the default), and sets
whether or not the BillboardSet will use point rendering rather than manually generated
quads.

By default a BillboardSet is rendered by generating geometry for a textured quad in
memory, taking into account the size and orientation settings, and uploading it to the
video card. The alternative is to use hardware point rendering, which means that only one
position needs to be sent per billboard rather than 4, and the hardware sorts out how this
is rendered based on the render state.

Using point rendering is faster than generating quads manually, but is more restrictive.
The following restrictions apply:
• Only the ’point’ orientation type is supported
• Size and appearance of each particle is controlled by the material pass ([point size],
page 44, [point size attenuation], page 45, [point sprites], page 44)
• Per-particle size is not supported (stems from the above)


• Per-particle rotation is not supported, and this can only be controlled through texture
unit rotation in the material definition
• Only ’center’ origin is supported
• Some drivers have an upper limit on the size of points they support - this can even
vary between APIs on the same card! Don’t rely on point sizes that cause the point
sprites to get very large on screen, since they may get clamped on some cards. Upper
sizes can range from 64 to 256 pixels.

If you use this option you will almost certainly want to enable both point attenuation
and point sprites in your material pass.
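
A hedged sketch of a system using point rendering; note that particle size comes from
the (hypothetical) material, whose pass should set point_sprites, point_size and
point_size_attenuation:

    particle_system Examples/PointSparks
    {
        material Examples/SparkPoints    // pass enables point_sprites, sets point_size
        point_rendering true
        quota 5000

        emitter Point
        {
            emission_rate 200
            time_to_live 2
            direction 0 1 0
            velocity 100
        }
    }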

accurate_facing
This is actually an attribute of the 'billboard' particle renderer (the default), and sets
whether or not the BillboardSet will use a slower but more accurate calculation for facing
the billboard to the camera. By default it uses the camera direction, which is faster but
means the billboards don't stay in the same orientation as you rotate the camera. The
'accurate_facing true' option makes the calculation based on a vector from each billboard
to the camera, which means the orientation is constant even whilst the camera rotates.

format: accurate_facing on|off


default: accurate_facing off

iteration_interval
Usually particle systems are updated based on the frame rate; however, this can give variable
results with more extreme frame rate ranges, particularly at lower frame rates. You can use
this option to make the update frequency a fixed interval, whereby at lower frame rates
the particle update will be repeated at the fixed interval until the frame time is used up. A
value of 0 means the default frame-time iteration.

format: iteration_interval <secs>


example: iteration_interval 0.01
default: iteration_interval 0

nonvisible update timeout


Sets when the particle system should stop updating after it hasn’t been visible for a while.
By default, visible particle systems update all the time, even when not in view. This means
that they are guaranteed to be consistent when they do enter view. However, this comes at
a cost, updating particle systems can be expensive, especially if they are perpetual.

This option lets you set a ’timeout’ on the particle system, so that if it isn’t visible for
this amount of time, it will stop updating until it is next visible. A value of 0 disables the
timeout and always updates.

format: nonvisible update timeout <secs>


example: nonvisible update timeout 10
default: nonvisible update timeout 0
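A sketch of how these system-level options might be combined (values and the material name are illustrative; note that in actual script files the attribute names are written with underscores):

```
particle_system Examples/Sparks
{
	material            Examples/Spark
	quota               200
	billboard_type      point

	// slower but rotation-stable facing calculation
	accurate_facing     on

	// fixed 10ms update steps regardless of frame rate
	iteration_interval  0.01

	// stop updating 5 seconds after the system leaves view
	nonvisible_update_timeout 5
}
```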

3.3.2 Particle Emitters


Particle emitters are classified by ’type’ e.g. ’Point’ emitters emit from a single point
whilst ’Box’ emitters emit randomly from an area. New emitters can be added to Ogre
by creating plugins. You add an emitter to a system by nesting another section within
it, headed with the keyword ’emitter’ followed by the name of the type of emitter (case
sensitive). Ogre currently supports ’Point’, ’Box’, ’Cylinder’, ’Ellipsoid’, ’HollowEllipsoid’
and ’Ring’ emitters.

It is also possible to ’emit emitters’ - that is, have new emitters spawned based on the
position of particles. See [Emitting Emitters], page 137

Particle Emitter Universal Attributes


• [angle], page 131
• [colour], page 131
• [colour range start], page 131
• [colour range end], page 131
• [direction], page 132
• [emission rate], page 132
• [position], page 132
• [velocity], page 132
• [velocity min], page 133
• [velocity max], page 133
• [time to live], page 133
• [time to live min], page 133

• [time to live max], page 133


• [duration], page 133
• [duration min], page 134
• [duration max], page 134
• [repeat delay], page 134
• [repeat delay min], page 134
• [repeat delay max], page 134

See also: Section 3.3 [Particle Scripts], page 121, Section 3.3.5 [Particle Affectors], page 137

3.3.3 Particle Emitter Attributes


This section describes the common attributes of all particle emitters. Specific emitter types
may also support their own extra attributes.

angle
Sets the maximum angle (in degrees) which emitted particles may deviate from the direction
of the emitter (see direction). Setting this to 10 allows particles to deviate up to 10 degrees
in any direction away from the emitter’s direction. A value of 180 means emit in any
direction, whilst 0 means emit always exactly in the direction of the emitter.

format: angle <degrees>


example: angle 30
default: 0

colour
Sets a static colour for all particles emitted. Also see the colour range start and
colour range end attributes for setting a range of colours. The format of the colour
parameter is "r g b a", where each component is a value from 0 to 1, and the alpha value
is optional (assumes 1 if not specified).

format: colour <r> <g> <b> [<a>]


example: colour 1 0 0 1
default: 1 1 1 1

colour range start & colour range end


As the ’colour’ attribute, except these 2 attributes must be specified together, and indicate
the range of colours available to emitted particles. The actual colour will be randomly

chosen between these 2 values.

format: as colour
example (generates random colours between red and blue):
colour range start 1 0 0
colour range end 0 0 1
default: both 1 1 1 1

direction
Sets the direction of the emitter. This is relative to the SceneNode which the particle system
is attached to, meaning that as with other movable objects changing the orientation of the
node will also move the emitter.

format: direction <x> <y> <z>


example: direction 0 1 0
default: 1 0 0

emission rate
Sets how many particles per second should be emitted. The emitter does not have to emit
these in a continuous stream - this is a relative parameter, and the emitter may, for example,
choose to emit a whole second's worth of particles every half-second; the exact behaviour
depends on the emitter. The emission rate is also limited by the particle system's
'quota' setting.

format: emission rate <particles per second>


example: emission rate 50
default: 10

position
Sets the position of the emitter relative to the SceneNode the particle system is attached
to.

format: position <x> <y> <z>


example: position 10 0 40
default: 0 0 0

velocity
Sets a constant velocity for all particles at emission time. See also the velocity min and
velocity max attributes which allow you to set a range of velocities instead of a fixed one.

format: velocity <world units per second>


example: velocity 100
default: 1

velocity min & velocity max


As ’velocity’ except these attributes set a velocity range and each particle is emitted with
a random velocity within this range.

format: as velocity
example:
velocity min 50
velocity max 100
default: both 1

time to live
Sets the number of seconds each particle will ’live’ for before being destroyed. NB it is
possible for particle affectors to alter this in flight, but this is the value given to particles
on emission. See also the time to live min and time to live max attributes which let you
set a lifetime range instead of a fixed one.

format: time to live <seconds>


example: time to live 10
default: 5

time to live min & time to live max


As time to live, except this sets a range of lifetimes and each particle gets a random value
in-between on emission.

format: as time to live


example:
time to live min 2
time to live max 5
default: both 5

duration
Sets the number of seconds the emitter is active. The emitter can be started again, see
[repeat delay], page 134. A value of 0 means infinite duration. See also the duration min
and duration max attributes which let you set a duration range instead of a fixed one.

format: duration <seconds>


example:
duration 2.5
default: 0

duration min & duration max


As duration, except these attributes set a variable time range between the min and max
values each time the emitter is started.

format: as duration
example:
duration min 2
duration max 5
default: both 0

repeat delay
Sets the number of seconds to wait before the emission is repeated when stopped by a limited
[duration], page 133. See also the repeat delay min and repeat delay max attributes which
allow you to set a range of repeat delays instead of a fixed one.

format: repeat delay <seconds>


example:
repeat delay 2.5
default: 0

repeat delay min & repeat delay max


As repeat delay, except this sets a range of repeat delays and each time the emitter is
started it gets a random value in-between.

format: as repeat delay


example:

repeat delay min 2
repeat delay max 5
default: both 0
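Putting several of the common attributes above together, a complete emitter section might look like this sketch (values are illustrative; note that in actual script files the attribute names are written with underscores):

```
emitter Point
{
	// deviate up to 15 degrees from the direction vector
	angle               15
	emission_rate       75
	direction           0 1 0
	velocity_min        50
	velocity_max        100
	time_to_live_min    2
	time_to_live_max    5
	colour_range_start  1 0 0
	colour_range_end    0 0 1
	// emit for 3 seconds, then pause for 1-2 seconds
	duration            3
	repeat_delay_min    1
	repeat_delay_max    2
}
```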

See also: Section 3.3.4 [Standard Particle Emitters], page 135, Section 3.3 [Particle
Scripts], page 121, Section 3.3.5 [Particle Affectors], page 137

3.3.4 Standard Particle Emitters


Ogre comes preconfigured with a few particle emitters. New ones can be added by creating
plugins: see the Plugin ParticleFX project as an example of how you would do this (this is
where these emitters are implemented).
• [Point Emitter], page 135
• [Box Emitter], page 135
• [Cylinder Emitter], page 136
• [Ellipsoid Emitter], page 136
• [Hollow Ellipsoid Emitter], page 136
• [Ring Emitter], page 137

Point Emitter
This emitter emits particles from a single point, which is its position. This emitter has no
additional attributes over and above the standard emitter attributes.

To create a point emitter, include a section like this within your particle system script:

emitter Point
{
// Settings go here
}

Please note that the name of the emitter (’Point’) is case-sensitive.

Box Emitter
This emitter emits particles from a random location within a 3-dimensional box. Its extra
attributes are:

width Sets the width of the box (this is the size of the box along its local X axis,
which is dependent on the ’direction’ attribute which forms the box’s local Z).

format: width <units>


example: width 250
default: 100

height Sets the height of the box (this is the size of the box along its local Y axis,
which is dependent on the ’direction’ attribute which forms the box’s local Z).
format: height <units>
example: height 250
default: 100

depth Sets the depth of the box (this is the size of the box along its local Z axis,
which is the same as the ’direction’ attribute).
format: depth <units>
example: depth 250
default: 100

To create a box emitter, include a section like this within your particle system script:
emitter Box
{
// Settings go here
}

Cylinder Emitter
This emitter emits particles in a random direction from within a cylinder area, where the
cylinder is oriented along the Z-axis. This emitter has exactly the same parameters as the
[Box Emitter], page 135 so there are no additional parameters to consider here - the width
and height determine the shape of the cylinder along its axis (if they differ, the cross-section
is elliptical), and the depth determines the length of the cylinder.

Ellipsoid Emitter
This emitter emits particles from within an ellipsoid shaped area, i.e. a sphere or squashed-
sphere area. The parameters are again identical to the [Box Emitter], page 135, except that
the dimensions describe the widest points along each of the axes.

Hollow Ellipsoid Emitter


This emitter is just like [Ellipsoid Emitter], page 136 except that there is a hollow area in
the centre of the ellipsoid from which no particles are emitted. Therefore it has 3 extra
parameters in order to define this area:
inner width
The width of the inner area which does not emit any particles.
inner height
The height of the inner area which does not emit any particles.

inner depth
The depth of the inner area which does not emit any particles.
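A sketch of a hollow ellipsoid emitter section (dimensions are illustrative):

```
emitter HollowEllipsoid
{
	// outer dimensions, as for the Box emitter
	width        100
	height       100
	depth        100
	// hollow core from which no particles are emitted
	inner_width  50
	inner_height 50
	inner_depth  50
}
```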

Ring Emitter
This emitter emits particles from a ring-shaped area, i.e. a little like [Hollow Ellipsoid
Emitter], page 136 except only in 2 dimensions.
inner width
The width of the inner area which does not emit any particles.
inner height
The height of the inner area which does not emit any particles.

See also: Section 3.3 [Particle Scripts], page 121, Section 3.3.2 [Particle Emitters],
page 130

Emitting Emitters
It is possible to spawn new emitters on the expiry of particles, for example to produce
'firework'-style effects. This is controlled via the following directives:
emit emitter quota
This parameter is a system-level parameter telling the system how many emitted
emitters may be in use at any one time. This is just to allow for the space
allocation process.
name This parameter is an emitter-level parameter, giving a name to an emitter. This
can then be referred to in another emitter as the new emitter type to spawn
when an emitted particle dies.
emit emitter
This is an emitter-level parameter, and if specified, it means that when particles
emitted by this emitter die, they spawn a new emitter of the named type.
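A sketch of how these directives fit together - a launcher whose particles spawn a burst emitter when they die (names and values are illustrative):

```
particle_system Examples/Fireworks
{
	material           Examples/Flare
	quota              1000
	// up to 10 emitted emitters active at once
	emit_emitter_quota 10

	// launcher: its particles spawn an 'explosion' emitter on death
	emitter Point
	{
		emit_emitter  explosion
		direction     0 1 0
		velocity      100
		time_to_live  2
	}

	// the emitter spawned at each dying particle's position
	emitter Point
	{
		name          explosion
		angle         180
		emission_rate 100
		velocity      50
		time_to_live  1
		duration      0.1
	}
}
```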

3.3.5 Particle Affectors


Particle affectors modify particles over their lifetime. They are classified by ’type’ e.g.
’LinearForce’ affectors apply a force to all particles, whilst ’ColourFader’ affectors alter the
colour of particles in flight. New affectors can be added to Ogre by creating plugins. You
add an affector to a system by nesting another section within it, headed with the keyword
'affector' followed by the name of the type of affector (case sensitive). The affectors currently
supplied with Ogre are listed in Section 3.3.6 [Standard Particle Affectors], page 138.

Particle affectors actually have no universal attributes; they are all specific to the type
of affector.

See also: Section 3.3.6 [Standard Particle Affectors], page 138, Section 3.3 [Particle
Scripts], page 121, Section 3.3.2 [Particle Emitters], page 130

3.3.6 Standard Particle Affectors


Ogre comes preconfigured with a few particle affectors. New ones can be added by creating
plugins: see the Plugin ParticleFX project as an example of how you would do this (this is
where these affectors are implemented).
• [Linear Force Affector], page 138
• [ColourFader Affector], page 139
• [ColourFader2 Affector], page 139
• [Scaler Affector], page 141
• [Rotator Affector], page 141
• [ColourInterpolator Affector], page 142
• [ColourImage Affector], page 143
• [DeflectorPlane Affector], page 143
• [DirectionRandomiser Affector], page 144

Linear Force Affector


This affector applies a force vector to all particles to modify their trajectory. Can be used
for gravity, wind, or any other linear force. Its extra attributes are:

force vector
Sets the vector for the force to be applied to every particle. The magnitude of
this vector determines how strong the force is.
format: force vector <x> <y> <z>
example: force vector 50 0 -50
default: 0 -100 0 (a fair gravity effect)

force application
Sets the way in which the force vector is applied to particle momentum.
format: force application <add|average>
example: force application average
default: add
The options are:
average The resulting momentum is the average of the force vector and the
particle’s current motion. Is self-stabilising but the speed at which
the particle changes direction is non-linear.
add The resulting momentum is the particle’s current motion plus the
force vector. This is traditional force acceleration but can poten-
tially result in unlimited velocity.

To create a linear force affector, include a section like this within your particle system script:

affector LinearForce
{
// Settings go here
}
Please note that the name of the affector type (’LinearForce’) is case-sensitive.
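A filled-in sketch, combining a sideways 'wind' with a gravity-style downward pull (values are illustrative; attribute names use underscores in actual script files):

```
affector LinearForce
{
	// constant wind along +X plus a downward pull
	force_vector      20 -90 0
	// traditional additive acceleration
	force_application add
}
```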

ColourFader Affector
This affector modifies the colour of particles in flight. Its extra attributes are:
red Sets the adjustment to be made to the red component of the particle colour per
second.
format: red <delta value>
example: red -0.1
default: 0

green Sets the adjustment to be made to the green component of the particle colour
per second.
format: green <delta value>
example: green -0.1
default: 0

blue Sets the adjustment to be made to the blue component of the particle colour
per second.
format: blue <delta value>
example: blue -0.1
default: 0

alpha Sets the adjustment to be made to the alpha component of the particle colour
per second.
format: alpha <delta value>
example: alpha -0.1
default: 0

To create a colour fader affector, include a section like this within your particle system
script:
affector ColourFader
{
// Settings go here
}

ColourFader2 Affector
This affector is similar to the [ColourFader Affector], page 139, except it introduces two
states of colour changes as opposed to just one. The second colour change state is activated
once a specified amount of time remains in the particles life.

red1 Sets the adjustment to be made to the red component of the particle colour per
second for the first state.
format: red <delta value>
example: red -0.1
default: 0

green1 Sets the adjustment to be made to the green component of the particle colour
per second for the first state.
format: green <delta value>
example: green -0.1
default: 0

blue1 Sets the adjustment to be made to the blue component of the particle colour
per second for the first state.
format: blue <delta value>
example: blue -0.1
default: 0

alpha1 Sets the adjustment to be made to the alpha component of the particle colour
per second for the first state.
format: alpha <delta value>
example: alpha -0.1
default: 0

red2 Sets the adjustment to be made to the red component of the particle colour per
second for the second state.
format: red <delta value>
example: red -0.1
default: 0

green2 Sets the adjustment to be made to the green component of the particle colour
per second for the second state.
format: green <delta value>
example: green -0.1
default: 0

blue2 Sets the adjustment to be made to the blue component of the particle colour
per second for the second state.
format: blue <delta value>
example: blue -0.1
default: 0

alpha2 Sets the adjustment to be made to the alpha component of the particle colour
per second for the second state.

format: alpha <delta value>


example: alpha -0.1
default: 0

state change
When a particle has this much time left to live, it will switch to state 2.
format: state change <seconds>
example: state change 2
default: 1

To create a ColourFader2 affector, include a section like this within your particle system
script:
affector ColourFader2
{
// Settings go here
}
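For instance, a sketch that slowly drifts particles towards yellow for most of their life, then fades them out rapidly over the final second (values are illustrative):

```
affector ColourFader2
{
	// state 1: slowly remove blue, drifting towards yellow
	blue1   -0.25

	// state 2: fade out completely
	red2    -1
	green2  -1
	blue2   -1
	alpha2  -1

	// switch to state 2 when 1 second of life remains
	state_change 1
}
```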

Scaler Affector
This affector scales particles in flight. Its extra attributes are:
rate The amount by which to scale the particles in both the x and y direction per
second.
To create a scale affector, include a section like this within your particle system script:
affector Scaler
{
// Settings go here
}

Rotator Affector
This affector rotates particles in flight. This is done by rotating the texture. Its extra
attributes are:
rotation speed range start
The start of a range of rotation speeds to be assigned to emitted particles.
format: rotation speed range start <degrees per second>
example: rotation speed range start 90
default: 0

rotation speed range end


The end of a range of rotation speeds to be assigned to emitted particles.
format: rotation speed range end <degrees per second>
example: rotation speed range end 180
default: 0

rotation range start


The start of a range of rotation angles to be assigned to emitted particles.
format: rotation range start <degrees>
example: rotation range start 0
default: 0

rotation range end


The end of a range of rotation angles to be assigned to emitted particles.
format: rotation range end <degrees>
example: rotation range end 360
default: 0

To create a rotate affector, include a section like this within your particle system script:
affector Rotator
{
// Settings go here
}

ColourInterpolator Affector
Similar to the ColourFader and ColourFader2 affectors, this affector modifies the colour
of particles in flight, except it has a variable number of defined stages. It swaps the particle
colour for several stages in the life of a particle and interpolates between them. Its extra
attributes are:
time0 The point in time of stage 0.
format: time0 <0-1 based on lifetime>
example: time0 0
default: 1

colour0 The colour at stage 0.


format: colour0 <r> <g> <b> [<a>]
example: colour0 1 0 0 1
default: 0.5 0.5 0.5 0.0

time1 The point in time of stage 1.


format: time1 <0-1 based on lifetime>
example: time1 0.5
default: 1

colour1 The colour at stage 1.


format: colour1 <r> <g> <b> [<a>]
example: colour1 0 1 0 1
default: 0.5 0.5 0.5 0.0

time2 The point in time of stage 2.


format: time2 <0-1 based on lifetime>
example: time2 1
default: 1

colour2 The colour at stage 2.


format: colour2 <r> <g> <b> [<a>]
example: colour2 0 0 1 1
default: 0.5 0.5 0.5 0.0

[...]
The number of stages is variable, with a maximum of 6: time5 and colour5 are the last
possible parameters. To create a colour interpolation affector, include
a section like this within your particle system script:
affector ColourInterpolator
{
// Settings go here
}
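For example, a sketch interpolating particles from red through green to blue over their lifetime:

```
affector ColourInterpolator
{
	// at birth: red
	time0   0
	colour0 1 0 0 1
	// at half-life: green
	time1   0.5
	colour1 0 1 0 1
	// at death: blue
	time2   1
	colour2 0 0 1 1
}
```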

ColourImage Affector
This is another affector that modifies the colour of particles in flight, but instead of pro-
grammatically defining colours, the colours are taken from a specified image file. The range
of colour values begins at the left side of the image and moves to the right over the lifetime
of the particle, therefore only the horizontal dimension of the image is used. Its extra
attributes are:
image The name of the image file from which the particle colours are taken.
format: image <image name>
example: image rainbow.png
default: none

To create a ColourImage affector, include a section like this within your particle system
script:
affector ColourImage
{
// Settings go here
}

DeflectorPlane Affector
This affector defines a plane which deflects particles which collide with it. The attributes
are:
plane point
A point on the deflector plane. Together with the normal vector it defines the
plane.

default: plane point 0 0 0

plane normal
The normal vector of the deflector plane. Together with the point it defines the
plane.
default: plane normal 0 1 0

bounce The amount of bouncing when a particle is deflected. 0 means no deflection


and 1 stands for 100 percent reflection.
default: bounce 1.0
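A sketch of a 'floor' plane at y = 0 that particles bounce off with half their speed (values are illustrative):

```
affector DeflectorPlane
{
	// horizontal plane through the origin, facing up
	plane_point  0 0 0
	plane_normal 0 1 0
	// particles retain half their velocity on each bounce
	bounce       0.5
}
```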

DirectionRandomiser Affector
This affector applies randomness to the movement of the particles. Its extra attributes are:
randomness
The amount of randomness to introduce in each axial direction.
example: randomness 5
default: randomness 1

scope The percentage of particles affected in each run of the affector.


example: scope 0.5
default: scope 1.0

keep velocity
Determines whether the velocity of particles is unchanged.
example: keep velocity true
default: keep velocity false
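For example, a sketch applying mild turbulence to half the particles each update while preserving their speed (values are illustrative):

```
affector DirectionRandomiser
{
	// amount of random deviation per axis
	randomness    5
	// affect 50% of particles each run
	scope         0.5
	// change direction only, not speed
	keep_velocity true
}
```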

3.4 Overlay Scripts


Overlay scripts offer you the ability to define overlays in a script which can be reused
easily. Whilst you could set up all overlays for a scene in code using the methods of the
SceneManager, Overlay and OverlayElement classes, in practice it’s a bit unwieldy. Instead
you can store overlay definitions in text files which can then be loaded whenever required.

Loading scripts
Overlay scripts are loaded at initialisation time by the system: by default it looks in all
common resource locations (see Root::addResourceLocation) for files with the ’.overlay’
extension and parses them. If you want to parse files with a different extension, use the
OverlayManager::getSingleton().parseAllSources method with your own extension, or if you
want to parse an individual file, use OverlayManager::getSingleton().parseScript.

Format
Several overlays may be defined in a single script. The script format is pseudo-C++, with
sections delimited by curly braces ({}), comments indicated by starting a line with '//' (note
that nested-form comments are not allowed), and inheritance through the use of templates.
The general format is shown below in a typical example:
// The name of the overlay comes first
MyOverlays/ANewOverlay
{
zorder 200

container Panel(MyOverlayElements/TestPanel)
{
// Center it horizontally, put it at the top
left 0.25
top 0
width 0.5
height 0.1
material MyMaterials/APanelMaterial

// Another panel nested in this one


container Panel(MyOverlayElements/AnotherPanel)
{
left 0
top 0
width 0.1
height 0.1
material MyMaterials/NestedPanel
}
}

}
The above example defines a single overlay called ’MyOverlays/ANewOverlay’, with
2 panels in it, one nested under the other. It uses relative metrics (the default if no
metrics mode option is found).

Every overlay in the script must be given a name, which is the line before the first
opening '{'. This name must be globally unique. It can include path characters (as in the
example) to logically divide up your overlays, and also to avoid duplicate names, but the
engine does not treat the name as hierarchical, just as a string. Within the braces are the
properties of the overlay, and any nested elements. The overlay itself only has a single
property ’zorder’ which determines how ’high’ it is in the stack of overlays if more than one
is displayed at the same time. Overlays with higher zorder values are displayed on top.

Adding elements to the overlay


Within an overlay, you can include any number of 2D or 3D elements. You do this by
defining a nested block headed by:
'element' if you want to define a 2D element which cannot have children of its own
’container’ if you want to define a 2D container object (which may itself have nested con-
tainers or elements)

The element and container blocks are pretty identical apart from their ability to store nested
blocks.

’container’ / ’element’ blocks


These are delimited by curly braces. The format for the header preceding the first brace is:

[container | element] <type name>(<instance name>) [: <template name>]


...

type name
Must resolve to the name of a OverlayElement type which has been registered
with the OverlayManager. Plugins register with the OverlayManager to ad-
vertise their ability to create elements, and at this time advertise the name of
the type. OGRE comes preconfigured with types ’Panel’, ’BorderPanel’ and
’TextArea’.
instance name
Must be a name unique among all other elements / containers by which to
identify the element. Note that you can obtain a pointer to any named element
by calling OverlayManager::getSingleton().getOverlayElement(name).
template name
Optional template on which to base this item. See templates.
The properties which can be included within the braces depend on the custom type.
However the following are always valid:
• [metrics mode], page 149
• [horz align], page 149
• [vert align], page 150
• [left], page 150
• [top], page 151
• [width], page 151
• [height], page 151
• [overlay material], page 152
• [caption], page 152

Templates
You can use templates to create numerous elements with the same properties. A template is
an abstract element and it is not added to an overlay. It acts as a base class that elements can
inherit and get its default properties. To create a template, the keyword ’template’ must be
the first word in the element definition (before container or element). The template element
is created in the topmost scope - it is NOT specified in an Overlay. It is recommended that
you define templates in a separate overlay script, though this is not essential. Having templates
defined in a separate file will allow different look & feels to be easily substituted.

Elements can inherit a template in a similar way to C++ inheritance - by using the :
operator on the element definition. The : operator is placed after the closing bracket of the
name (separated by a space). The name of the template to inherit is then placed after the
: operator (also separated by a space).

A template can contain template children which are created when the template is sub-
classed and instantiated. Using the template keyword for the children of a template is
optional but recommended for clarity, as the children of a template are always going to be
templates themselves.

template container BorderPanel(MyTemplates/BasicBorderPanel)


{
left 0
top 0
width 1
height 1

// setup the texture UVs for a borderpanel

// do this in a template so it doesn’t need to be redone everywhere


material Core/StatsBlockCenter
border_size 0.05 0.05 0.06665 0.06665
border_material Core/StatsBlockBorder
border_topleft_uv 0.0000 1.0000 0.1914 0.7969
border_top_uv 0.1914 1.0000 0.8086 0.7969
border_topright_uv 0.8086 1.0000 1.0000 0.7969
border_left_uv 0.0000 0.7969 0.1914 0.2148
border_right_uv 0.8086 0.7969 1.0000 0.2148
border_bottomleft_uv 0.0000 0.2148 0.1914 0.0000
border_bottom_uv 0.1914 0.2148 0.8086 0.0000
border_bottomright_uv 0.8086 0.2148 1.0000 0.0000
}
template container Button(MyTemplates/BasicButton) : MyTemplates/BasicBorderPanel
{

left 0.82
top 0.45
width 0.16
height 0.13
material Core/StatsBlockCenter
border_up_material Core/StatsBlockBorder/Up
border_down_material Core/StatsBlockBorder/Down
}
template element TextArea(MyTemplates/BasicText)
{
font_name Ogre
char_height 0.08
colour_top 1 1 0
colour_bottom 1 0.2 0.2
left 0.03
top 0.02
width 0.12
height 0.09
}

MyOverlays/AnotherOverlay
{
zorder 490
container BorderPanel(MyElements/BackPanel) : MyTemplates/BasicBorderPanel
{
left 0
top 0
width 1
height 1

container Button(MyElements/HostButton) : MyTemplates/BasicButton


{
left 0.82
top 0.45
caption MyTemplates/BasicText HOST
}

container Button(MyElements/JoinButton) : MyTemplates/BasicButton


{
left 0.82
top 0.60
caption MyTemplates/BasicText JOIN
}
}
}

The above example uses templates to define a button. Note that the button template
inherits from the borderPanel template. This reduces the number of attributes needed to
instantiate a button.

Also note that the instantiation of a Button needs a template name for the caption
attribute. So templates can also be used by elements that need dynamic creation of children
elements (the button creates a TextAreaElement in this case for its caption).

See Section 3.4.1 [OverlayElement Attributes], page 149, Section 3.4.2 [Standard Over-
layElements], page 153

3.4.1 OverlayElement Attributes


These attributes are valid within the braces of a ’container’ or ’element’ block in an overlay
script. They must each be on their own line. Ordering is unimportant.

metrics mode
Sets the units which will be used to size and position this element.

Format: metrics mode <pixels|relative>


Example: metrics mode pixels

This can be used to change the way that all measurement attributes in the rest of this
element are interpreted. In relative mode, they are interpreted as being a parametric value
from 0 to 1, as a proportion of the width / height of the screen. In pixels mode, they are
simply pixel offsets.

Default: metrics mode relative

horz align
Sets the horizontal alignment of this element, in terms of where the horizontal origin is.

Format: horz align <left|center|right>


Example: horz align center

This can be used to change where the origin is deemed to be for the purposes of any
horizontal positioning attributes of this element. By default the origin is deemed to be the

left edge of the screen, but if you change this you can center or right-align your elements.
Note that setting the alignment to center or right does not automatically force your elements
to appear in the center or the right edge, you just have to treat that point as the origin
and adjust your coordinates appropriately. This is more flexible because you can choose to
position your element anywhere relative to that origin. For example, if your element was
10 pixels wide, you would use a ’left’ property of -10 to align it exactly to the right edge,
or -20 to leave a gap but still make it stick to the right edge.

Note that you can use this property in both relative and pixel modes, but it is most
useful in pixel mode.

Default: horz align left
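For example, the right-edge alignment described above might be sketched like this for a 10-pixel-wide element (the element name is illustrative; note that attribute names use underscores in actual overlay scripts):

```
container Panel(MyOverlayElements/RightPanel)
{
	metrics_mode pixels
	horz_align   right
	// -10 places the 10-pixel-wide panel flush with the right edge
	left         -10
	top          0
	width        10
	height       10
}
```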

vert align
Sets the vertical alignment of this element, in terms of where the vertical origin is.

Format: vert align <top|center|bottom>


Example: vert align center

This can be used to change where the origin is deemed to be for the purposes of any
vertical positioning attributes of this element. By default the origin is deemed to be the
top edge of the screen, but if you change this you can center or bottom-align your elements.
Note that setting the alignment to center or bottom does not automatically force your
elements to appear in the center or the bottom edge, you just have to treat that point as
the origin and adjust your coordinates appropriately. This is more flexible because you
can choose to position your element anywhere relative to that origin. For example, if your
element was 50 pixels high, you would use a ’top’ property of -50 to align it exactly to the
bottom edge, or -70 to leave a gap but still make it stick to the bottom edge.

Note that you can use this property in both relative and pixel modes, but it is most
useful in pixel mode.

Default: vert align top

left
Sets the horizontal position of the element relative to its parent.

Format: left <value>


Example: left 0.5

Positions are relative to the parent (the top-left of the screen if the parent is an overlay,
the top-left of the parent otherwise) and are expressed in terms of a proportion of screen
size. Therefore 0.5 is half-way across the screen.

Default: left 0

top
Sets the vertical position of the element relative to its parent.

Format: top <value>


Example: top 0.5

Positions are relative to the parent (the top-left of the screen if the parent is an overlay,
the top-left of the parent otherwise) and are expressed in terms of a proportion of screen
size. Therefore 0.5 is half-way down the screen.

Default: top 0

width
Sets the width of the element as a proportion of the size of the screen.

Format: width <value>


Example: width 0.25

Sizes are relative to the size of the screen, so 0.25 is a quarter of the screen. Sizes are
not relative to the parent; this is common in windowing systems where the top and left are
relative but the size is absolute.

Default: width 1

height
Sets the height of the element as a proportion of the size of the screen.

Format: height <value>


Example: height 0.25

Sizes are relative to the size of the screen, so 0.25 is a quarter of the screen. Sizes are
not relative to the parent; this is common in windowing systems where the top and left are
relative but the size is absolute.

Default: height 1
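Taken together, the positioning and sizing attributes above could define a centred
container like this (names and values are illustrative):

```
container Panel(Illustrative/CentrePanel)
{
    left 0.25      // a quarter of the screen in from the left
    top 0.25       // a quarter of the screen down
    width 0.5      // half the screen wide
    height 0.5     // half the screen high
}
```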

material
Sets the name of the material to use for this element.

Format: material <name>


Example: material Examples/TestMaterial

This sets the base material which this element will use. Each type of element may inter-
pret this differently; for example the OGRE element ’Panel’ treats this as the background
of the panel, whilst ’BorderPanel’ interprets this as the material for the center area only.
Materials should be defined in .material scripts.

Note that using a material in an overlay element automatically disables lighting and depth
checking on this material. Therefore you should not use the same material as is used for
real 3D objects for an overlay.

Default: none

caption
Sets a text caption for the element.

Format: caption <string>


Example: caption This is a caption

Not all elements support captions, so each element is free to disregard this if it wants.
However, a general text caption is so common to many elements that it is included in the
generic interface to make it simpler to use. This is a common feature in GUI systems.

Default: blank

rotation
Sets the rotation of the element.

Format: rotation <angle in degrees> <axis x> <axis y> <axis z>

Example: rotation 30 0 0 1

Default: none

3.4.2 Standard OverlayElements


Although OGRE’s OverlayElement and OverlayContainer classes are designed to be ex-
tended by applications developers, there are a few elements which come as standard with
Ogre. These include:
• [Panel], page 153
• [BorderPanel], page 154
• [TextArea], page 154

This section describes how you define their custom attributes in an .overlay script, but
you can also change these custom properties in code if you wish. You do this by calling
setParameter(paramname, value). You may wish to use the StringConverter class to convert
your types to and from strings.

Panel (container)
This is the most bog-standard container you can use. It is a rectangular area which can
contain other elements (or containers) and may or may not have a background, which can
be tiled however you like. The background material is determined by the material attribute,
but is only displayed if transparency is off.

Attributes:
transparent <true | false>
If set to ’true’ the panel is transparent and is not rendered itself; it is just used
as a grouping level for its children.
tiling <layer> <x_tile> <y_tile>
Sets the number of times the texture(s) of the material are tiled across the panel
in the x and y direction. <layer> is the texture layer, from 0 to the number of
texture layers in the material minus one. By setting tiling per layer you can
create some nice multitextured backdrops for your panels; this works especially
well when you animate one of the layers.
uv_coords <topleft_u> <topleft_v> <bottomright_u> <bottomright_v>
Sets the texture coordinates to use for this panel.
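As a sketch, a full-screen tiled Panel might be declared like this (the container name
and material name are illustrative):

```
container Panel(Illustrative/Backdrop)
{
    left 0
    top 0
    width 1
    height 1
    material Examples/TestMaterial
    transparent false
    tiling 0 4 4        // tile texture layer 0 four times in each direction
}
```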

BorderPanel (container)
This is a slightly more advanced version of Panel, where instead of just a single flat panel, the
panel has a separate border which resizes with the panel. It does this by taking an approach
very similar to the use of HTML tables for bordered content: the panel is rendered as 9
square areas, with the center area being rendered with the main material (as with Panel)
and the outer 8 areas (the 4 corners and the 4 edges) rendered with a separate border
material. The advantage of rendering the corners separately from the edges is that the edge
textures can be designed so that they can be stretched without distorting them, meaning
the single texture can serve any size panel.

Attributes:
border_size <left> <right> <top> <bottom>
The size of the border at each edge, as a proportion of the size of the screen.
This lets you have different size borders at each edge if you like, or you can use
the same value 4 times to create a constant size border.
border_material <name>
The name of the material to use for the border. This is normally a different
material to the one used for the center area, because the center area is often
tiled which means you can’t put border areas in there. You must put all the
images you need for all the corners and the sides into a single texture.
border_topleft_uv <u1> <v1> <u2> <v2>
[also border_topright_uv, border_bottomleft_uv, border_bottomright_uv]; The
texture coordinates to be used for the corner areas of the border. 4 coordinates
are required, 2 for the top-left corner of the square, 2 for the bottom-right of
the square.
border_left_uv <u1> <v1> <u2> <v2>
[also border_right_uv, border_top_uv, border_bottom_uv]; The texture coordi-
nates to be used for the edge areas of the border. 4 coordinates are required,
2 for the top-left corner, 2 for the bottom-right. Note that you should design
the texture so that the left & right edges can be stretched / squashed vertically
and the top and bottom edges can be stretched / squashed horizontally without
detrimental effects.
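A sketch of a complete BorderPanel, assuming a border texture laid out as a 4x4 grid
with the corners in the outer cells (all names and coordinates are illustrative):

```
container BorderPanel(Illustrative/Window)
{
    left 0.1
    top 0.1
    width 0.8
    height 0.8
    material Examples/CentreMaterial          // centre area only
    border_size 0.02 0.02 0.02 0.02           // uniform border on all edges
    border_material Examples/BorderMaterial   // all 8 border cells from one texture
    border_topleft_uv     0.0  0.0  0.25 0.25
    border_topright_uv    0.75 0.0  1.0  0.25
    border_bottomleft_uv  0.0  0.75 0.25 1.0
    border_bottomright_uv 0.75 0.75 1.0  1.0
    border_left_uv        0.0  0.25 0.25 0.75
    border_right_uv       0.75 0.25 1.0  0.75
    border_top_uv         0.25 0.0  0.75 0.25
    border_bottom_uv      0.25 0.75 0.75 1.0
}
```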

TextArea (element)
This is a generic element that you can use to render text. It uses fonts which can be defined
in code using the FontManager and Font classes, or which have been predefined in .fontdef
files. See the font definitions section for more information.

Attributes:
font_name <name>
The name of the font to use. This font must be defined in a .fontdef file to
ensure it is available at scripting time.
char_height <height>
The height of the letters as a proportion of the screen height. Character widths
may vary because OGRE supports proportional fonts, but will be based on this
constant height.
colour <red> <green> <blue>
A solid colour to render the text in. Often fonts are defined in monochrome, so
this allows you to colour them in nicely and use the same texture for multiple
different coloured text areas. The colour elements should all be expressed as
values between 0 and 1. If you use predrawn fonts which are already full colour
then you don’t need this.
colour_bottom <red> <green> <blue> / colour_top <red> <green> <blue>
As an alternative to a solid colour, you can colour the text differently at the
top and bottom to create a gradient colour effect which can be very effective.
alignment <left | center | right>
Sets the horizontal alignment of the text. This is different from the horz_align
parameter.
space_width <width>
Sets the width of a space in relation to the screen.
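Putting those attributes together, a sketch of a gradient-coloured TextArea (the
element and font names are illustrative; the font must exist in a .fontdef file):

```
element TextArea(Illustrative/Title)
{
    left 0.5
    top 0.1
    alignment center
    font_name MyFont
    char_height 0.05
    colour_top    1 1 0.7
    colour_bottom 1 0.5 0.2    // gradient from pale yellow down to orange
    caption Hello, world
}
```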

3.5 Font Definition Scripts


Ogre uses texture-based fonts to render the TextAreaOverlayElement. You can also use the
Font object for your own purpose if you wish. The final form of a font is a Material object
generated by the font, and a set of ’glyph’ (character) texture coordinate information.

There are 2 ways you can get a font into OGRE:


1. Design a font texture yourself using an art package or font generator tool
2. Ask OGRE to generate a font texture based on a truetype font
The former gives you the most flexibility and the best performance (in terms of startup
times), but the latter is convenient if you want to quickly use a font without having to
generate the texture yourself. I suggest prototyping using the latter and changing to
the former for your final solution.

All font definitions are held in .fontdef files, which are parsed by the system at startup
time. Each .fontdef file can contain multiple font definitions. The basic format of an entry
in the .fontdef file is:

<font_name>
{
type <image | truetype>
source <image file | truetype font file>
...
... custom attributes depending on type
}

Using an existing font texture


If you have one or more artists working with you, no doubt they can produce you a very
nice font texture. OGRE supports full colour font textures, or alternatively you can keep
them monochrome / greyscale and use TextArea’s colouring feature. Font textures should
always have an alpha channel, preferably an 8-bit alpha channel such as that supported by
TGA and PNG files, because it can result in much nicer edges. To use an existing texture,
here are the settings you need:
type image
This just tells OGRE you want a pre-drawn font.
source <filename>
This is the name of the image file you want to load. This will be loaded from
the standard TextureManager resource locations and can be of any type OGRE
supports, although JPEG is not recommended because of the lack of alpha and
the lossy compression. I recommend PNG format which has both good lossless
compression and an 8-bit alpha channel.
glyph <character> <u1> <v1> <u2> <v2>
This provides the texture coordinates for the specified character. You must
repeat this for every character you have in the texture. The first 2 numbers
are the x and y of the top-left corner, the second two are the x and y of the
bottom-right corner. Note that you really should use a common height for all
characters, but widths can vary because of proportional fonts.

’character’ is either an ASCII character for non-extended 7-bit ASCII, or for
extended glyphs, a unicode decimal value, which is identified by preceding the
number with a ’u’ - e.g. ’u0546’ denotes unicode value 546.
A note for Windows users: I recommend using BitmapFontBuilder
(http://www.lmnopc.com/bitmapfontbuilder/), a free tool which will generate
a texture and export character widths for you; you can find a tool for converting the
binary output from this into ’glyph’ lines in the Tools folder.
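A minimal .fontdef entry for a pre-drawn texture might look like this (the font name,
file name and texture coordinates are illustrative):

```
MyFont
{
    type image
    source myfont.png

    // one glyph line per character: <char> <u1> <v1> <u2> <v2>
    glyph A 0.0 0.0 0.1 0.125
    glyph B 0.1 0.0 0.2 0.125
    // ... and so on for every character in the texture
}
```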

Generating a font texture


You can also generate font textures on the fly using truetype fonts. I don’t recommend heavy
use of this in production work because rendering the texture can take several seconds per
font, which adds to the loading times. However, it is a very nice way of quickly getting text
output in a font of your choice.

Here are the attributes you need to supply:


type truetype
Tells OGRE to generate the texture from a font
source <ttf file>
The name of the ttf file to load. This will be searched for in the common
resource locations and in any resource locations added to FontManager.
size <size in points>
The size at which to generate the font, in standard points. Note this only
affects how big the characters are in the font texture, not how big they are on
the screen. You should tailor this depending on how large you expect to render
the fonts because generating a large texture will result in blurry characters
when they are scaled very small (because of the mipmapping), and conversely
generating a small font will result in blocky characters if large text is rendered.
resolution <dpi>
The resolution in dots per inch, this is used in conjunction with the point size
to determine the final size. 72 / 96 dpi is normal.
antialias_colour <true|false>
This is an optional flag, which defaults to ’false’. The generator will antialias
the font by default using the alpha component of the texture, which will look
fine if you use alpha blending to render your text (this is the default assumed
by TextAreaOverlayElement for example). If, however you wish to use a colour
based blend like add or modulate in your own code, you should set this to ’true’
so the colour values are anti-aliased too. If you set this to true and use alpha
blending, you’ll find the edges of your font are antialiased too quickly resulting
in a ’thin’ look to your fonts, because not only is the alpha blending the edges,
the colour is fading too. Leave this option at the default if in doubt.
code_points nn-nn [nn-nn] ..
This directive allows you to specify which unicode code points should be gen-
erated as glyphs into the font texture. If you don’t specify this, code points
33-166 will be generated by default which covers the basic Latin 1 glyphs. If
you use this flag, you should specify a space-separated list of inclusive code
point ranges of the form ’start-end’. Numbers must be decimal.
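For example, a truetype entry using the attributes above (font name and file name are
illustrative):

```
MyTTFFont
{
    type truetype
    source myfont.ttf
    size 16
    resolution 96
    // optional: cover Latin-1 plus the upper accented range
    code_points 33-166 192-255
}
```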

You can also create new fonts at runtime by using the FontManager if you wish.

4 Mesh Tools
There are a number of mesh tools available with OGRE to help you manipulate your meshes.

Section 4.1 [Exporters], page 158


For getting data out of modellers and into OGRE.
Section 4.2 [XmlConverter], page 159
For converting meshes and skeletons to/from XML.
Section 4.3 [MeshUpgrader], page 159
For upgrading binary meshes from one version of OGRE to another.

4.1 Exporters
Exporters are plugins to 3D modelling tools which write meshes and skeletal animation to
file formats which OGRE can use for realtime rendering. The files the exporters write end
in .mesh and .skeleton respectively.

Each exporter has to be written specifically for the modeller in question, although they all
use a common set of facilities provided by the classes MeshSerializer and SkeletonSerializer.
They also normally require you to own the modelling tool.

All the exporters here can be built from the source code, or you can download precom-
piled versions from the OGRE web site.

A Note About Modelling / Animation For OGRE


There are a few rules when creating an animated model for OGRE:
• You must have no more than 4 weighted bone assignments per vertex. If you have
more, OGRE will eliminate the lowest weighted assignments and re-normalise the other
weights. This limit is imposed by hardware blending limitations.
• All vertices must be assigned to at least one bone - assign static vertices to the root
bone.
• At the very least each bone must have a keyframe at the beginning and end of the
animation.

If you’re creating unanimated meshes, then you do not need to be concerned with the
above.
Full documentation for each exporter is provided along with the exporter itself,
and there is a list of the currently supported modelling tools in the OGRE Wiki at
http://www.ogre3d.org/wiki/index.php/Exporters.

4.2 XmlConverter
The OgreXmlConverter tool can convert binary .mesh and .skeleton files to XML and back
again - this is a very useful tool for debugging the contents of meshes, or for exchanging
mesh data easily - many of the modeller mesh exporters export to XML because it is simpler
to do, and OgreXmlConverter can then produce a binary from it. Besides simplicity, the
other advantage is that OgreXmlConverter can generate additional information for the
mesh, like bounding regions and level-of-detail reduction.

Syntax:
Usage: OgreXMLConverter sourcefile [destfile]
sourcefile = name of file to convert
destfile = optional name of file to write to. If you don’t
specify this OGRE works it out through the extension
and the XML contents if the source is XML. For example
test.mesh becomes test.xml, test.xml becomes test.mesh
if the XML document root is <mesh> etc.
When converting XML to .mesh, you will be prompted to (re)generate level-of-detail
(LOD) information for the mesh - you can choose to skip this part if you wish, but
doing it will allow you to make your mesh reduce in detail automatically when it is loaded
into the engine. The engine uses a complex algorithm to determine the best parts of the
mesh to reduce in detail depending on many factors such as the curvature of the surface,
the edges of the mesh and seams at the edges of textures and smoothing groups - taking
advantage of it is advised to make your meshes more scalable in real scenes.
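For instance, a typical round trip might look like this (the file names are
illustrative):

```
OgreXMLConverter robot.mesh robot.xml    # binary -> XML for inspection or editing
OgreXMLConverter robot.xml               # XML -> robot.mesh (document root is <mesh>)
```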

4.3 MeshUpgrader
This tool is provided to allow you to upgrade your meshes when the binary format changes
- sometimes we alter it to add new features and as such you need to keep your own assets
up to date. This tool has a very simple syntax:
OgreMeshUpgrade <oldmesh> <newmesh>
The OGRE release notes will notify you when this is necessary with a new release.

5 Hardware Buffers
Vertex buffers, index buffers and pixel buffers inherit most of their features from the Hard-
wareBuffer class. The general premise with a hardware buffer is that it is an area of memory
with which you can do whatever you like; there is no format (vertex or otherwise) associated
with the buffer itself - that is entirely up to interpretation by the methods that use it - in
that way, a HardwareBuffer is just like an area of memory you might allocate using ’malloc’
- the difference being that this memory is likely to be located in GPU or AGP memory.

5.1 The Hardware Buffer Manager


The HardwareBufferManager class is the factory hub of all the objects in the new ge-
ometry system. You create and destroy the majority of the objects you use to define
geometry through this class. It’s a Singleton, so you access it by doing HardwareBuffer-
Manager::getSingleton() - however be aware that it is only guaranteed to exist after the
RenderSystem has been initialised (after you call Root::initialise); this is because the ob-
jects created are invariably API-specific, although you will deal with them through one
common interface.

For example:
VertexDeclaration* decl = HardwareBufferManager::getSingleton().createVertexDeclaration();
HardwareVertexBufferSharedPtr vbuf =
HardwareBufferManager::getSingleton().createVertexBuffer(
3*sizeof(Real), // size of one whole vertex
numVertices, // number of vertices
HardwareBuffer::HBU_STATIC_WRITE_ONLY, // usage
false); // no shadow buffer
Don’t worry about the details of the above, we’ll cover that in the later sections. The im-
portant thing to remember is to always create objects through the HardwareBufferManager,
don’t use ’new’ (it won’t work anyway in most cases).

5.2 Buffer Usage


Because the memory in a hardware buffer is likely to be under significant contention during
the rendering of a scene, the kind of access you need to the buffer over the time it is used
is extremely important; whether you need to update the contents of the buffer regularly,
whether you need to be able to read information back from it, these are all important
factors to how the graphics card manages the buffer. The method and exact parameters
used to create a buffer depends on whether you are creating an index or vertex buffer
(See Section 5.6 [Hardware Vertex Buffers], page 163 and See Section 5.7 [Hardware Index
Buffers], page 168), however one creation parameter is common to them both - the ’usage’.

The most optimal type of hardware buffer is one which is not updated often, and is never
read from. The usage parameter of createVertexBuffer or createIndexBuffer can be one of
the following:

HBU_STATIC
This means you do not need to update the buffer very often, but you might
occasionally want to read from it.
HBU_STATIC_WRITE_ONLY
This means you do not need to update the buffer very often, and you do not
need to read from it. However, you may read from its shadow buffer if you set
one up (See Section 5.3 [Shadow Buffers], page 161). This is the optimal buffer
usage setting.
HBU_DYNAMIC
This means you expect to update the buffer often, and that you may wish to
read from it. This is the least optimal buffer setting.
HBU_DYNAMIC_WRITE_ONLY
This means you expect to update the buffer often, but that you never want
to read from it. However, you may read from its shadow buffer if you set
one up (See Section 5.3 [Shadow Buffers], page 161). If you use this option,
and replace the entire contents of the buffer every frame, then you should
use HBU_DYNAMIC_WRITE_ONLY_DISCARDABLE instead, since that has
better performance characteristics on some platforms.
HBU_DYNAMIC_WRITE_ONLY_DISCARDABLE
This means that you expect to replace the entire contents of the buffer on an ex-
tremely regular basis, most likely every frame. By selecting this option, you free
the system up from having to be concerned about losing the existing contents of
the buffer at any time, because if it does lose them, you will be replacing them
next frame anyway. On some platforms this can make a significant performance
difference, so you should try to use this whenever you have a buffer you need
to update regularly. Note that if you create a buffer this way, you should use
the HBL_DISCARD flag when locking the contents of it for writing.
Choosing the usage of your buffers carefully is important to getting optimal performance
out of your geometry. If you have a situation where you need to update a vertex buffer
often, consider whether you actually need to update all the parts of it, or just some. If it’s
the latter, consider using more than one buffer, with only the data you need to modify in
the HBU_DYNAMIC buffer.

Always try to use the WRITE_ONLY forms. This just means that you cannot read directly
from the hardware buffer, which is good practice because reading from hardware buffers is
very slow. If you really need to read data back, use a shadow buffer, described in the next
section.

5.3 Shadow Buffers


As discussed in the previous section, reading data back from a hardware buffer performs
very badly. However, if you have a cast-iron need to read the contents of the vertex buffer,
you should set the ’shadowBuffer’ parameter of createVertexBuffer or createIndexBuffer to
’true’. This causes the hardware buffer to be backed with a system memory copy, which
you can read from with no more penalty than reading ordinary memory. The catch is that
when you write data into this buffer, it will first update the system memory copy, then it
will update the hardware buffer, as separate copying process - therefore this technique has
an additional overhead when writing data. Don’t use it unless you really need it.
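A minimal sketch, based on the creation call shown in Section 5.1, of a shadowed buffer
you can read back from (numVertices is assumed to be defined elsewhere):

```cpp
// Create a vertex buffer backed by a system-memory shadow copy.
HardwareVertexBufferSharedPtr vbuf =
    HardwareBufferManager::getSingleton().createVertexBuffer(
        3*sizeof(Real),                        // size of one whole vertex
        numVertices,                           // number of vertices
        HardwareBuffer::HBU_STATIC_WRITE_ONLY, // usage
        true);                                 // use a shadow buffer

// Reads are now served from the shadow copy, not the card:
const Real* pVerts = static_cast<const Real*>(
    vbuf->lock(HardwareBuffer::HBL_READ_ONLY));
// ... inspect the vertex data ...
vbuf->unlock();
```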

5.4 Locking buffers


In order to read or update a hardware buffer, you have to ’lock’ it. This performs 2 functions
- it tells the card that you want access to the buffer (which can have an effect on its rendering
queue), and it returns a pointer which you can manipulate. Note that if you’ve asked to
read the buffer (and remember, you really shouldn’t unless you’ve set the buffer up with
a shadow buffer), the contents of the hardware buffer will have been copied into system
memory somewhere in order for you to get access to it. For the same reason, when you’re
finished with the buffer you must unlock it; if you locked the buffer for writing this will
trigger the process of uploading the modified information to the graphics hardware.

Lock parameters
When you lock a buffer, you call one of the following methods:
// Lock the entire buffer
pBuffer->lock(lockType);
// Lock only part of the buffer
pBuffer->lock(start, length, lockType);
The first call locks the entire buffer, the second locks only the section from ’start’ (as
a byte offset), for ’length’ bytes. This could be faster than locking the entire buffer since
less is transferred, but not if you later update the rest of the buffer too, because doing it in
small chunks like this means you cannot use HBL_DISCARD (see below).
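For instance, a sketch of updating just part of a buffer (vbuf is a buffer created as in
Section 5.1; firstVertex, vertexCount and newData are assumed from your own code):

```cpp
// Lock only the region covering the vertices being replaced, promising
// not to touch anything already used for rendering this frame.
size_t start  = firstVertex * 3 * sizeof(Real);   // byte offset into the buffer
size_t length = vertexCount * 3 * sizeof(Real);   // bytes to lock
void* pDest = vbuf->lock(start, length, HardwareBuffer::HBL_NO_OVERWRITE);
memcpy(pDest, newData, length);
vbuf->unlock();   // triggers upload of the modified region to the card
```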

The lockType parameter can have a large effect on the performance of your application,
especially if you are not using a shadow buffer.
HBL_NORMAL
This kind of lock allows reading and writing from the buffer - it’s also the least
optimal because basically you’re telling the card you could be doing anything at
all. If you’re not using a shadow buffer, it requires the buffer to be transferred
from the card and back again. If you’re using a shadow buffer the effect is
minimal.
HBL_READ_ONLY
This means you only want to read the contents of the buffer. Best used when
you created the buffer with a shadow buffer because in that case the data does
not have to be downloaded from the card.
HBL_DISCARD
This means you are happy for the card to discard the entire current contents
of the buffer. Implicitly this means you are not going to read the data - it
also means that the card can avoid any stalls if the buffer is currently being
rendered from, because it will actually give you an entirely different one. Use
this wherever possible when you are locking a buffer which was not created with
a shadow buffer. If you are using a shadow buffer it matters less, although with
a shadow buffer it’s preferable to lock the entire buffer at once, because that
allows the shadow buffer to use HBL_DISCARD when it uploads the updated
contents to the real buffer.

HBL_NO_OVERWRITE
This is useful if you are locking just part of the buffer and thus cannot use
HBL_DISCARD. It tells the card that you promise not to modify any section
of the buffer which has already been used in a rendering operation this frame.
Again this is only useful on buffers with no shadow buffer.

Once you have locked a buffer, you can use the pointer returned however you wish (just
don’t bother trying to read the data that’s there if you’ve used HBL_DISCARD, or write
the data if you’ve used HBL_READ_ONLY). Modifying the contents depends on the type of
buffer, See Section 5.6 [Hardware Vertex Buffers], page 163 and See Section 5.7 [Hardware
Index Buffers], page 168

5.5 Practical Buffer Tips


The interplay of usage mode on creation, and locking options when reading / updating is
important for performance. Here’s some tips:
1. Aim for the ’perfect’ buffer by creating with HBU_STATIC_WRITE_ONLY, with no
shadow buffer, and locking all of it once only with HBL_DISCARD to populate it.
Never touch it again.
2. If you need to update a buffer regularly, you will have to compromise. Use
HBU_DYNAMIC_WRITE_ONLY when creating (still no shadow buffer),
and use HBL_DISCARD to lock the entire buffer, or if you can’t then use
HBL_NO_OVERWRITE to lock parts of it.
3. If you really need to read data from the buffer, create it with a shadow buffer. Make
sure you use HBL_READ_ONLY when locking for reading because it will avoid the
upload normally associated with unlocking the buffer. You can also combine this with
either of the 2 previous points - obviously try for static if you can. Remember that the
’WRITE_ONLY’ part refers to the hardware buffer, so it can safely be used with a
shadow buffer you read from.
4. Split your vertex buffers up if you find that your usage patterns for different elements
of the vertex are different. There is no point having one huge updateable buffer with all
the vertex data in it if all you need to update is the texture coordinates. Split that part
out into its own buffer and make the rest HBU_STATIC_WRITE_ONLY.

5.6 Hardware Vertex Buffers


This section covers specialised hardware buffers which contain vertex data. For a general
discussion of hardware buffers, along with the rules for creating and locking them, see the
Chapter 5 [Hardware Buffers], page 160 section.

5.6.1 The VertexData class


The VertexData class collects together all the vertex-related information used to render
geometry. The new RenderOperation requires a pointer to a VertexData object, and it is
also used in Mesh and SubMesh to store the vertex positions, normals, texture coordinates
etc. VertexData can either be used alone (in order to render unindexed geometry, where
the stream of vertices defines the triangles), or in combination with IndexData where the
triangles are defined by indexes which refer to the entries in VertexData.

It’s worth noting that you don’t necessarily have to use VertexData to store your
application’s geometry; all that is required is that you can build a VertexData structure
comes to rendering. This is pretty easy since all of VertexData’s members are pointers, so
you could maintain your vertex buffers and declarations in alternative structures if you like,
so long as you can convert them for rendering.

The VertexData class has a number of important members:


vertexStart
The position in the bound buffers to start reading vertex data from. This allows
you to use a single buffer for many different renderables.
vertexCount
The number of vertices to process in this particular rendering group
vertexDeclaration
A pointer to a VertexDeclaration object which defines the format of the vertex
input; note this is created for you by VertexData. See Section 5.6.2 [Vertex
Declarations], page 164
vertexBufferBinding
A pointer to a VertexBufferBinding object which defines which vertex buffers
are bound to which sources - again, this is created for you by VertexData. See
Section 5.6.3 [Vertex Buffer Bindings], page 166

5.6.2 Vertex Declarations


Vertex declarations define the vertex inputs used to render the geometry you want to appear
on the screen. Basically this means that for each vertex, you want to feed a certain set of
data into the graphics pipeline, which (you hope) will affect how it all looks when the
triangles are drawn. Vertex declarations let you pull items of data (which we call vertex
elements, represented by the VertexElement class) from any number of buffers, both shared
and dedicated to that particular element. It’s your job to ensure that the contents of the
buffers make sense when interpreted in the way that your VertexDeclaration indicates that
they should.

To add an element to a VertexDeclaration, you call its addElement method. The parameters
to this method are:
source This tells the declaration which buffer the element is to be pulled from. Note
that this is just an index, which may range from 0 to one less than the number
of buffers which are being bound as sources of vertex data. See Section 5.6.3
[Vertex Buffer Bindings], page 166 for information on how a real buffer is bound
to a source index. Storing the source of the vertex element this way (rather
than using a buffer pointer) allows you to rebind the source of a vertex very
easily, without changing the declaration of the vertex format itself.
offset Tells the declaration how far in bytes the element is offset from the start of
each whole vertex in this buffer. This will be 0 if this is the only element being
sourced from this buffer, but if other elements are there then it may be higher.
A good way of thinking of this is the size of all vertex elements which precede
this element in the buffer.
type This defines the data type of the vertex input, including its size. This is
an important element because as GPUs become more advanced, we can no
longer assume that position input will always require 3 floating point numbers,
because programmable vertex pipelines allow full control over the inputs and
outputs. This part of the element definition covers the basic type and size,
e.g. VET_FLOAT3 is 3 floating point numbers - the meaning of the data is
dealt with in the next parameter.
semantic This defines the meaning of the element - the GPU will use this to determine
what to use this input for, and programmable vertex pipelines will use this to
identify which semantic to map the input to. This can identify the element
as positional data, normal data, texture coordinate data, etc. See the API
reference for full details of all the options.
index This parameter is only required when you supply more than one element of
the same semantic in one vertex declaration. For example, if you supply more
than one set of texture coordinates, you would set the first set’s index to 0, and
the second set’s index to 1.
You can repeat the call to addElement for as many elements as you have in your vertex
input structures. There are also useful methods on VertexDeclaration for locating elements
within a declaration - see the API reference for full details.
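Putting these parameters together, here is a sketch of a declaration that pulls
interleaved position and normal data from buffer 0 and a single set of 2D texture
coordinates from buffer 1 (the layout itself is illustrative):

```cpp
// Build the declaration through the manager, as always.
VertexDeclaration* decl =
    HardwareBufferManager::getSingleton().createVertexDeclaration();

size_t offset = 0;
// Buffer 0: position at offset 0, normal immediately after it.
decl->addElement(0, offset, VET_FLOAT3, VES_POSITION);
offset += VertexElement::getTypeSize(VET_FLOAT3);
decl->addElement(0, offset, VET_FLOAT3, VES_NORMAL);

// Buffer 1: first (and only) set of texture coordinates, so index 0.
decl->addElement(1, 0, VET_FLOAT2, VES_TEXTURE_COORDINATES, 0);
```

Because the source is just an index, you can later rebind either buffer (see the next
section) without touching this declaration.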

Important Considerations
Whilst in theory you have completely free rein over the format of your vertices, in reality
there are some restrictions. Older DirectX hardware imposes a fixed ordering on the ele-
ments which are pulled from each buffer; specifically any hardware prior to DirectX 9 may
impose the following restrictions:
• VertexElements should be added in the following order, and the order of the elements
within any shared buffer should be as follows:
1. Positions
2. Blending weights
3. Normals
4. Diffuse colours
5. Specular colours
6. Texture coordinates (starting at 0, listed in order, with no gaps)
Chapter 5: Hardware Buffers 166

• You must not have unused gaps in your buffers which are not referenced by any Ver-
texElement
• You must not cause the buffer & offset settings of 2 VertexElements to overlap
OpenGL and DirectX 9 compatible hardware are not required to follow these strict
limitations, so you might find, for example that if you broke these rules your application
would run under OpenGL and under DirectX on recent cards, but it is not guaranteed to
run on older hardware under DirectX unless you stick to the above rules. For this reason
you’re advised to abide by them!

5.6.3 Vertex Buffer Bindings


Vertex buffer bindings are about associating a vertex buffer with a source index used in
Section 5.6.2 [Vertex Declarations], page 164.

Creating the Vertex Buffer


Firstly, let’s look at how you create a vertex buffer:
HardwareVertexBufferSharedPtr vbuf =
HardwareBufferManager::getSingleton().createVertexBuffer(
3*sizeof(Real), // size of one whole vertex
numVertices, // number of vertices
HardwareBuffer::HBU_STATIC_WRITE_ONLY, // usage
false); // no shadow buffer
Notice that we use Section 5.1 [The Hardware Buffer Manager], page 160 to create our
vertex buffer, and that a class called HardwareVertexBufferSharedPtr is returned from the
method, rather than a raw pointer. This is because vertex buffers are reference counted -
you are able to use a single vertex buffer as a source for multiple pieces of geometry, so
a standard pointer would not be good enough, because you would not know when all the
different users of it had finished with it. The HardwareVertexBufferSharedPtr class manages
its own destruction by keeping a reference count of the number of times it is being used -
when the last HardwareVertexBufferSharedPtr is destroyed, the buffer itself is automatically
destroyed.

The parameters to the creation of a vertex buffer are as follows:


vertexSize The size in bytes of a whole vertex in this buffer. A vertex may include multiple
elements, and in fact the contents of the vertex data may be reinterpreted by
different vertex declarations if you wish. Therefore you must tell the buffer
manager how large a whole vertex is, but not the internal format of the vertex,
since that is down to the declaration to interpret. In the above example, the
size is set to the size of 3 floating point values - this would be enough to hold
a standard 3D position or normal, or a 3D texture coordinate, per vertex.
numVertices
The number of vertices in this buffer. Remember, not all the vertices have to
be used at once - it can be beneficial to create large buffers which are shared
between many chunks of geometry because changing vertex buffer bindings is
a render state switch, and those are best minimised.

usage This tells the system how you intend to use the buffer. See Section 5.2 [Buffer
Usage], page 160
useShadowBuffer
Tells the system whether you want this buffer backed by a system-memory copy.
See Section 5.3 [Shadow Buffers], page 161

Binding the Vertex Buffer


The second part of the process is to bind this buffer which you have created to a source
index. To do this, you call:
vertexBufferBinding->setBinding(0, vbuf);
This results in the vertex buffer you created earlier being bound to source index 0, so
any vertex element which is pulling its data from source index 0 will retrieve data from this
buffer.

There are also methods for retrieving buffers from the binding data - see the API reference
for full details.

5.6.4 Updating Vertex Buffers


The complexity of updating a vertex buffer entirely depends on how its contents are laid
out. You can lock a buffer (See Section 5.4 [Locking buffers], page 162), but how you write
data into it very much depends on what it contains.

Let’s start with a very simple example. Let’s say you have a buffer which only contains vertex
positions, so it only contains sets of 3 floating point numbers per vertex. In this case, all
you need to do to write data into it is:
Real* pReal = static_cast<Real*>(vbuf->lock(HardwareBuffer::HBL_DISCARD));
... then you just write positions in chunks of 3 reals. If you have other floating point
data in there, it’s a little more complex but the principle is largely the same, you just need
to write alternate elements. But what if you have elements of different types, or you need
to derive how to write the vertex data from the elements themselves? Well, there are some
useful methods on the VertexElement class to help you out.

Firstly, you lock the buffer but assign the result to an unsigned char* rather than
a specific type. Then, for each element which is sourcing from this buffer (which
you can find out by calling VertexDeclaration::findElementsBySource) you call
VertexElement::baseVertexPointerToElement. This offsets a pointer which points at the
base of a vertex in a buffer to the beginning of the element in question, and allows you to
use a pointer of the right type to boot. Here’s a full example:
// Get base pointer (use a writable lock, since we intend to write positions)
unsigned char* pVert = static_cast<unsigned char*>(
    vbuf->lock(HardwareBuffer::HBL_NORMAL));
Real* pReal;
for (size_t v = 0; v < vertexCount; ++v)
{
    // Get elements
    VertexDeclaration::VertexElementList elems = decl->findElementsBySource(bufferIdx);
    VertexDeclaration::VertexElementList::iterator i, iend;
    for (i = elems.begin(); i != elems.end(); ++i)
    {
        VertexElement& elem = *i;
        if (elem.getSemantic() == VES_POSITION)
        {
            elem.baseVertexPointerToElement(pVert, &pReal);
            // write position using pReal

            ...

        }
    }
    // Advance to the next whole vertex, whatever elements it contains
    pVert += vbuf->getVertexSize();
}
vbuf->unlock();
See the API docs for full details of all the helper methods on VertexDeclaration and
VertexElement to assist you in manipulating vertex buffer data pointers.

5.7 Hardware Index Buffers


Index buffers are used to render geometry by building triangles out of vertices indirectly, by
reference to their position in the buffer, rather than just building triangles by sequentially
reading vertices. Index buffers are simpler than vertex buffers, since they are just a list of
indexes at the end of the day; however, they can be held on the hardware and shared between
multiple pieces of geometry in the same way vertex buffers can, so the rules on creation and
locking are the same. See Chapter 5 [Hardware Buffers], page 160 for information.

5.7.1 The IndexData class


This class summarises the information required to use a set of indexes to render geometry.
Its members are as follows:
indexStart The first index used by this piece of geometry; this can be useful for sharing a
single index buffer among several geometry pieces.
indexCount
The number of indexes used by this particular renderable.
indexBuffer
The index buffer which is used to source the indexes.

Creating an Index Buffer


Index buffers are created using Section 5.1 [The Hardware Buffer Manager], page 160,
just like vertex buffers; here’s how:
HardwareIndexBufferSharedPtr ibuf = HardwareBufferManager::getSingleton().
    createIndexBuffer(
        HardwareIndexBuffer::IT_16BIT, // type of index
        numIndexes, // number of indexes
        HardwareBuffer::HBU_STATIC_WRITE_ONLY, // usage
        false); // no shadow buffer
Once again, notice that the return type is a class rather than a pointer; this is reference
counted so that the buffer is automatically destroyed when no more references are made to
it. The parameters to the index buffer creation are:
indexType There are 2 types of index; 16-bit and 32-bit. They both perform the same way,
except that the latter can address larger vertex buffers. If your buffer includes
more than 65536 vertices, then you will need to use 32-bit indexes. Note that
you should only use 32-bit indexes when you need to, since they incur more
overhead than 16-bit indexes, and are not supported on some older hardware.
numIndexes
The number of indexes in the buffer. As with vertex buffers, you should consider
whether you can use a shared index buffer which is used by multiple pieces of
geometry, since there can be performance advantages to switching index buffers
less often.
usage This tells the system how you intend to use the buffer. See Section 5.2 [Buffer
Usage], page 160
useShadowBuffer
Tells the system whether you want this buffer backed by a system-memory copy.
See Section 5.3 [Shadow Buffers], page 161

5.7.2 Updating Index Buffers


Updating index buffers can only be done when you lock the buffer for writing; See Section 5.4
[Locking buffers], page 162 for details. Locking returns a void pointer, which must be cast to
the appropriate type; with index buffers this is either an unsigned short (for 16-bit indexes)
or an unsigned long (for 32-bit indexes). For example:
unsigned short* pIdx = static_cast<unsigned short*>(ibuf->lock(HardwareBuffer::HBL_DISCARD));
You can then write to the buffer using the usual pointer semantics, just remember to
unlock the buffer when you’re finished!

5.8 Hardware Pixel Buffers


Hardware Pixel Buffers are a special kind of buffer that stores graphical data in graphics
card memory, generally for use as textures. Pixel buffers can represent a one dimensional,
two dimensional or three dimensional image. A texture can consist of multiple such
buffers.
In contrast to vertex and index buffers, pixel buffers are not constructed directly. When
creating a texture, the necessary pixel buffers to hold its data are constructed automatically.

5.8.1 Textures
A texture is an image that can be applied onto the surface of a three dimensional model.
In Ogre, textures are represented by the Texture resource class.

Creating a texture
Textures are created through the TextureManager. In most cases they are created from
image files directly by the Ogre resource system. If you are reading this, you most probably
want to create a texture manually so that you can provide it with image data yourself. This
is done through TextureManager::createManual:
ptex = TextureManager::getSingleton().createManual(
"MyManualTexture", // Name of texture
"General", // Name of resource group in which the texture should be created
TEX_TYPE_2D, // Texture type
256, // Width
256, // Height
1, // Depth (Must be 1 for two dimensional textures)
0, // Number of mipmaps
PF_A8R8G8B8, // Pixel format
TU_DYNAMIC_WRITE_ONLY // usage
);
This example creates a texture named MyManualTexture in resource group General. It
is a square two dimensional texture, with width 256 and height 256. It has no mipmaps,
internal format PF A8R8G8B8 and usage TU DYNAMIC WRITE ONLY.
The different texture types will be discussed in Section 5.8.3 [Texture Types], page 172.
Pixel formats are summarised in Section 5.8.4 [Pixel Formats], page 173.

Texture usages
In addition to the hardware buffer usages as described in Section 5.2 [Buffer Usage],
page 160 there are some usage flags specific to textures:
TU AUTOMIPMAP
Mipmaps for this texture will be automatically generated by the graphics hard-
ware. The exact algorithm used is not defined, but you can assume it to be a
2x2 box filter.
TU RENDERTARGET
This texture will be a render target, ie. used as a target for render to texture.
Setting this flag will ignore all other texture usages except TU AUTOMIPMAP.
TU DEFAULT
This is actually a combination of usage flags, and is equivalent to
TU AUTOMIPMAP | TU STATIC WRITE ONLY. The resource system
uses these flags for textures that are loaded from images.

Getting a PixelBuffer
A Texture can consist of multiple PixelBuffers, one for each combination of mipmap level and face
number. To get a PixelBuffer from a Texture object the method Texture::getBuffer(face,
mipmap) is used:
face should be zero for non-cubemap textures. For cubemap textures it identifies the
face to use, which is one of the cube faces described in Section 5.8.3 [Texture Types],
page 172.

mipmap is zero for the zeroth mipmap level, one for the first mipmap level, and so on.
On textures that have automatic mipmap generation (TU AUTOMIPMAP) only level 0
should be accessed, the rest will be taken care of by the rendering API.
A simple example of using getBuffer is
// Get the PixelBuffer for face 0, mipmap 0.
HardwarePixelBufferSharedPtr ptr = tex->getBuffer(0,0);

5.8.2 Updating Pixel Buffers


Pixel Buffers can be updated in two different ways; a simple, convenient way and a more
difficult (but in some cases faster) method. Both methods make use of PixelBox objects
(See Section 5.8.5 [Pixel boxes], page 174) to represent image data in memory.

blitFromMemory
The easy method to get an image into a PixelBuffer is by using HardwarePixel-
Buffer::blitFromMemory. This takes a PixelBox object and does all necessary pixel format
conversion and scaling for you. For example, to create a manual texture and load an image
into it, all you have to do is
// Manually loads an image and puts the contents in a manually created texture
Image img;
img.load("elephant.png", "General");
// Create RGB texture with 5 mipmaps
TexturePtr tex = TextureManager::getSingleton().createManual(
"elephant",
"General",
TEX_TYPE_2D,
img.getWidth(), img.getHeight(),
5, PF_X8R8G8B8);
// Copy face 0 mipmap 0 of the image to face 0 mipmap 0 of the texture.
tex->getBuffer(0,0)->blitFromMemory(img.getPixelBox(0,0));

Direct memory locking


A more advanced method to transfer image data from and to a PixelBuffer is to use locking.
By locking a PixelBuffer you can directly access its contents in whatever the internal format
of the buffer inside the GPU is.
/// Lock the buffer so we can write to it
buffer->lock(HardwareBuffer::HBL_DISCARD);
const PixelBox &pb = buffer->getCurrentLock();

/// Update the contents of pb here


/// Image data starts at pb.data and has format pb.format
/// Here we assume pb.format is PF_X8R8G8B8 so we can address pixels as uint32.
uint32 *data = static_cast<uint32*>(pb.data);
size_t height = pb.getHeight();
size_t width = pb.getWidth();
size_t pitch = pb.rowPitch; // Skip between rows of image
for(size_t y=0; y<height; ++y)
{
for(size_t x=0; x<width; ++x)
{
// 0xRRGGBB -> fill the buffer with yellow pixels
data[pitch*y + x] = 0x00FFFF00;
}
}

/// Unlock the buffer again (frees it for use by the GPU)
buffer->unlock();

5.8.3 Texture Types


There are four types of textures supported by current hardware; three of them differ only
in the number of dimensions they have (one, two or three). The fourth one is special. The
different texture types are:
TEX TYPE 1D
One dimensional texture, used in combination with 1D texture coordinates.
TEX TYPE 2D
Two dimensional texture, used in combination with 2D texture coordinates.
TEX TYPE 3D
Three dimensional volume texture, used in combination with 3D texture coor-
dinates.
TEX TYPE CUBE MAP
Cube map (six two dimensional textures, one for each cube face), used in com-
bination with 3D texture coordinates.

Cube map textures


The cube map texture type (TEX TYPE CUBE MAP) is a different beast from the others;
a cube map texture represents a series of six two dimensional images addressed by 3D texture
coordinates.
+X (face 0)
Represents the positive x plane (right).
-X (face 1)
Represents the negative x plane (left).
+Y (face 2)
Represents the positive y plane (top).
-Y (face 3)
Represents the negative y plane (bottom).
+Z (face 4)
Represents the positive z plane (front).
-Z (face 5) Represents the negative z plane (back).

5.8.4 Pixel Formats


A pixel format describes the storage format of pixel data. It defines the way pixels are
encoded in memory. The following classes of pixel formats (PF *) are defined:
Native endian formats (PF A8R8G8B8 and other formats with bit counts)
These are native endian (16, 24 and 32 bit) integers in memory. This means
that an image with format PF A8R8G8B8 can be seen as an array of 32 bit
integers, defined as 0xAARRGGBB in hexadecimal. The meaning of the letters
is described below.
Byte formats (PF BYTE *)
These formats have one byte per channel, and their channels in memory are
organized in the order they are specified in the format name. For example,
PF BYTE RGBA consists of blocks of four bytes, one for red, one for green,
one for blue, one for alpha.
Short formats (PF SHORT *)
These formats have one unsigned short (16 bit integer) per channel, and their
channels in memory are organized in the order they are specified in the for-
mat name. For example, PF SHORT RGBA consists of blocks of four 16 bit
integers, one for red, one for green, one for blue, one for alpha.
Float16 formats (PF FLOAT16 *)
These formats have one 16 bit floating point number per channel, and their
channels in memory are organized in the order they are specified in the format
name. For example, PF FLOAT16 RGBA consists of blocks of four 16 bit
floats, one for red, one for green, one for blue, one for alpha. The 16 bit floats
(also called half floats) are very similar to the IEEE single-precision floating-point
standard of the 32 bit floats, except that they have only 5 exponent bits and
10 mantissa bits. Note that there is no standard C++ data type or CPU support
to work with these efficiently, but GPUs can calculate with these much more
efficiently than with 32 bit floats.
Float32 formats (PF FLOAT32 *)
These formats have one 32 bit floating point number per channel, and their
channels in memory are organized in the order they are specified in the format
name. For example, PF FLOAT32 RGBA consists of blocks of four 32 bit
floats, one for red, one for green, one for blue, one for alpha. The C++ data
type for these 32 bit floats is just "float".
Compressed formats (PF DXT[1-5])
S3TC compressed texture formats; a good description can be found at
Wikipedia (http://en.wikipedia.org/wiki/S3TC)

Colour channels
The meaning of the channels R,G,B,A,L and X is defined as
R Red colour component, usually ranging from 0.0 (no red) to 1.0 (full red).
G Green colour component, usually ranging from 0.0 (no green) to 1.0 (full green).
B Blue colour component, usually ranging from 0.0 (no blue) to 1.0 (full blue).

A Alpha component, usually ranging from 0.0 (entirely transparent) to 1.0 (opaque).
L Luminance component, usually ranging from 0.0 (black) to 1.0 (white). The
luminance component is duplicated in the R, G, and B channels to achieve a
greyscale image.
X This component is completely ignored.
If none of red, green and blue components, or luminance is defined in a format, these
default to 0. For the alpha channel this is different; if no alpha is defined, it defaults to 1.

Complete list of pixel formats


The pixel formats supported by the current version of Ogre are:
Byte formats
PF BYTE RGB, PF BYTE BGR, PF BYTE BGRA, PF BYTE RGBA,
PF BYTE L, PF BYTE LA, PF BYTE A
Short formats
PF SHORT RGBA
Float16 formats
PF FLOAT16 R, PF FLOAT16 RGB, PF FLOAT16 RGBA
Float32 formats
PF FLOAT32 R, PF FLOAT32 RGB, PF FLOAT32 RGBA
8 bit native endian formats
PF L8, PF A8, PF A4L4, PF R3G3B2
16 bit native endian formats
PF L16, PF R5G6B5, PF B5G6R5, PF A4R4G4B4, PF A1R5G5B5
24 bit native endian formats
PF R8G8B8, PF B8G8R8
32 bit native endian formats
PF A8R8G8B8, PF A8B8G8R8, PF B8G8R8A8, PF R8G8B8A8,
PF X8R8G8B8, PF X8B8G8R8, PF A2R10G10B10, PF A2B10G10R10
Compressed formats
PF DXT1, PF DXT2, PF DXT3, PF DXT4, PF DXT5

5.8.5 Pixel boxes


All methods in Ogre that take or return raw image data return a PixelBox object.
A PixelBox is a primitive describing a volume (3D), image (2D) or line (1D) of pixels
in CPU memory. It describes the location and data format of a region of memory used for
image data, but does not do any memory management in itself.
Inside the memory pointed to by the data member of a pixel box, pixels are stored as
a succession of "depth" slices (in Z), each containing "height" rows (Y) of "width" pixels
(X).
Dimensions that are not used must be 1. For example, a one dimensional image will
have extents (width,1,1). A two dimensional image has extents (width,height,1).
A PixelBox has the following members:

data The pointer to the first component of the image data in memory.
format The pixel format (See Section 5.8.4 [Pixel Formats], page 173) of the image
data.
rowPitch The number of elements between the leftmost pixel of one row and the left pixel
of the next. This value must always be equal to getWidth() (consecutive) for
compressed formats.
slicePitch The number of elements between the top left pixel of one (depth) slice and
the top left pixel of the next. Must be a multiple of rowPitch. This value
must always be equal to getWidth()*getHeight() (consecutive) for compressed
formats.
left, top, right, bottom, front, back
Extents of the box in three dimensional integer space. Note that the left, top,
and front edges are included but the right, bottom and back ones are not. left
must always be smaller or equal to right, top must always be smaller or equal
to bottom, and front must always be smaller or equal to back.
It also has some useful methods:
getWidth()
Get the width of this box
getHeight()
Get the height of this box. This is 1 for one dimensional images.
getDepth()
Get the depth of this box. This is 1 for one and two dimensional images.
setConsecutive()
Set the rowPitch and slicePitch so that the buffer is laid out consecutive in
memory.
getRowSkip()
Get the number of elements between one past the rightmost pixel of one row
and the leftmost pixel of the next row. This is zero if rows are consecutive.
getSliceSkip()
Get the number of elements between one past the right bottom pixel of one slice
and the left top pixel of the next slice. This is zero if slices are consecutive.
isConsecutive()
Return whether this buffer is laid out consecutive in memory (ie the pitches are
equal to the dimensions)
getConsecutiveSize()
Return the size (in bytes) this image would take if it was laid out consecutive
in memory
getSubVolume(const Box &def)
Return a subvolume of this PixelBox, as a PixelBox.
For more information about these methods consult the API documentation.
Chapter 6: External Texture Sources 176

6 External Texture Sources

Introduction
This tutorial will provide a brief introduction of ExternalTextureSource and ExternalTex-
tureSourceManager classes, their relationship, and how the PlugIns work. For those inter-
ested in developing a Texture Source Plugin or maybe just wanting to know more about
this system, take a look at the ffmpegVideoSystem plugin, which you can find more about on
the OGRE forums.

What Is An External Texture Source?


What is a texture source? Well, a texture source could be anything - png, bmp, jpeg,
etc. However, loading textures from traditional bitmap files is already handled by another
part of OGRE. There are, however, other types of sources to get texture data from - i.e.
mpeg/avi/etc movie files, flash, run-time generated source, user defined, etc.

How do external texture source plugins benefit OGRE? Well, the main answer is: adding
support for any type of texture source does not require changing OGRE to support it...
all that is involved is writing a new plugin. Additionally, because the manager uses the
StringInterface class to issue commands/params, no change to the material script reader
needs to be made. As a result, if a plugin needs a special parameter set, it just creates
a new command in its Parameter Dictionary - see the ffmpegVideoSystem plugin for an
example. To make this work, two classes have been added to OGRE: ExternalTextureSource
& ExternalTextureSourceManager.

ExternalTextureSource Class
The ExternalTextureSource class is the base class that Texture Source PlugIns must be de-
rived from. It provides a generic framework (via StringInterface class) with a very limited
amount of functionality. The most common of parameters can be set through the Tex-
turePlugInSource class interface or via the StringInterface commands contained within this
class. While this may seem like duplication of code, it is not. By using the string command
interface, it becomes extremely easy for derived plugins to add any new types of parameters
that they may need.

Default Command Parameters defined in ExternalTextureSource base class are:


• Parameter Name: "filename" Argument Type: Ogre::String Sets the filename the plugin
will read from
• Parameter Name: "play mode" Argument Type: Ogre::String Sets initial play mode
to be used by the plugin - "play", "loop", "pause"
• Parameter Name: "set T P S" Argument Type: Ogre::String Used to set the tech-
nique, pass, and texture unit level to apply this texture to. As an example: To set a
technique level of 1, a pass level of 2, and a texture unit level of 3, send this string "1
2 3".

• Parameter Name: "frames per second" Argument Type: Ogre::String Set a Frames
per second update speed. (Integer Values only)

ExternalTextureSourceManager Class
ExternalTextureSourceManager is responsible for keeping track of loaded Texture Source
PlugIns. It also aids in the creation of texture source textures from scripts, and it is the
interface you should use when dealing with texture source plugins.

Note: The function prototypes shown below are mockups - param names are simplified
to better illustrate purpose here... Steps needed to create a new texture via ExternalTex-
tureSourceManager:
• Obviously, the first step is to have the desired plugin included in plugin.cfg for it to be
loaded.
• Set the desired PlugIn as Active via AdvancedTextureManager::getSingleton().SetCurrentPlugIn(
String Type ); – type is whatever the plugin registers as handling (e.g. "video",
"flash", "whatever", etc).
• Note: Consult Desired PlugIn to see what params it needs/expects. Set
params/value pairs via AdvancedTextureManager::getSingleton().getCurrentPlugIn()-
>setParameter( String Param, String Value );
• After required params are set, a simple call to AdvancedTextureManager::getSingleton().getCurrentPlugIn()-
>createDefinedTexture( sMaterialName ); will create a texture to the material name
given.

The manager also provides a method for deleting a texture source material: Advanced-
TextureManager::DestroyAdvancedTexture( String sTextureName ); The destroy method
works by broadcasting the material name to all loaded TextureSourcePlugIns, and the Plu-
gIn who actually created the material is responsible for the deletion, while other PlugIns
will just ignore the request. What this means is that you do not need to worry about
which PlugIn created the material, or activating the PlugIn yourself. Just call the manager
method to remove the material. Also, all texture plugins should handle cleanup when they
are shutdown.

Texture Source Material Script


As mentioned earlier, the process of defining/creating texture sources can be done within
a material script file. Here is an example of a material script definition - Note: This example
is based off the ffmpegVideoSystem plugin parameters.
material Example/MyVideoExample
{
technique
{
pass
{
texture_unit
{
texture_source video
{
filename mymovie.mpeg
play_mode play
sound_mode on
}
}
}
}
}

Notice that the first two param/value pairs are defined in the ExternalTextureSource
base class and that the third parameter/value pair is not defined in the base class... That
parameter is added to the param dictionary by the ffmpegVideoPlugin... This shows that
extending the functionality with the plugins is extremely easy. Also, pay particular attention
to the line: texture_source video. This line identifies that this texture unit will come from a
texture source plugin. It requires one parameter that determines which texture plugin will
be used. In the example shown, the plugin requested is one that registered with the "video"
name.

Simplified Diagram of Process


This diagram uses ffmpegVideoPlugin as an example, but all plugins will work the same in how
they are registered/used here. Also note that TextureSource Plugins are loaded/registered
before scripts are parsed. This does not mean that they are initialized... Plugins are not
initialized until they are set active! This is to ensure that a rendersystem is set up before
the plugins might make a call to the rendersystem.
Chapter 7: Shadows 180

7 Shadows
Shadows are clearly an important part of rendering a believable scene - they provide a more
tangible feel to the objects in the scene, and aid the viewer in understanding the spatial
relationship between objects. Unfortunately, shadows are also one of the most challenging
aspects of 3D rendering, and they are still very much an active area of research. Whilst there
are many techniques to render shadows, none is perfect and they all come with advantages
and disadvantages. For this reason, Ogre provides multiple shadow implementations, with
plenty of configuration settings, so you can choose which technique is most appropriate for
your scene.

Shadow implementations fall into basically 2 broad categories: Section 7.1 [Stencil Shad-
ows], page 181 and Section 7.2 [Texture-based Shadows], page 185. This describes the
method by which the shape of the shadow is generated. In addition, there is more than one
way to render the shadow into the scene: Section 7.3 [Modulative Shadows], page 190, which
darkens the scene in areas of shadow, and Section 7.4 [Additive Light Masking], page 191
which by contrast builds up light contribution in areas which are not in shadow. You also
have the option of [Integrated Texture Shadows], page 189 which gives you complete control
over texture shadow application, allowing for complex single-pass shadowing shaders. Ogre
supports all these combinations.

Enabling shadows
Shadows are disabled by default, here’s how you turn them on and configure them in the
general sense:
1. Enable a shadow technique on the SceneManager as the first thing you do in your
scene setup. It is important that this is done first because the shadow technique can
alter the way meshes are loaded. Here’s an example:
mSceneMgr->setShadowTechnique(SHADOWTYPE_STENCIL_ADDITIVE);
2. Create one or more lights. Note that not all light types are necessarily supported by
all shadow techniques; check the section about each technique for details.
Note that if certain lights should not cast shadows, you can turn that off by calling
setCastShadows(false) on the light, the default is true.
3. Disable shadow casting on objects which should not cast shadows. Call setCastShad-
ows(false) on objects you don’t want to cast shadows, the default for all objects is to
cast shadows.
4. Configure shadow far distance. You can limit the distance at which shadows are con-
sidered for performance reasons, by calling SceneManager::setShadowFarDistance.
5. Turn off the receipt of shadows on materials that should not receive them. You can
turn off the receipt of shadows (note, not the casting of shadows - that is done per-
object) by calling Material::setReceiveShadows or using the receive shadows material
attribute. This is useful for materials which should be considered self-illuminated for
example. Note that transparent materials are typically excluded from receiving and
casting shadows, although see the [transparency casts shadows], page 20 option for
exceptions.
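The five steps above can be sketched as a fragment of scene setup code. This is an illustrative outline rather than a complete program: mSceneMgr, ent and mat are assumed to be your SceneManager, an Entity and a material obtained elsewhere in your application.

```cpp
// 1. Choose a shadow technique before loading any meshes
mSceneMgr->setShadowTechnique(SHADOWTYPE_STENCIL_ADDITIVE);

// 2. Create a light (casts shadows by default)
Ogre::Light* light = mSceneMgr->createLight("MainLight");

// 3. Opt individual objects out of casting
ent->setCastShadows(false);

// 4. Limit how far away shadows are considered
mSceneMgr->setShadowFarDistance(500.0f);

// 5. Stop self-illuminated materials receiving shadows
mat->setReceiveShadows(false);
```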

Opting out of shadows


By default Ogre treats all non-transparent objects as shadow casters and receivers (depend-
ing on the shadow technique they may not be able to be both at once, check the docs for
your chosen technique first). You can disable shadows in various ways:
Turning off shadow casting on the light
Calling Light::setCastShadows(false) will mean this light casts no shadows at
all.
Turn off shadow receipt on a material
Calling Material::setReceiveShadows(false) will prevent any objects using this
material from receiving shadows.
Turn off shadow casting on individual objects
Calling MovableObject::setCastShadows(false) will disable shadow casting for
this object.
Turn off shadows on an entire rendering queue group
Calling RenderQueueGroup::setShadowsEnabled(false) will turn off both
shadow casting and receiving on an entire rendering queue group. This is
useful because Ogre has to do light setup tasks per group in order to preserve
the inter-group ordering. Ogre automatically disables shadows on a number
of groups, such as RENDER_QUEUE_BACKGROUND,
RENDER_QUEUE_OVERLAY, RENDER_QUEUE_SKIES_EARLY and
RENDER_QUEUE_SKIES_LATE. If you choose to use more rendering queues
(and by default, you won’t be using any more than this plus the ’standard’
queue, so ignore this if you don’t know what it means!), be aware that each
one can incur a light setup cost, and you should disable shadows on the
additional ones you use if you can.

7.1 Stencil Shadows


Stencil shadows are a method by which a ’mask’ is created for the screen using a feature
called the stencil buffer. This mask can be used to exclude areas of the screen from sub-
sequent renders, and thus it can be used to either include or exclude areas in shadow.
They are enabled by calling SceneManager::setShadowTechnique with a parameter of ei-
ther SHADOWTYPE_STENCIL_ADDITIVE or SHADOWTYPE_STENCIL_MODULATIVE. Because the
stencil can only mask areas to be either ’enabled’ or ’disabled’, stencil shadows have ’hard’
edges, that is to say clear dividing lines between light and shadow - it is not possible to
soften these edges.

In order to generate the stencil, ’shadow volumes’ are rendered by extruding the silhou-
ette of the shadow caster away from the light. Where these shadow volumes intersect other
objects (or the caster, since self-shadowing is supported using this technique), the stencil
is updated, allowing subsequent operations to differentiate between light and shadow. How
exactly this is used to render the shadows depends on whether Section 7.3 [Modulative
Shadows], page 190 or Section 7.4 [Additive Light Masking], page 191 is being used. Ob-
jects can both cast and receive stencil shadows, so self-shadowing is inbuilt.

The advantage of stencil shadows is that they can do self-shadowing simply on low-
end hardware, provided you keep your poly count under control. In contrast doing self-
shadowing with texture shadows requires a fairly modern machine (See Section 7.2 [Texture-
based Shadows], page 185). For this reason, you’re likely to pick stencil shadows if you need
an accurate shadowing solution for an application aimed at older or lower-spec machines.

The disadvantages of stencil shadows are numerous though, especially on more modern
hardware. Because stencil shadows are a geometric technique, they are inherently more
costly the higher the number of polygons you use, meaning you are penalized the more
detailed you make your meshes. The fillrate cost, which comes from having to render
shadow volumes, also escalates the same way. Since more modern applications are likely to
use higher polygon counts, stencil shadows can start to become a bottleneck. In addition,
the visual aspects of stencil shadows are pretty primitive - your shadows will always be
hard-edged, and you have no possibility of doing clever things with shaders since the stencil
is not available for manipulation there. Therefore, if your application is aimed at higher-
end machines you should definitely consider switching to texture shadows (See Section 7.2
[Texture-based Shadows], page 185).
There are a number of issues to consider which are specific to stencil shadows:
• [CPU Overhead], page 182
• [Extrusion distance], page 183
• [Camera far plane positioning], page 183
• [Mesh edge lists], page 183
• [The Silhouette Edge], page 183
• [Be realistic], page 184
• [Stencil Optimisations Performed By Ogre], page 184

CPU Overhead
Calculating the shadow volume for a mesh can be expensive, and it has to be done on the
CPU - it is not a hardware-accelerated feature. Therefore, you can find that if you overuse
this feature, you can create a CPU bottleneck for your application. Ogre quite aggressively
eliminates objects which cannot be casting shadows on the frustum, but there are limits
to how much it can do, and large, elongated shadows (e.g. representing a very low sun
position) are very difficult to cull efficiently. Try to avoid having too many shadow casters
around at once, and avoid long shadows if you can. Also, make use of the ’shadow far
distance’ parameter on the SceneManager, this can eliminate distant shadow casters from
the shadow volume construction and save you some time, at the expense of only having
shadows for closer objects. Lastly, make use of Ogre’s Level-Of-Detail (LOD) features; you
can generate automatically calculated LODs for your meshes in code (see the Mesh API
docs) or when using the mesh tools such as OgreXmlConverter and OgreMeshUpgrader.
Alternatively, you can assign your own manual LODs by providing alternative mesh files at
lower detail levels. Both methods will cause the shadow volume complexity to decrease as
the object gets further away, which saves you valuable volume calculation time.

Extrusion distance
When vertex programs are not available, Ogre can only extrude shadow volumes a finite
distance from the object. If an object gets too close to a light, any finite extrusion dis-
tance will be inadequate to guarantee all objects will be shadowed properly by this object.
Therefore, you are advised not to let shadow casters pass too close to light sources if you
can avoid it, unless you can guarantee that your target audience will have vertex program
capable hardware (in this case, Ogre extrudes the volume to infinity using a vertex program
so the problem does not occur).

When infinite extrusion is not possible, Ogre uses finite extrusion, either derived from the
attenuation range of a light (in the case of a point light or spotlight), or a fixed extrusion
distance set in the application in the case of directional lights. To change the directional
light extrusion distance, use SceneManager::setShadowDirectionalLightExtrusionDistance.

Camera far plane positioning


Stencil shadow volumes rely very much on not being clipped by the far plane. When you
enable stencil shadows, Ogre internally changes the far plane settings of your cameras such
that there is no far plane - i.e. it is placed at infinity (Camera::setFarClipDistance(0)). This
avoids artifacts caused by clipping the dark caps on shadow volumes, at the expense of a
(very) small amount of depth precision.

Mesh edge lists


Stencil shadows can only be calculated when an ’edge list’ has been built for all the geometry
in a mesh. The official exporters and tools automatically build this for you (or have an option
to do so), but if you create your own meshes, you must remember to build edge lists for
them before using them with stencil shadows - you can do that by using OgreMeshUpgrader
or OgreXmlConverter, or by calling Mesh::buildEdgeList before you export or use the mesh.
If a mesh doesn’t have edge lists, OGRE assumes that it is not supposed to cast stencil
shadows.

The Silhouette Edge


Stencil shadowing is about finding a silhouette of the mesh, and projecting it away to form a
volume. What this means is that there is a definite boundary on the shadow caster between
light and shadow; a set of edges where the triangle on one side faces toward the
light and the one on the other side faces away. This produces a sharp edge around the mesh as the transition
occurs. Provided there is little or no other light in the scene, and the mesh has smooth
normals to produce a gradual light change in its underlying shading, the silhouette edge
can be hidden - this works better the higher the tessellation of the mesh. However, if the
scene includes ambient light, then the difference is far more marked. This is especially true
when using Section 7.3 [Modulative Shadows], page 190, because the light contribution of
each shadowed area is not taken into account by this simplified approach, and so using 2
or more lights in a scene using modulative stencil shadows is not advisable; the silhouette
edges will be very marked. Additive lights do not suffer from this as badly because each
light is masked individually, meaning that it is only ambient light which can show up the
silhouette edges.

Be realistic
Don’t expect to be able to throw any scene using any hardware at the stencil shadow
algorithm and expect to get perfect, optimum speed results. Shadows are a complex and
expensive technique, so you should impose some reasonable limitations on your placing of
lights and objects; they’re not really that restricting, but you should be aware that this is
not a complete free-for-all.
• Try to avoid letting objects pass very close (or even through) lights - it might look nice
but it’s one of the cases where artifacts can occur on machines not capable of running
vertex programs.
• Be aware that shadow volumes do not respect the ’solidity’ of the objects they pass
through, and if those objects do not themselves cast shadows (which would hide the
effect) then the result will be that you can see shadows on the other side of what should
be an occluding object.
• Make use of SceneManager::setShadowFarDistance to limit the number of shadow vol-
umes constructed
• Make use of LOD to reduce shadow volume complexity at distance
• Avoid very long (dusk and dawn) shadows - they exacerbate other issues such as volume
clipping, fillrate, and cause many more objects at a greater distance to require volume
construction.

Stencil Optimisations Performed By Ogre


Despite all that, stencil shadows can look very nice (especially with Section 7.4 [Additive
Light Masking], page 191) and can be fast if you respect the rules above. In addition, Ogre
comes pre-packed with a lot of optimisations which help to make this as quick as possible.
This section is more for developers or people interested in knowing something about the
’under the hood’ behaviour of Ogre.

Vertex program extrusion


As previously mentioned, Ogre performs the extrusion of shadow volumes in
hardware on vertex program-capable hardware (e.g. GeForce3, Radeon 8500 or
better). This has 2 major benefits; the obvious one being speed, but secondly
that vertex programs can extrude points to infinity, which the fixed-function
pipeline cannot, at least not without performing all calculations in software.
This leads to more robust volumes, and also eliminates more than half the
volume triangles on directional lights since all points are projected to a single
point at infinity.

Scissor test optimisation


Ogre uses a scissor rectangle to limit the effect of point / spot lights when
their range does not cover the entire viewport; that means we save fillrate when
rendering stencil volumes, especially with distant lights.
Z-Pass and Z-Fail algorithms
The Z-Fail algorithm, often attributed to John Carmack, is used in Ogre to
make sure shadows are robust when the camera passes through the shadow
volume. However, the Z-Fail algorithm is more expensive than the traditional
Z-Pass, so Ogre detects when Z-Fail is required and only uses it then; Z-Pass is
used at all other times.
2-Sided stencilling and stencil wrapping
Ogre supports the 2-Sided stencilling / stencil wrapping extensions, which when
supported allow volumes to be rendered in a single pass instead of having to
do one pass for back facing tris and another for front-facing tris. This doesn’t
save fillrate, since the same number of stencil updates are done, but it does save
primitive setup and the overhead incurred in the driver every time a render call
is made.
Aggressive shadow volume culling
Ogre is pretty good at detecting which lights could be affecting the frustum, and
from that, which objects could be casting a shadow on the frustum. This means
we don’t waste time constructing shadow geometry we don’t need. Setting the
shadow far distance is another important way you can reduce stencil shadow
overhead since it culls far away shadow volumes even if they are visible, which
is beneficial in practice since you’re most interested in shadows for close-up
objects.

7.2 Texture-based Shadows


Texture shadows involve rendering shadow casters from the point of view of the light into
a texture, which is then projected onto shadow receivers. The main advantage of texture
shadows as opposed to Section 7.1 [Stencil Shadows], page 181 is that the overhead of
increasing the geometric detail is far lower, since there is no need to perform per-triangle
calculations. Most of the work in rendering texture shadows is done by the graphics card,
meaning the technique scales well when taking advantage of the latest cards, which are
at present outpacing CPUs in terms of their speed of development. In addition, texture
shadows are much more customisable - you can pull them into shaders to apply as you
like (particularly with [Integrated Texture Shadows], page 189); for example, you can
perform filtering to create softer shadows or other special effects on them. Basically, most modern
engines use texture shadows as their primary shadow technique simply because they are
more powerful, and the increasing speed of GPUs is rapidly amortizing the fillrate / texture
access costs of using them.

The main disadvantage to texture shadows is that, because they are simply a texture,
they have a fixed resolution which means if stretched, the pixellation of the texture can
become obvious. There are ways to combat this though:

Choosing a projection basis


The simplest projection is just to render the shadow casters from the light’s
perspective using a regular camera setup. This can look bad though, so there
are many other projections which can help to improve the quality from the
main camera’s perspective. OGRE supports pluggable projection bases via its
ShadowCameraSetup class, and comes with several existing options - Uniform
(which is the simplest), Uniform Focussed (which is still a normal camera pro-
jection, except that the camera is focussed into the area that the main viewing
camera is looking at), LiSPSM (Light Space Perspective Shadow Mapping -
which both focusses and distorts the shadow frustum based on the main view
camera) and Plane Optimal (which seeks to optimise the shadow fidelity for a
single receiver plane).
Filtering You can also sample the shadow texture multiple times rather than once to
soften the shadow edges and improve the appearance. Percentage Closer
Filtering (PCF) is the most popular approach, although there are multiple variants
depending on the number and pattern of the samples you take. Our shadows
demo includes a 5-tap PCF example combined with depth shadow mapping.
Using a larger texture
Again as GPUs get faster and gain more memory, you can scale up to take
advantage of this.
If you combine all 3 of these techniques you can get a very high quality shadow solution.
The other issue is with point lights. Because texture shadows require a render to texture
in the direction of the light, omnidirectional lights (point lights) would require 6 renders
to totally cover all the directions shadows might be cast. For this reason, Ogre primarily
supports directional lights and spotlights for generating texture shadows; you can use point
lights but they will only work if off-camera since they are essentially turned into a spotlight
shining into your camera frustum for the purposes of texture shadows.

Directional Lights
Directional lights in theory shadow the entire scene from an infinitely distant light. Now,
since we only have a finite texture which will look very poor quality if stretched over the
entire scene, clearly a simplification is required. Ogre places a shadow texture over the area
immediately in front of the camera, and moves it as the camera moves (although it rounds
this movement to multiples of texels so that the slight ’swimming shadow’ effect caused
by moving the texture is minimised). The range to which this shadow extends, and the
offset used to move it in front of the camera, are configurable (See [Configuring Texture
Shadows], page 187). At the far edge of the shadow, Ogre fades out the shadow based on
other configurable parameters so that the termination of the shadow is softened.

Spotlights
Spotlights are much easier to represent as renderable shadow textures than directional
lights, since they are naturally a frustum. Ogre represents spotlight directly by rendering
the shadow from the light position, in the direction of the light cone; the field-of-view of
the texture camera is adjusted based on the spotlight falloff angles. In addition, to hide the
fact that the shadow texture is square and has definite edges which could show up outside
the spotlight, Ogre uses a second texture unit when projecting the shadow onto the scene
which fades out the shadow gradually in a projected circle around the spotlight.

Point Lights
As mentioned above, to support point lights properly would require multiple renders (either
6 for a cubic render or perhaps 2 for a less precise parabolic mapping), so rather than do
that we approximate point lights as spotlights, where the configuration is changed on the
fly to make the light shine from its position over the whole of the viewing frustum. This is
not an ideal setup since it means it can only really work if the point light’s position is out
of view, and in addition the changing parameterisation can cause some ’swimming’ of the
texture. Generally we recommend avoiding making point lights cast texture shadows.

Shadow Casters and Shadow Receivers


To enable texture shadows, use the shadow technique SHADOWTYPE_TEXTURE_MODULATIVE
or SHADOWTYPE_TEXTURE_ADDITIVE; as the name suggests this produces
Section 7.3 [Modulative Shadows], page 190 or Section 7.4 [Additive Light Masking],
page 191 respectively. The cheapest and simplest texture shadow techniques do not
use depth information, they merely render casters to a texture and render this onto
receivers as plain colour - this means self-shadowing is not possible using these methods.
This is the default behaviour if you use the automatic, fixed-function compatible (and
thus usable on lower end hardware) texture shadow techniques. You can however use
shaders-based techniques through custom shadow materials for casters and receivers to
perform more complex shadow algorithms, such as depth shadow mapping which does
allow self-shadowing. OGRE comes with an example of this in its shadows demo, although
it’s only usable on Shader Model 2 cards or better. Whilst fixed-function depth shadow
mapping is available in OpenGL, it was never standardised in Direct3D so using shaders
in custom caster & receiver materials is the only portable way to do it. If you use this
approach, call SceneManager::setShadowTextureSelfShadow with a parameter of ’true’ to
allow texture shadow casters to also be receivers.

If you’re not using depth shadow mapping, OGRE divides shadow casters and receivers into
2 disjoint groups. Simply by turning off shadow casting on an object, you automatically
make it a shadow receiver (although this can be disabled by setting the ’receive shadows’
option to ’false’ in a material script). Similarly, if an object is set as a shadow caster, it
cannot receive shadows.

Configuring Texture Shadows


There are a number of settings which will help you configure your texture-based shadows
so that they match your requirements.

• [Maximum number of shadow textures], page 188
• [Shadow texture size], page 188
• [Shadow far distance], page 188
• [Shadow texture offset (Directional Lights)], page 189
• [Shadow fade settings], page 189
• [Custom shadow camera setups], page 189
• [Integrated Texture Shadows], page 189

Maximum number of shadow textures


Shadow textures take up texture memory, and to avoid stalling the rendering pipeline Ogre
does not reuse the same shadow texture for multiple lights within the same frame. This
means that each light which is to cast shadows must have its own shadow texture. In
practice, if you have a lot of lights in your scene you would not wish to incur that sort of
texture overhead.

You can adjust this manually by simply turning off shadow casting for lights you do not
wish to cast shadows. In addition, you can set a maximum limit on the number of shadow
textures Ogre is allowed to use by calling SceneManager::setShadowTextureCount. Each
frame, Ogre determines the lights which could be affecting the frustum, and then allocates
the number of shadow textures it is allowed to use to the lights on a first-come-first-served
basis. Any additional lights will not cast shadows that frame.

Note that you can set the number of shadow textures and their size at the same time by
using the SceneManager::setShadowTextureSettings method; this is useful because each of
the individual calls can require the creation / destruction of texture resources.

Shadow texture size


The size of the textures used for rendering the shadow casters into can be altered;
clearly using larger textures will give you better quality shadows, but at the expense
of greater memory usage. Changing the texture size is done by calling SceneMan-
ager::setShadowTextureSize - textures are assumed to be square and you must specify a
texture size that is a power of 2. Be aware that each modulative shadow texture will take
size*size*3 bytes of texture memory.

Important: if you use the GL render system, your shadow texture size can only be larger
(in either dimension) than the size of your primary window surface if the hardware
supports the Frame Buffer Object (FBO) or Pixel Buffer Object (PBO) extensions. Most
modern cards support this now, but be careful of older cards - you can check the ability of
the hardware to manage this through ogreRoot->getRenderSystem()->getCapabilities()-
>hasCapability(RSC_HWRENDER_TO_TEXTURE). If this returns false and you create a
shadow texture larger in any dimension than the primary surface, the rest of the shadow
texture will be blank.

Shadow far distance


This determines the distance at which shadows are terminated; it also determines how far
into the distance the texture shadows for directional lights are stretched - by reducing this
value, or increasing the texture size, you can improve the quality of shadows from directional
lights at the expense of closer shadow termination or increased memory usage, respectively.

Shadow texture offset (Directional Lights)


As mentioned above in the directional lights section, the rendering of shadows for direc-
tional lights is an approximation that allows us to use a single render to cover a largish
area with shadows. This offset parameter affects how far from the camera position the
center of the shadow texture is offset, as a proportion of the shadow far distance. The
greater this value, the more of the shadow texture is ’useful’ to you since it’s ahead of the
camera, but also the further you offset it, the more chance there is of accidentally seeing
the edge of the shadow texture at more extreme angles. You change this value by calling
SceneManager::setShadowDirLightTextureOffset; the default is 0.6.

Shadow fade settings


Shadows fade out before the shadow far distance so that the termination of shadow is not
abrupt. You can configure the start and end points of this fade by calling the SceneMan-
ager::setShadowTextureFadeStart and SceneManager::setShadowTextureFadeEnd methods,
both take distances as a proportion of the shadow far distance. Because of the inaccuracies
caused by using a square texture and a radial fade distance, you cannot use 1.0 as the fade
end, if you do you’ll see artifacts at the extreme edges. The default values are 0.7 and 0.9,
which serve most purposes but you can change them if you like.

Texture shadows and vertex / fragment programs


When rendering shadow casters into a modulative shadow texture, Ogre turns off all tex-
tures, and all lighting contributions except for ambient light, which it sets to the colour of
the shadow ([Shadow Colour], page 191). For additive shadows, it renders the casters into a
black & white texture instead. This is enough to render shadow casters for fixed-function
material techniques, however where a vertex program is used Ogre doesn’t have so much
control. If you use a vertex program in the first pass of your technique, then you must also
tell Ogre which vertex program you want it to use when rendering the shadow caster; see
[Shadows and Vertex Programs], page 93 for full details.

Custom shadow camera setups


As previously mentioned, one of the downsides of texture shadows is that the texture resolu-
tion is finite, and it’s possible to get aliasing when the size of the shadow texel is larger than
a screen pixel, due to the projection of the texture. In order to address this, you can spec-
ify alternative projection bases by using or creating subclasses of the ShadowCameraSetup
class. The default version is called DefaultShadowCameraSetup and this sets up a simple
regular frustum for point and spotlights, and an orthographic frustum for directional lights.
There is also a PlaneOptimalShadowCameraSetup class which specialises the projection to
a plane, thus giving you much better definition provided your shadow receivers exist mostly
in a single plane. Other setup classes (e.g. you might create a perspective or trapezoid
shadow mapping version) can be created and plugged in at runtime, either on individual
lights or on the SceneManager as a whole.

Integrated Texture Shadows


Texture shadows have one major advantage over stencil shadows - the data used
to represent them can be referenced in regular shaders. Whilst the default texture
shadow modes (SHADOWTYPE_TEXTURE_MODULATIVE and
SHADOWTYPE_TEXTURE_ADDITIVE) automatically render shadows for you, their disadvantage
is that because they are generalised add-ons to your own materials, they tend to take more
passes of the scene to use. In addition, you don’t have a lot of control over the composition
of the shadows.

Here is where ’integrated’ texture shadows step in. Both of the texture shadow types
above have alternative versions called SHADOWTYPE_TEXTURE_MODULATIVE_INTEGRATED
and SHADOWTYPE_TEXTURE_ADDITIVE_INTEGRATED, where instead of rendering
the shadows for you, it just creates the texture shadow and then expects you to use that
shadow texture as you see fit when rendering receiver objects in the scene. The downside
is that you have to take into account shadow receipt in every one of your materials if you
use this option - the upside is that you have total control over how the shadow textures
are used. The big advantage here is that you can perform more complex shading,
taking into account shadowing, than is possible using the generalised bolt-on approaches,
AND you can probably write them in a smaller number of passes, since you know precisely
what you need and can combine passes where possible. When you use one of these
shadowing approaches, the only difference between additive and modulative is the colour
of the casters in the shadow texture (the shadow colour for modulative, black for additive)
- the actual calculation of how the texture affects the receivers is of course up to you.
No separate modulative pass will be performed, and no splitting of your materials into
ambient / per-light / decal etc will occur - absolutely everything is determined by your
original material (which may have modulative passes or per-light iteration if you want of
course, but it’s not required).

You reference a shadow texture in a material which implements this approach by using
the ’content_type shadow’ directive (see [content type], page 51) in your texture_unit. It
implicitly references a shadow texture based on the number of times you’ve used this
directive in the same pass, and the light start option or light-based pass iteration, which
might start the light index higher than 0.
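A minimal sketch of a receiver material using this approach might look like the following. The shader program names are hypothetical placeholders; you would substitute your own programs that sample the shadow texture and fold it into the lighting.

```
material IntegratedShadowReceiver
{
    technique
    {
        pass
        {
            // Hypothetical program names - supply your own shaders.
            vertex_program_ref myShadowReceiverVP
            {
            }
            fragment_program_ref myShadowReceiverFP
            {
            }

            texture_unit
            {
                // Implicitly bound to the shadow texture of the first
                // shadow-casting light affecting this pass.
                content_type shadow
            }
            texture_unit
            {
                texture diffuse.png
            }
        }
    }
}
```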

7.3 Modulative Shadows


Modulative shadows work by darkening an already rendered scene with a fixed colour. First,
the scene is rendered normally containing all the objects which will be shadowed, then a
modulative pass is done per light, which darkens areas in shadow. Finally, objects which
do not receive shadows are rendered.

There are 2 modulative shadow techniques: stencil-based (See Section 7.1
[Stencil Shadows], page 181 : SHADOWTYPE_STENCIL_MODULATIVE) and
texture-based (See Section 7.2 [Texture-based Shadows], page 185 :
SHADOWTYPE_TEXTURE_MODULATIVE). Modulative shadows are an inaccurate lighting
model, since they darken the areas of shadow uniformly, irrespective of the amount of
light which would have fallen on the shadow area anyway. However, they can give fairly
attractive results for a much lower overhead than more ’correct’ methods like Section 7.4
[Additive Light Masking], page 191, and they also combine well with pre-baked static
lighting (such as pre-calculated lightmaps), which additive lighting does not. The main
thing to consider is that using multiple light sources can result in overly dark shadows
where shadows overlap (which intuitively looks right, in fact, but is not physically
correct) and artifacts when using stencil shadows (See [The Silhouette Edge], page 183).

Shadow Colour
The colour which is used to darken the areas in shadow is set by SceneMan-
ager::setShadowColour; it defaults to a dark grey (so that the underlying colour still shows
through a bit).

Note that if you’re using texture shadows you have the additional option of using
[Integrated Texture Shadows], page 189 rather than being forced to have a separate pass of
the scene to render shadows. In this case the ’modulative’ aspect of the shadow technique
just affects the colour of the shadow texture.

7.4 Additive Light Masking


Additive light masking is about rendering the scene many times, each time representing
a single light contribution whose influence is masked out in areas of shadow. Each pass
is combined with (added to) the previous one such that when all the passes are complete,
all the light contribution has correctly accumulated in the scene, and each light has been
prevented from affecting areas which it should not be able to because of shadow casters.
This is an effective technique which results in very realistic looking lighting, but it comes
at a price: more rendering passes.

As many technical papers (and game marketing) will tell you, rendering realistic lighting
like this requires multiple passes. Being a friendly sort of engine, Ogre frees you from most
of the hard work though, and will let you use the exact same material definitions whether
you use this lighting technique or not (for the most part, see [Pass Classification and Vertex
Programs], page 193). In order to do this technique, Ogre automatically categorises the
Section 3.1.2 [Passes], page 24 you define in your materials into 3 types:
1. ambient Passes categorised as ’ambient’ include any base pass which is not lit by any
particular light, i.e. it occurs even if there is no ambient light in the scene. The
ambient pass always happens first, and sets up the initial depth value of the fragments,
and the ambient colour if applicable. It also includes any emissive / self illumination
contribution. Only textures which affect ambient light (e.g. ambient occlusion maps)
should be rendered in this pass.
2. diffuse/specular Passes categorised as ’diffuse/specular’ (or ’per-light’) are rendered
once per light, and each pass contributes the diffuse and specular colour from that
single light as reflected by the diffuse / specular terms in the pass. Areas in shadow
from that light are masked and are thus not updated. The resulting masked colour
is added to the existing colour in the scene. Again, no textures are used in this pass
(except for textures used for lighting calculations such as normal maps).
3. decal Passes categorised as ’decal’ add the final texture colour to the scene, which is
modulated by the accumulated light built up from all the ambient and diffuse/specular
passes.
In practice, Section 3.1.2 [Passes], page 24 rarely fall nicely into just one of these cate-
gories. For each Technique, Ogre compiles a list of ’Illumination Passes’, which are derived
from the user defined passes, but can be split, to ensure that the divisions between illumi-
nation pass categories can be maintained. For example, if we take a very simple material
definition:
material TestIllumination
{
    technique
    {
        pass
        {
            ambient 0.5 0.2 0.2
            diffuse 1 0 0
            specular 1 0.8 0.8 15
            texture_unit
            {
                texture grass.png
            }
        }
    }
}
Ogre will split this into 3 illumination passes, which will be the equivalent of this:
material TestIlluminationSplitIllumination
{
    technique
    {
        // Ambient pass
        pass
        {
            ambient 0.5 0.2 0.2
            diffuse 0 0 0
            specular 0 0 0
        }

        // Diffuse / specular pass
        pass
        {
            scene_blend add
            iteration once_per_light
            diffuse 1 0 0
            specular 1 0.8 0.8 15
        }

        // Decal pass
        pass
        {
            scene_blend modulate
            lighting off
            texture_unit
            {
                texture grass.png
            }
        }
    }
}
So as you can see, even a simple material requires a minimum of 3 passes when using
this shadow technique, and in general it requires (num lights + 2) passes. You can use
more passes in your original material and Ogre will cope with that too, but be aware
that each pass may turn into multiple ones if it uses more than one type of light
contribution (ambient vs diffuse/specular) and / or has texture units. The main benefit
is that you get the full multipass lighting behaviour even if you don’t define your materials
in terms of it, meaning that your material definitions can remain the same no matter what
lighting approach you decide to use.

Manually Categorising Illumination Passes


Alternatively, if you want more direct control over the categorisation of your passes, you
can use the [illumination stage], page 35 option in your pass to explicitly assign a pass
unchanged to an illumination stage. This way you can make sure you know precisely how
your material will be rendered under additive lighting conditions.
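For example, a pass intended purely as the decal stage could be pinned to that stage like this (the surrounding material and the pass contents here are illustrative, not taken from the manual):

```
material ManualIllumination
{
    technique
    {
        // ... ambient and per-light passes here ...

        pass
        {
            // never split or re-categorised: always treated as the decal stage
            illumination_stage decal

            scene_blend modulate
            lighting off
            texture_unit
            {
                texture grass.png
            }
        }
    }
}
```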

Pass Classification and Vertex Programs


Ogre is pretty good at classifying and splitting your passes to ensure that the multipass
rendering approach required by additive lighting works correctly without you having to
change your material definitions. However, there is one exception; when you use vertex
programs, the normal lighting attributes ambient, diffuse, specular etc are not used, because
all of that is determined by the vertex program. Ogre has no way of knowing what you’re
doing inside that vertex program, so you have to tell it.
In practice this is very easy. Even though your vertex program could be doing a lot
of complex, highly customised processing, it can still be classified into one of the 3 types
listed above. All you need to do to tell Ogre what you’re doing is to use the pass attributes
ambient, diffuse, specular and self illumination, just as if you were not using a vertex
program. Sure, these attributes do nothing (as far as rendering is concerned) when you’re
using vertex programs, but it’s the easiest way to indicate to Ogre which light components
you’re using in your vertex program. Ogre will then classify and potentially split your
programmable pass based on this information - it will leave the vertex program as-is (so
that any split passes will respect any vertex modification that is being done).

Note that when classifying a diffuse/specular programmable pass, Ogre checks to see
whether you have indicated the pass can be run once per light (iteration once per light).
If so, the pass is left intact, including its vertex and fragment programs. However, if this
attribute is not included in the pass, Ogre tries to split off the per-light part, and in doing
so it will disable the fragment program, since in the absence of the ’iteration once per light’
attribute it can only assume that the fragment program is performing decal work and hence
must not be used per light.

So clearly, when you use additive light masking as a shadow technique, you need to make
sure that programmable passes you use are properly set up so that they can be classified
correctly. However, also note that the changes you have to make to ensure the classification
is correct does not affect the way the material renders when you choose not to use additive
lighting, so the principle that you should be able to use the same material definitions for
all lighting scenarios still holds. Here is an example of a programmable material which will
be classified correctly by the illumination pass classifier:
// Per-pixel normal mapping: any number of lights, diffuse and specular
material Examples/BumpMapping/MultiLightSpecular
{
    technique
    {
        // Base ambient pass
        pass
        {
            // ambient only, not needed for rendering, but as information
            // to lighting pass categorisation routine
            ambient 1 1 1
            diffuse 0 0 0
            specular 0 0 0 0
            // Really basic vertex program
            vertex_program_ref Ogre/BasicVertexPrograms/AmbientOneTexture
            {
                param_named_auto worldViewProj worldviewproj_matrix
                param_named_auto ambient ambient_light_colour
            }
        }
        // Now do the lighting pass
        // NB we don’t do decal texture here because this is repeated per light
        pass
        {
            // set ambient off, not needed for rendering, but as information
            // to lighting pass categorisation routine
            ambient 0 0 0
            // do this for each light
            iteration once_per_light

            scene_blend add

            // Vertex program reference
            vertex_program_ref Examples/BumpMapVPSpecular
            {
                param_named_auto lightPosition light_position_object_space 0
                param_named_auto eyePosition camera_position_object_space
                param_named_auto worldViewProj worldviewproj_matrix
            }

            // Fragment program
            fragment_program_ref Examples/BumpMapFPSpecular
            {
                param_named_auto lightDiffuse light_diffuse_colour 0
                param_named_auto lightSpecular light_specular_colour 0
            }

            // Base bump map
            texture_unit
            {
                texture NMBumpsOut.png
                colour_op replace
            }
            // Normalisation cube map
            texture_unit
            {
                cubic_texture nm.png combinedUVW
                tex_coord_set 1
                tex_address_mode clamp
            }
            // Normalisation cube map #2
            texture_unit
            {
                cubic_texture nm.png combinedUVW
                tex_coord_set 1
                tex_address_mode clamp
            }
        }

        // Decal pass
        pass
        {
            lighting off
            // Really basic vertex program
            vertex_program_ref Ogre/BasicVertexPrograms/AmbientOneTexture
            {
                param_named_auto worldViewProj worldviewproj_matrix
                param_named ambient float4 1 1 1 1
            }
            scene_blend dest_colour zero
            texture_unit
            {
                texture RustedMetal.jpg
            }
        }
    }
}
Note that if you’re using texture shadows you have the additional option of using
[Integrated Texture Shadows], page 189 rather than being forced to use this explicit
sequence, allowing you to compress the passes into a much smaller number at the
expense of defining an upper limit on the number of shadow casting lights. In this case
the ’additive’ aspect of the shadow technique just affects the colour of the shadow texture,
and it’s up to you to combine the shadow textures in your receivers however you like.

Static Lighting
Despite their power, additive lighting techniques have an additional limitation; they do not
combine well with pre-calculated static lighting in the scene. This is because they are based
on the principle that shadow is an absence of light, but since static lighting in the scene
already includes areas of light and shadow, additive lighting cannot remove light to create
new shadows. Therefore, if you use the additive lighting technique you must either use
it exclusively as your lighting solution (and you can combine it with per-pixel lighting to
create a very impressive dynamic lighting solution), or you must use [Integrated Texture
Shadows], page 189 to combine the static lighting according to your chosen approach.
8 Animation
OGRE supports a pretty flexible animation system that allows you to script animation for
several different purposes:
Section 8.1 [Skeletal Animation], page 197
Mesh animation using a skeletal structure to determine how the mesh deforms.

Section 8.3 [Vertex Animation], page 198


Mesh animation using snapshots of vertex data to determine how the shape of
the mesh changes.

Section 8.4 [SceneNode Animation], page 203


Animating SceneNodes automatically to create effects like camera sweeps, ob-
jects following predefined paths, etc.

Section 8.5 [Numeric Value Animation], page 203


Using OGRE’s extensible class structure to animate any value.

8.1 Skeletal Animation


Skeletal animation is a process of animating a mesh by moving a set of hierarchical bones
within the mesh, which in turn moves the vertices of the model according to the bone
assignments stored in each vertex. An alternative term for this approach is ’skinning’.
The usual way of creating these animations is with a modelling tool such as Softimage
XSI, Milkshape 3D, Blender, 3D Studio or Maya, among others. OGRE provides exporters
to allow you to get the data out of these modellers and into the engine; see Section 4.1
[Exporters], page 158.

There are many grades of skeletal animation, and not all engines (or modellers for that
matter) support all of them. OGRE supports the following features:
• Each mesh can be linked to a single skeleton
• Unlimited bones per skeleton
• Hierarchical forward-kinematics on bones
• Multiple named animations per skeleton (e.g. ’Walk’, ’Run’, ’Jump’, ’Shoot’ etc)
• Unlimited keyframes per animation
• Linear or spline-based interpolation between keyframes
• A vertex can be assigned to multiple bones and assigned weightings for smoother skin-
ning
• Multiple animations can be applied to a mesh at the same time, again with a blend
weighting

Skeletons and the animations which go with them are held in .skeleton files, which are
produced by the OGRE exporters. These files are loaded automatically when you create an
Entity based on a Mesh which is linked to the skeleton in question. You then use Section 8.2
[Animation State], page 198 to set the use of animation on the entity in question.
Skeletal animation can be performed in software, or implemented in shaders (hardware
skinning). Clearly the latter is preferable, since it takes some of the work away from the
CPU and gives it to the graphics card, and also means that the vertex data does not need
to be re-uploaded every frame. This is especially important for large, detailed models. You
should try to use hardware skinning wherever possible; this basically means assigning a
material which has a vertex program powered technique. See [Skeletal Animation in Vertex
Programs] for more details. Skeletal animation can be combined with vertex animation;
see Section 8.3.3 [Combining Skeletal and Vertex Animation], page 202.

8.2 Animation State


When an entity containing animation of any type is created, it is given an ’animation state’
object per animation to allow you to specify the animation state of that single entity (you
can animate multiple entities using the same animation definitions; OGRE sorts out the
reuse internally).

You can retrieve a pointer to the AnimationState object by calling
Entity::getAnimationState. You can then call methods on this returned object to update
the animation, probably in the frameStarted event. Each AnimationState needs to be
enabled using the setEnabled method before the animation it refers to will take effect,
and you can set both the weight and the time position (where appropriate) to affect the
application of the animation using correlating methods. AnimationState also has a very
simple method ’addTime’ which allows you to alter the animation position incrementally,
and it will automatically loop for you. addTime can take positive or negative values (so
you can reverse the animation if you want).
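Putting those calls together, driving an animation from frame listener code might look like the following sketch (the entity variable, the "Walk" animation name and the frame event are illustrative, not from the manual):

```cpp
// Assumes 'ent' is an Ogre::Entity whose skeleton defines a "Walk"
// animation (names are illustrative).
Ogre::AnimationState* walkState = ent->getAnimationState("Walk");
walkState->setEnabled(true);  // animations have no effect until enabled
walkState->setWeight(1.0f);   // blend weight when multiple animations run

// ...then once per frame, e.g. in your FrameListener's frameStarted:
walkState->addTime(evt.timeSinceLastFrame); // loops automatically;
                                            // negative values play in reverse
```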

8.3 Vertex Animation


Vertex animation is about using information about the movement of vertices directly to
animate the mesh. Each track in a vertex animation targets a single VertexData instance.
Vertex animation is stored inside the .mesh file since it is tightly linked to the vertex
structure of the mesh.
There are actually two subtypes of vertex animation, for reasons which will be discussed
in a moment.
Section 8.3.1 [Morph Animation], page 201
Morph animation is a very simple technique which interpolates mesh snapshots
along a keyframe timeline. Morph animation has a direct correlation to old-
school character animation techniques used before skeletal animation was widely
used.
Section 8.3.2 [Pose Animation], page 201


Pose animation is about blending multiple discrete poses, expressed as offsets
to the base vertex data, with different weights to provide a final result. Pose
animation’s most obvious use is facial animation.

Why two subtypes?


So, why two subtypes of vertex animation? Couldn’t both be implemented using the same
system? The short answer is yes; in fact you can implement both types using pose animation.
But for very good reasons we decided to allow morph animation to be specified separately
since the subset of features that it uses is both easier to define and has lower requirements
on hardware shaders, if animation is implemented through them. If you don’t care about
the reasons why these are implemented differently, you can skip to the next part.

Morph animation is a simple approach where we have a whole series of snapshots of


vertex data which must be interpolated, e.g. a running animation implemented as morph
targets. Because this is based on simple snapshots, it’s quite fast to use when animating an
entire mesh because it’s a simple linear change between keyframes. However, this simplistic
approach does not support blending between multiple morph animations. If you need
animation blending, you are advised to use skeletal animation for full-mesh animation, and
pose animation for animation of subsets of meshes or where skeletal animation doesn’t fit -
for example facial animation. For animating in a vertex shader, morph animation is quite
simple, requiring just two vertex buffers of absolute position data (one being the original
position buffer) and an interpolation factor. Each track in a morph animation references a
unique set of vertex data.

Pose animation is more complex. Like morph animation each track references a single
unique set of vertex data, but unlike morph animation, each keyframe references 1 or more
’poses’, each with an influence level. A pose is a series of offsets to the base vertex data,
and may be sparse - i.e. it may not reference every vertex. Because they’re offsets, they
can be blended - both within a track and between animations. This set of features is very
well suited to facial animation.

For example, let’s say you modelled a face (one set of vertex data), and defined a set of
poses which represented the various phonetic positions of the face. You could then define
an animation called ’SayHello’, containing a single track which referenced the face vertex
data, and which included a series of keyframes, each of which referenced one or more of the
facial positions at different influence levels - the combination of which over time made the
face form the shapes required to say the word ’hello’. Since the poses are only stored once,
but can be referenced many times in many animations, this is a very powerful way to build
up a speech system.
The downside of pose animation is that it can be more difficult to set up, requiring
poses to be separately defined and then referenced in the keyframes. Also, since it uses
more buffers (one for the base data, and one for each active pose), if you’re animating in
hardware using vertex shaders you need to keep an eye on how many poses you’re blending
at once. You define a maximum supported number in your vertex program definition,
via the includes pose animation material script entry, See hundefinedi [Pose Animation in
Vertex Programs], page hundefinedi.
So, by partitioning the vertex animation approaches into 2, we keep the simple morph
technique easy to use, whilst still allowing all the powerful techniques to be used. Note
that morph animation cannot be blended with other types of vertex animation on the same
vertex data (pose animation or other morph animation); pose animation can be blended
with other pose animation though, and both types can be combined with skeletal animation.
This combination limitation applies per set of vertex data though, not globally across the
mesh (see below). Also note that all morph animation can be expressed (in a more complex
fashion) as pose animation, but not vice versa.

Subtype applies per track


It’s important to note that the subtype in question is held at a track level, not at the
animation or mesh level. Since tracks map onto VertexData instances, this means that if
your mesh is split into SubMeshes, each with their own dedicated geometry, you can have
one SubMesh animated using pose animation, and others animated with morph animation
(or not vertex animated at all).

For example, a common set-up for a complex character which needs both skeletal and
facial animation might be to split the head into a separate SubMesh with its own geometry,
then apply skeletal animation to both submeshes, and pose animation to just the head.

To see how to apply vertex animation, See Section 8.2 [Animation State], page 198.

Vertex buffer arrangements


When using vertex animation in software, vertex buffers need to be arranged such that
vertex positions reside in their own hardware buffer. This is to avoid having to upload all
the other vertex data when updating, which would quickly saturate the GPU bus. When
using the OGRE .mesh format and the tools / exporters that go with it, OGRE organises
this for you automatically. But if you create buffers yourself, you need to be aware of the
layout arrangements.

To do this, you have a set of helper functions; see the API Reference entries for
Ogre::VertexData::reorganiseBuffers() and Ogre::VertexDeclaration::getAutoOrganisedDeclaration().
The latter will turn a vertex declaration into one which is recommended for the usage
you’ve indicated, and the former will reorganise the contents of a set of buffers to conform
to that layout.
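As a rough sketch of how the two functions combine (assuming 'mesh' is a loaded Ogre::MeshPtr with shared geometry; check the API reference for the exact signatures in your OGRE version):

```cpp
Ogre::VertexData* vdata = mesh->sharedVertexData;

// Build a declaration recommended for the animation types in use
Ogre::VertexDeclaration* newDecl =
    vdata->vertexDeclaration->getAutoOrganisedDeclaration(
        true,   // mesh uses skeletal animation
        true);  // mesh uses vertex animation

// Rebuild the buffers so their contents match the new layout,
// with positions in their own hardware buffer
vdata->reorganiseBuffers(newDecl);
```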
8.3.1 Morph Animation


Morph animation works by storing snapshots of the absolute vertex positions in each
keyframe, and interpolating between them. Morph animation is mainly useful for ani-
mating objects which could not be adequately handled using skeletal animation; this is
mostly objects that have to radically change structure and shape as part of the animation
such that a skeletal structure isn’t appropriate.

Because absolute positions are used, it is not possible to blend more than one morph
animation on the same vertex data; you should use skeletal animation if you want to include
animation blending since it is much more efficient. If you activate more than one animation
which includes morph tracks for the same vertex data, only the last one will actually take
effect. This also means that the ’weight’ option on the animation state is not used for
morph animation.

Morph animation can be combined with skeletal animation if required; see Section 8.3.3
[Combining Skeletal and Vertex Animation], page 202. Morph animation can also be
implemented in hardware using vertex shaders; see [Morph Animation in Vertex
Programs].

8.3.2 Pose Animation


Pose animation allows you to blend together potentially multiple vertex poses at different
influence levels into a final vertex state. A common use for this is facial animation, where
each facial expression is placed in a separate animation, and influences used to either blend
from one expression to another, or to combine full expressions if each pose only affects part
of the face.

In order to do this, pose animation uses a set of reference poses defined in the mesh,
expressed as offsets to the original vertex data. It does not require that every vertex has
an offset - those that don’t are left alone. When blending in software these vertices are
completely skipped - when blending in hardware (which requires a vertex entry for every
vertex), zero offsets for vertices which are not mentioned are automatically created for you.

Once you’ve defined the poses, you can refer to them in animations. Each pose animation
track refers to a single set of geometry (either the shared geometry of the mesh, or dedicated
geometry on a submesh), and each keyframe in the track can refer to one or more poses,
each with its own influence level. The weight applied to the entire animation scales these
influence levels too. You can define many keyframes which cause the blend of poses to
change over time. The absence of a pose reference in a keyframe when it is present in a
neighbouring one causes it to be treated as an influence of 0 for interpolation.
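The API side of defining a pose and referencing it from a keyframe can be sketched as follows (the vertex index, names, lengths and influence values are invented for illustration):

```cpp
// A pose is a sparse set of offsets; target 0 means the shared geometry
Ogre::Pose* smile = mesh->createPose(0, "Smile");
smile->addVertex(84, Ogre::Vector3(0.0f, 0.5f, 0.0f)); // offset one vertex only

// An animation whose track blends poses over time
Ogre::Animation* anim = mesh->createAnimation("SayHello", 2.0f);
Ogre::VertexAnimationTrack* track =
    anim->createVertexTrack(0, Ogre::VAT_POSE);

// Keyframe at t=1.0s referencing pose index 0 at 75% influence
Ogre::VertexPoseKeyFrame* kf = track->createVertexPoseKeyFrame(1.0f);
kf->addPoseReference(0, 0.75f);
```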
You should be careful how many poses you apply at once. When performing pose
animation in hardware (see [Pose Animation in Vertex Programs]), every active pose
requires another vertex buffer to be added to the shader, and when animating in software
it will also take longer the more active poses you have. Bear in mind that if you have 2
poses in one keyframe, and a different 2 in the next, that actually means there are 4 active
poses when interpolating between them.

You can combine pose animation with skeletal animation (see Section 8.3.3 [Combining
Skeletal and Vertex Animation], page 202), and you can also hardware accelerate the
application of the blend with a vertex shader; see [Pose Animation in Vertex Programs].

8.3.3 Combining Skeletal and Vertex Animation


Skeletal animation and vertex animation (of either subtype) can both be enabled on the
same entity at the same time (See Section 8.2 [Animation State], page 198). The effect of this
is that vertex animation is applied first to the base mesh, then skeletal animation is applied
to the result. This allows you, for example, to facially animate a character using pose vertex
animation, whilst performing the main movement animation using skeletal animation.

Combining the two is, from a user perspective, as simple as just enabling both animations
at the same time. When it comes to using this feature efficiently though, there are a few
points to bear in mind:
• [Combined Hardware Skinning], page 202
• [Submesh Splits], page 203

Combined Hardware Skinning


For complex characters it is a very good idea to implement hardware skinning by including
a technique in your materials which has a vertex program which can perform the kinds
of animation you are using in hardware. See [Skeletal Animation in Vertex Programs],
[Morph Animation in Vertex Programs], and [Pose Animation in Vertex Programs].

When combining animation types, your vertex programs must support both types of
animation that the combined mesh needs, otherwise hardware skinning will be disabled.
You should implement the animation in the same way that OGRE does, i.e. perform vertex
animation first, then apply skeletal animation to the result of that. Remember that the
implementation of morph animation passes 2 absolute snapshot buffers of the from & to
keyframes, along with a single parametric value, which you have to linearly interpolate,
whilst pose animation passes the base vertex data plus ’n’ pose offset buffers and ’n’
parametric weight values.
Submesh Splits
If you only need to combine vertex and skeletal animation for a small part of your mesh,
e.g. the face, you could split your mesh into 2 parts, one which needs the combination and
one which does not, to reduce the calculation overhead. Note that it will also reduce vertex
buffer usage since vertex keyframe / pose buffers will also be smaller. Note that if you use
hardware skinning you should then implement 2 separate vertex programs, one which does
only skeletal animation, and the other which does skeletal and vertex animation.

8.4 SceneNode Animation


SceneNode animation is created from the SceneManager in order to animate the movement
of SceneNodes, to make any attached objects move around automatically. You can see this
in action performing a camera swoop in Demo CameraTrack, or controlling how the fish
move around in the pond in Demo Fresnel.

At its heart, scene node animation is mostly the same code which animates the underlying
skeleton in skeletal animation. After creating the main Animation using SceneManager::createAnimation
you can create a NodeAnimationTrack per SceneNode that you want
to animate, and create keyframes which control its position, orientation and scale, which can
be interpolated linearly or via splines. You use Section 8.2 [Animation State], page 198 in the
same way as you do for skeletal/vertex animation, except you obtain the state from
SceneManager instead of from an individual Entity. Animations are applied automatically every
frame, or the state can be applied manually in advance using the applySceneAnimations()
method on SceneManager. See the API reference for full details of the interface for
configuring scene animations.
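The sequence described above can be sketched like this (the scene manager, node variable, names, times and positions are illustrative):

```cpp
// 10 second spline-interpolated sweep for a node
Ogre::Animation* anim = sceneMgr->createAnimation("CameraSweep", 10.0f);
anim->setInterpolationMode(Ogre::Animation::IM_SPLINE);

// One track per node to animate; '0' is just the track handle
Ogre::NodeAnimationTrack* track = anim->createNodeTrack(0, cameraNode);
track->createNodeKeyFrame(0.0f)->setTranslate(Ogre::Vector3(0, 50, 200));
track->createNodeKeyFrame(5.0f)->setTranslate(Ogre::Vector3(200, 80, 0));
track->createNodeKeyFrame(10.0f)->setTranslate(Ogre::Vector3(0, 50, 200));

// Drive it like any other animation, but via the SceneManager
Ogre::AnimationState* state = sceneMgr->createAnimationState("CameraSweep");
state->setEnabled(true);
// ...then call state->addTime(dt) each frame
```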

8.5 Numeric Value Animation


Apart from the specific animation types which may well comprise the most common uses of
the animation framework, you can also use animations to alter any value which is exposed
via the [AnimableObject], page 203 interface.

AnimableObject
AnimableObject is an abstract interface that any class can extend in order to provide access
to a number of [AnimableValue], page 204s. It holds a ’dictionary’ of the available animable
properties which can be enumerated via the getAnimableValueNames method, and when
its createAnimableValue method is called, it returns a reference to a value object which
forms a bridge between the generic animation interfaces, and the underlying specific object
property.

One example of this is the Light class. It extends AnimableObject and provides Ani-
mableValues for properties such as "diffuseColour" and "attenuation". Animation tracks
can be created for these values and thus properties of the light can be scripted to change.
Other objects, including your custom objects, can extend this interface in the same way to
provide animation support to their properties.
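As an illustration of the Light example, a numeric track could script the diffuse colour something like this (the animation name, timings and colours are invented for the sketch):

```cpp
// Bridge object for the light's "diffuseColour" property
Ogre::AnimableValuePtr value = light->createAnimableValue("diffuseColour");

Ogre::Animation* anim = sceneMgr->createAnimation("LightPulse", 4.0f);
Ogre::NumericAnimationTrack* track = anim->createNumericTrack(0, value);

// Fade red -> blue -> red over 4 seconds
track->createNumericKeyFrame(0.0f)->setValue(
    Ogre::AnyNumeric(Ogre::ColourValue::Red));
track->createNumericKeyFrame(2.0f)->setValue(
    Ogre::AnyNumeric(Ogre::ColourValue::Blue));
track->createNumericKeyFrame(4.0f)->setValue(
    Ogre::AnyNumeric(Ogre::ColourValue::Red));

Ogre::AnimationState* state = sceneMgr->createAnimationState("LightPulse");
state->setEnabled(true);
```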

AnimableValue
When implementing custom animable properties, you have to also implement a number
of methods on the AnimableValue interface - basically anything which has been marked
as unimplemented. These are not pure virtual methods simply because you only have to
implement the methods required for the type of value you’re animating. Again, see the
examples in Light to see how this is done.