
Will VR kill CG art?
Explore the potential of virtual reality

October 2015

Star Wars matte painting
Pro training to recreate an Episode VII scene


5GB of resources

Video walkthroughs
Textures and meshes
Models and setup files

The Time Marine

The rise of TV VFX
How TV's greatest VFX are created:
Game of Thrones
The Flash
Walking Dead








Discover how small-screen VFX
is making a big impression

Subscribe & save up to 59%

This issue we gather some of the top talent working in TV VFX: from The Flash to Game of Thrones, discover why small-screen CG is on the rise. Plus, on page 36, Artifex Studios' Adam Stern reveals how he created this issue's cover, the Time Marine from hit sci-fi show Continuum. The TV VFX theme flows into this issue's tutorials too, with training for matte painting (page 54), the creation of a man made of fire (page 60) and advice to simulate a famous VFX shot from Game of Thrones (page 64). Looking further ahead, in this issue's Develop (page 87) we step into the world of VR and discover what this new technology has in store for CG artists, and maybe the future of VFX.

Ian Dean, editor





3D World October 2015

ZBrush modelling
How to quickly model
a character and more
advice in Artist Q&A,
turn to page 30

Issue 199


Our complete line-up for this month's 3D World

Get the latest magazine for

free in our current offer or
download a back issue on
iPad and iPhone today!

6 Free downloads
Get your hands on 5GB of assets via our Vault download system

8 Artist showcase
Discover the best new digital art and more from the CG world

17 Community
18 Siggraph animation
Is the best work getting seen?

22 Studio profile
We visit new CG studio axisVFX

10 Artist showcase
The best new art from the CG community

22 Studio profile: axisVFX
Creating movie-quality FX on a TV schedule

24 Tomorrow's world
Inside Rodeo FX's VFX for Tomorrowland

26 Short cuts
The making of Taking The Plunge

30 Artist Q&A
All your software queries solved by our panel of professional artists

36 The CG of Continuum
40 The rise of TV VFX

53 Tutorials
Improve your CG art skills

80 3D Maker
Explore 3D print art and trends

87 Develop
Theory, research and reviews

7 Next month
28 European subscriptions
79 US subscriptions
98 Digital back issues

36 Making Continuum's Time Marine
Artifex discusses its VFX for the sci-fi show, plus Adam Stern shares his process

40 The rise of TV VFX
How TV's greatest VFX shows are created

54 Star Wars matte painting
Discover how to create a stunning Episode VII-inspired image

60 Master the art of fire
How to create a man made of fire in 3ds Max and FumeFX

64 Create an epic VFX shot
Recreate a still in the style of Game of Thrones' Battle of Blackwater

54 Create a Star Wars matte painting
Double Negative's Saby Menyhei walks you through this Star Wars VII-inspired image

60 Master the art of fire
Expert advice for 3ds Max and FumeFX

70 Get started in Houdini
Bemused by the procedural tool? These beginner tips should help

74 Detail a character
Prepare an armoured knight for cinematic production

64 Recreate an epic Game of Thrones VFX shot
Simulate the massive explosion from the Battle of Blackwater

74 Detail a character
Prepare a model for cinematic production

3D Maker
80 Growth industry
Michael Winstone's organic artworks

82 3D print a prosthetic hand
Put your printing skills to medical use

80 Growth industry
3D printing's artistic success story

82 Discover how 3D printing is lending a hand
Download a complete prosthetic hand model this issue

88 Nuke techniques
Discover how to use performance timers to speed up your scripts

90 Building a virtual reality
How Nurulize is creating a new breed of virtual environments

96 Review: Renda PW-E7F
Overclockers enters the workstation market

96 Review: Renda PW-E7F
Overclockers' entry to the workstation market is a robust machine

97 Review: Unfold3D 9
The UV mapping tool gets a strong update with some great new tools

97 Review: PhotoScan 1.1.6
Agisoft's photogrammetry package proves a useful workflow tool

100 My inspiration
Saizen Media's Davice Bianca

90 Building a new virtual reality
Discover how Nurulize is creating a new type of virtual environment

97 Review: Unfold3D 9
The UV mapping tool gets a strong update


5GB of video & files in the Vault

Follow the link to download your free files
Recreate the famous Star Wars VII trailer shot!

Get your free files
You're three steps away from this issue's video training and files
1. Go to the website
Type this into your browser's address bar:
2. Find the files you want
Search the list of free resources to find the video and files you want.
3. Download what you need
Click the Download buttons and your files will save to your PC or Mac.


There are more files, art and resources waiting online

Video + setup files + references: matte painting
Download the Nuke and 3ds Max setup files plus reference materials

Video + setup files: fire sim
Resources for 3ds Max and FumeFX

Video: Follow Josh Parks' second Nuke tutorial with his video training
Files: Download the training for this issue's Artist Q&A tutorials
Artwork: Download the extra tutorial images to accompany Showcase
Model: Download the 3D print-ready hand model and tutorial files

Video: Houdini techniques
Video tips to get started in Houdini

Video + setup files: VFX
Video and setup files for 3ds Max

Video + textures: figure
Resources to model a ZBrush character

Subscribe today to get your next issue: www.bit.ly/3dworld-subs

contact us
3D World Magazine
Future Publishing, Quay House, The Ambury, Bath, BA1 1UA
Telephone: +44 (0) 1225 442244
Twitter: @3DWorldMag

Editorial
editor Ian Dean
art editor Darren Phillips
production editor Felicity Barr
group content editor Tom May
commissioning editor Julia Sagar
deputy commissioning editor Sammy Maine
staff writers Alice Pattillo, Dominic Carter

Contributors
Aiman Akhtar, Cirstyn Bech-Yagher, Anita Brown, Pietro Chiovaro, Carlos Cruz, Vikrant J Dalal, Alex Farrell, Mike Griggs, Kerrie Hughes, Steve Jarratt, Francis-Xavier Martins, Kieran Mckay, Kuksoom Middleton, James Morris, Josh Parks, Rob Redman, Jim Thacker, Alvin Weetman

Production & distribution
production controller Nola Cokely
production manager Mark Constance
Printed in the UK by: William Gibbons & Sons Ltd on behalf of Future.
Distributed by: Seymour Distribution Ltd, 2 East Poultry Avenue, London EC1A 9PT, Tel: 0207 429 4000
Overseas distribution by: Seymour International

Circulation
trade marketing manager Juliette Winyard 07551 150 984

Licensing
international director Regina Erak
+44 (0)1225 442244 Fax +44 (0)1225 732275

Management
content & marketing director Nial Ferguson
head of content & marketing, photography, creative & design Matthew Pierce
group editor-in-chief Dan Oliver
group art director Rodney Dive

Subscriptions
UK reader order line & enquiries: 0844 848 2852
Overseas reader order line & enquiries: +44 (0)1604 251045
Online enquiries: +44 (0) 207 042 4122

Advertising
account manager Suzanne Smith
advertising manager Sasha McGregor (0)1225 687675

Issue 200
Revealed! The 200 greatest VFX films of all time!
On sale 9 September. Get your special 200th issue!
Subscribe today:

Future is an award-winning international media group and leading digital business. We reach more than 49 million international consumers a month and create world-class content and advertising solutions for passionate consumers online, on tablet & smartphone and in print. Future plc is a public company quoted on the London Stock Exchange (symbol: FUTR).

Chief executive Zillah Byng-Maddick
Non-executive chairman Peter Allen
Chief financial officer Richard Haley
Tel +44 (0)207 042 4000 (London)
Tel +44 (0)1225 442 244 (Bath)

All contents copyright 2015 Future Publishing Limited or published under licence. All rights reserved. No part of this magazine
may be reproduced, stored, transmitted or used in any way without the prior written permission of the publisher. Future
Publishing Limited (company number 2008885) is registered in England and Wales. Registered office: Quay
House, The Ambury, Bath, BA1 1UA. All information contained in this publication is for information only and is, as far as we are
aware, correct at the time of going to press. Future cannot accept any responsibility for errors or inaccuracies in such information.
You are advised to contact manufacturers and retailers directly with regard to the price and other details of products or services
referred to in this publication. Apps and websites mentioned in this publication are not under our control. We are not
responsible for their contents or any changes or updates to them. If you submit unsolicited material to us, you automatically
grant Future a licence to publish your submission in whole or in part in all editions of the magazine, including licensed
editions worldwide and in any physical or digital format throughout the world. Any material you submit is sent at your risk and,
although every care is taken, neither Future nor its employees, agents or subcontractors shall be liable for loss or damage.




The best digital art from the CG community

Get published
Email your CG art to

Visit the online Vault to download extra process art for these projects:





The most difficult

part of the piece was
creating the feathers.
David used Hair Farm
to distribute them

artist David Ferreira

software ZBrush, 3ds Max,
V-Ray, Marvelous Designer,
Nuke, After Effects, Photoshop
"I really like the idea of a dimension between awake and asleep where these creatures exist, capturing dreams as if they were living entities themselves," says David Ferreira of his image Dreamcatchers Tribe. "Seeing that idea gain form through the characters was great. I didn't take it all the way, but I managed to come up with some interesting concepts, like the idea that there are good dreamcatchers, who capture nightmares, and bad dreamcatchers, who capture sweet dreams that make them fat."
David worked on the image over a 30-day period, taking close to 140 hours in total, while continuing his professional work as a freelance CG generalist and instructor. "The clothing of the big blue character was done with Marvelous Designer," he says. "After sculpting the character in the T-pose, I did a quick rig in 3ds Max and animated him moving into pose so I could simulate his clothes. The most complex detail was the feathers on one of the characters, which were created using planes with feather alphas. I distributed them with [3ds Max plug-in] Hair Farm, using several grey maps for size and density control."
You can see more of David's work at

3D World view
"I love the way the look of each character seems to reflect its personality. They're a winning mix of cute and scary."
Felicity Barr

Production editor




Berto says that taking

part in art workshops
helps to improve his
workflow and creativity.
He also likes to spend
a few hours in ZBrush
sketching out ideas
before embarking on
a new project

artist Berto Souza
software ZBrush, 3ds Max, Mudbox, Marvelous Designer
Berto Souza began experimenting with 3D five years ago, taking part in courses and workshops. He now works at TV Globo in Rio de Janeiro as a character artist, but continues to attend group sessions: work on the Maschinen Project began during one of artist Bruno Câmara's workshops.
After the workshop ended, Berto continued working on the project for a further three months. "It was really awesome to create this cyberpunk girl and all her accessories. They look really heavy for her to carry," he says. "I think the strong points of the work are the metals and the modelling of the face; I felt I achieved good results with both aspects."
Using ZBrush and 3ds Max for most of the work, Berto took to Mudbox for the textures and Marvelous Designer for the clothes, before doing the final composition of the image in Photoshop.
Citing other artists as one of his sources of inspiration, Berto says that it's important to stay in touch with people in the industry. "These relationships [provide you with] knowledge, which means better work," he says. "I'm always doing workshops to improve my workflow and exchange experiences."
You can see more of Berto's work at

I think the strong

points of the work are
the metals and the
modelling of the face



Far Far Away is the first

part in a three-part
Imagination series

Far Far Away


Peter used Fusion 360, Autodesk's cloud-based modelling platform, for the first time during the creation of Far Far Away. "I know people use it for professional services, but it allowed me to create a few extra details," he says

Peter Nowacki
software 3ds Max, V-Ray, Marvelous Designer
"It's astonishing that the image took two years to finish," says Warsaw-based artist Peter Nowacki, who produced Far Far Away as a celebration of children's imaginations. "It's the first part of a three-part project. Children are now focusing on video games instead of playing outside and I wanted to showcase a time when their imaginations were all they had."
Although he used 3ds Max and V-Ray for the work, Peter also dabbled with other engines, including OctaneRender and Corona Renderer. "I personally have more experience in V-Ray, but they were great," he says.
For cloth simulations, Peter used Marvelous Designer. "It's just perfect: user-friendly and fast!" he says. "For extra details, I created V-Ray hair and emitted a few particle systems."
Peter also used Fusion 360, Autodesk's cloud-based CAD tool, for the first time during this project. "I know that people are using it for professional services, but with my skills it allowed me to create a few extra details."
Peter says it's the final touches to a project that he relishes the most. "I love adding details, tweaking final renders, merging render passes and publishing the image. Feedback is the most powerful motivator!"
You can see more of Peter's work at


3D World view
"This certainly isn't a setting in which you normally expect to see a lightsaber. Peter uses it to symbolise the antique imagination that our kids are losing."
Ian Dean



Rocket and Groot

I think the key to personal

growth is to have some sort
of plan when practising
and creating personal art

Antone Magdy
software ZBrush, 3ds Max, DDO, Photoshop
Like many other artists, Antone Magdy embarked on this piece to improve his professional skills. "I think the key to personal growth is to have some sort of plan when practising and creating personal art," he says.
One of the challenges Antone set himself was learning a new piece of software: DDO, Quixel's texturing tool. "I was eager to learn about DDO and then take it all the way to be a real-time model," he says.
Having previously modelled characters for animated TV series, Antone currently works as a 3D character artist at Snappers in Egypt, where he focuses on facial blendshapes for in-game and cinematic models on a range of AAA titles. Rocket and Groot is based on the artwork of Javier Burgos (Magdy says that he's always been a fan of Rovio's lead artist) and proved a lot of fun to sculpt.
"I'm inspired by seeing the huge amount of work that artists around the world create," Antone enthuses. "It's always a big part of my day to scroll through art forums. It always gives me that push to be better, to challenge myself to try a lot of new things."
You can see more of Antone's work at

3D World view
"Antone takes Javier Burgos's cartoony style, and perfectly translates it into 3D. Rocket and Groot never looked cuter."
Darren Phillips

Art editor




News and views from around the
international CG community

20 The bear essentials
Behind the scenes on Ted 2's animation

18 Are the best animated shorts getting awarded?
This year's Siggraph nominees and past winners share their thoughts

22 Studio profile: axisVFX
Creating movie-quality VFX for Doctor Who

24 Discover the VFX behind Tomorrowland

26 Short cuts


Rodeo FX reveals the secrets of its rocket-propelled VFX for Brad Bird's sci-fi movie


The making of student short Taking The Plunge

Community
The big issue

Do festivals reward the right work?

As Siggraph's Computer Animation Festival hits Los Angeles, Tom May asks industry insiders their views on the festival selection process

Every August the industry's great and good get together for Siggraph, the annual gathering of global CG and VFX artists. And all eyes are on the work being showcased in the Computer Animation Festival, from students and veterans alike.
But who gets selected for such events, and is the right kind of work being rewarded? We spoke with some of those whose work is being screened at Siggraph to find out more.
"I think the majority of the work chosen for animation festivals are projects that make anyone in this industry go wow," says Rick Thiele of Red Knuckles. That was clearly the case with his studio's CG short Dark Noir, with a storyline crowd-sourced by Facebook fans worldwide. But other, less headline-grabbing work can get missed. "Personally, I wish more student films would be chosen; the underdogs who don't have a massive studio and infrastructure behind them, but still manage to produce very strong and influential animations," he adds.
Siggraph makes a point of honouring student work of course, such as Jinxy Jenkins and Lucky Lou, made at Ringling College of Art and Design, which won Best Computer Animated Short. Michael Bidinger, who co-directed the film with Michelle Kwon and is now interning at Pixar Studios, was obviously pleased. "For the industry, awards are a big influence in what gets watched and what gets taught," he says. "They can shape the future of what's created." But he feels it's important they're selected for the right reasons. "It's not a good thing if the judging criteria for Best in Show awards become 'Most Obvious Three-Act Structure' or 'Most Conventions Broken'."


Michael also feels some festivals put too much emphasis on the technical side. "The work I like to see rewarded are those films that take chances with honest intent," he says. "There's a lot of work out there meant to push limits, but not in the technical aspects.

"It's not a good thing if the judging criteria becomes 'Most Obvious Three-Act Structure' or 'Most Conventions Broken'"
Michael Bidinger, animation intern, Pixar Animation Studios

"There's so much experimental animation that goes unrecognised because it's not industry standard, or up to par with feature animation aesthetics. But I think everyone responds well to stories and films with heart, no matter what they look like."

Industry insiders
Thoughts & opinions from the experts

(Left) Jinxy Jenkins & Lucky Lou, winner of Siggraph 2015's Best Computer Animated Short; Dark Noir, a crowd-sourced short

Not all festivals have the same selection criteria either, and there's certainly no foolproof formula for getting shown. "Despite winning Best in Show at Siggraph, my film Citius, Altius, Fortius didn't even get invited to a lot of other festivals," points out German animator Felix Deimann. "In my opinion there's always a little luck involved."
It's also worth doing some research to find out what kind of work a particular festival favours. "Certain creative categories don't appreciate the kind of work we do, while others, more technical and visual-oriented ones, love it," reveals Alex Sándor Rabb, MD of Digic Pictures, which created the Assassin's Creed Unity trailer, winner of the Best Game award. "We know that our work is usually not about a great creative idea but about how we visualise and tell a story," Alex adds. "Our intention is to attach viewers to the characters, and get an emotional response."
If entering festivals sounds like a lottery, it can pay huge dividends. "For me personally, festivals and awards have been a huge help," says Felix. "I've had the opportunity to travel to great places and meet a lot of great people. For young professionals like me it's a good way to get connected with people from the industry."

There's nothing like the buzz of getting your film screened at an event of your peers, let alone winning a prestigious award. "Awards represent the audience," says Pierre Jury, one of four directors of L3.0, a charming film about a lonely robot made at animation school ISART Digital. The short won the team Best Student Project at Siggraph. "Winning one means a lot to us because it shows that people liked the film," says Pierre, before adding: "This is probably the best award in the world!"
While awards can be a nice icing on the cake, though, they shouldn't be mistaken for the cake itself. "It's always nice to have the work recognised in a formal scheme," says Pablo Grillo, Framestore's animation supervisor on Paddington, which won Best Live Effects for a Live-Action Feature Film at Siggraph. "But while awards can be seductive they're to be taken with a pinch of salt.
"So many pieces of work in competition deserve merit for different reasons and there's often such a diversity of great work that it's hard to single any one piece out. For me the real reward comes from the experience of doing the work and sharing that experience with the crew."
FYI For more details of the Siggraph Animation Festival, visit


Rick Thiele
Creative director, Red Knuckles

Michael Bidinger
Animation intern, Pixar Animation Studios

Alex Sándor Rabb
Managing director and producer, Digic Pictures

The key to a
successful festival
entry is planning
your time well and
meeting initial
approval deadlines,
although this
sometimes requires
a loud voice and
thick skin. Story-wise,
telling, showing or
explaining everything
you can of the plot
and characters in
detail seems right
on paper. But once
it starts being
translated to the
screen, you often
find its better to
hint and suggest
instead, leaving a lot
of the story up to the
viewer's imagination
and assumptions.

By no means did
we know that our
film would end up
the way it is. It was
a pretty crazy ride
and by the end of it,
we had no objective
sense of what
condition the film
was in. It's a miracle
that what we landed
on has had such
a warm reception
with audiences.
So our advice to
anyone trying to do
something similar
is to let go of the
cart sometimes
and let things play
out how they will.
There were a lot of
lucky accidents
while making this
film. The recognition
is immensely
comforting and

Winning an award means that we're gratified that all the efforts we've put into a project are recognised. Of course, it also serves to attract new clients and helps broaden our existing relationships. There could be a danger that getting an award makes you think you don't have to improve and learn new things. However, you have to constantly push creative boundaries; in this fast-changing industry, you're only as good as your last work. It doesn't matter that you've won an award in the past if you drop the ball and don't deliver in the present.

L3.0 is a genuinely heart-rending short made by students at ISART

Digital, the video game and 3D animation school in Paris

To get the other

bears to match
Ted, their muzzles
and eyebrows were
replaced using 2D
projection in Nuke

Community
VFX interview

A rude awakening down under

Alice Pattillo finds out how the crude talking teddy bear was brought to life by Australian studio Iloura in the comedy sequel, Ted 2

With a larger-than-life personality, it's quite the challenge to match Seth MacFarlane's charismatic voice to a relatively unexpressive bear's face, but Australian VFX and animation studio Iloura managed to hit the nail on the head with Ted's second coming.
Fortunately, the studio also worked on the original movie, so knew what to expect in terms of animation and pipeline. For initial motion capture (Seth MacFarlane not only voices, but acts Ted's movements) a system called Moven was used on set: essentially a jacket that provided reference for Seth's upper body, giving no facial feedback or legs. This meant the team used captured video reference as well, using this as the main point of comparison as it contained a lot of subtle detail that sometimes the mocap didn't pick up.

Glenn Melenhorst
Glenn is VFX Supervisor
at Iloura, an animation
and visual effects studio
located in Australia.

"I feel Ted is more of a keyframe show than a mocap show," says VFX supervisor Glenn Melenhorst. "The mocap provided our team with fantastic reference, and our animators were able to add the detail."
With three years between the first instalment and its sequel, huge

"We completely reworked our cloth pipeline, using Marvelous Designer to build as well as simulate our clothing"

Ted required multiple lighting passes to ensure his grubby fur looked realistic

software and hardware improvements had been made. "In Ted 1 we used a hybrid pipeline of ray tracing in V-Ray and Reyes in 3Delight," reveals Glenn. "This time round we still opted for a hybrid approach but pushed


more emphasis on 3Delight. We also completely reworked our cloth pipeline, using Marvelous Designer to build as well as simulate our clothing. This gave a very robust and realistic result."
Early on in Ted the team established the golden rules for Ted's acting. These combat Ted's limited range of facial expressions: "We had to develop our own language in terms of how Seth's performance applied to Ted's face. For example, subtle shifts in the brows and cheek area were often critical to giving Ted a real-life talking teddy bear feel without making his face overly animated."
This time around, Ted required a lot more complex animation. "Ted wears more clothes in this movie and interacts more with his environment, items and food. So we needed to work a lot more on simulations and FX, natural interactions and so on."
Glenn admits the shot where Ted fights the goose was probably the most challenging. "It's a long shot, we had a tight turnaround on it and Seth was actively blocking out ideas through to the last week. To achieve it in time, we divided it up between animators and spliced the acting together, which helped, but it had ramifications on the pipeline; we had to re-render as new animation came down the pipe, and create new composites. So I'm proud of our team, they pulled it off on time!"
FYI For more about Iloura and to see their showreel, visit

Community
Studio profile

axisVFX: Fast, furious and design-led

The boutique UK VFX studio tells Tom May how it meets the challenging demands of high-end TV work

Joe Thornley Heard
FX lead at axisVFX, Joe previously worked at Double Negative, Framestore and Jim Henson's Creature Shop.

When you think of Doctor Who's special effects you think of Milk, whom we chat to this issue as part of our big feature on television VFX (page 40). But while Milk may be the lead vendor on the sci-fi show, others get brought in for the odd episode too. And when axisVFX got the call, it was for their dream job: a whole new type of monster, one that had never been seen on the series before.
The episode, Flatline, contains 66 VFX shots from the company, which has offices across Bristol and Glasgow, including a new multi-dimensional villain called The Boneless.
The axisVFX crew not only created the visual effects for the shots but also conceived, designed and developed the look and performance of this new impossible creature.
"Designing creatures from another dimension was a real creative high point for us," says FX lead Joe Thornley Heard excitedly. "It's not often that such a unique-looking, effects-driven character design comes up, and bringing it to fruition was very rewarding."
Although the studio is just two years old, the team that formed axisVFX has

"It's fast and furious doing TV work, but the team here are great to work with and very open to ideas"


RichaRd Scott
As managing director
of axisVFX, Richard
is responsible for the
direction and strategy
of the business,
development of
new business and
relationships, growth,
and building the best
possible team.


Launched as an offshoot of Axis Animation, axisVFX takes a design-led approach to its TV and film work

worked on many other British TV shows including Misfits, Call the Midwife, Survivors and Wizards vs Aliens, as well as feature films such as Shaun the Sheep Movie.
The studio was originally launched in July 2013, when Axis Animation (whom we profiled back in issue 185) teamed up with VFX supervisors Grant Hewlett and Howard Jones to launch a new boutique, high-end visual effects facility. "The ethos behind axisVFX was (and remains) a focus on creativity," says its managing director, Richard Scott.
"Everyone at axisVFX takes a very design-led approach, borrowing from our animation heritage where references, design and concept work are imperative parts of the process," he explains. "You always get to speak with the creative and technical people who are hands-on doing the work."
As an artist, Joe Thornley Heard appreciates the approach. "Working at Axis is great," he says. "Coming from a film background it's definitely fast and furious doing TV work, but I'm enjoying having more variety in what I do and having more input on the creative side. The team here are great to work with and very open to ideas."


Office briefing

Producing high-end CGI and visual

effects for film and TV, axisVFX is owned
and run by craftspeople, whose goal is
to provide greater flexibility, accessibility
and creative control to filmmakers. It
believes in demystifying the process for
producers and directors, creating visual
effects that support the story, whilst
adding drama and scale to productions.
The axisVFX ethos is one of creative
intelligence, solving VFX production
challenges through clever design that
utilises available budget for maximum
effect. Between them, the axisVFX
supervisors, artists and production staff
have decades of experience and have
worked on a wide range of productions
including The Chronicles of Narnia: Prince
Caspian, Fred Claus, Hellboy, Doctor Who,
Charlie and the Chocolate Factory, Clash
of the Titans and the Harry Potter series.


Location: Bristol and Glasgow
Team size:
Known for: Visual effects, CGI, live action
MD: Richard Scott

Grant Hewlett
VFX supervisor at axisVFX, Grant has previously worked at Aardman Animations, The Senate VFX, MPC and Cinesite, among others.

The studio works on a Nuke and Houdini pipeline, predominantly on Linux. "I render in [Houdini's] Mantra renderer," says Joe. "Any compositing is done in Nuke. Hardware-wise, I'm running Linux on a 12-core workstation with 64GB of RAM and an Nvidia GeForce [GPU]."
For asset management, and to distribute jobs to its small renderfarm, the studio uses a range of off-the-shelf tools. "We're transitioning from Shotgun to ftrack currently and we use Deadline for all of our farm jobs," explains Joe. "Pretty much everything is glued together with Python in one form or another."
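The "glued together with Python" approach Joe describes typically means small wrapper scripts that shell out to each tool in turn. As a minimal illustrative sketch only (not axisVFX's actual pipeline; the scene name is a hypothetical stand-in and the render command here just echoes via the Python interpreter rather than calling a real renderer):

```python
import subprocess
import sys

def render_frames(first, last, scene="shot010.hip"):
    """Launch one command-line render per frame and collect the output.

    The command below is a stand-in that echoes via the Python
    interpreter; real pipeline glue would invoke a renderer binary
    (a Houdini batch render, or Nuke in terminal mode) per frame.
    """
    results = []
    for frame in range(first, last + 1):
        cmd = [sys.executable, "-c",
               f"print('rendered {scene} frame {frame}')"]
        # check=True raises if the "renderer" fails, so a bad frame
        # stops the job rather than silently producing a hole
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        results.append(out.stdout.strip())
    return results

print(render_frames(1, 3))
```

In a real farm setup the loop body would instead hand each frame off to the queue manager, but the shape of the glue (build a command, run it, check the result) stays the same.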
The artists' workflows vary according to the job in hand. "When developing a new look for something, it's like sketching," says Joe. "I'm trying new ideas and discarding them, but generally working in a linear fashion towards a particular visual goal. When I'm in shot production mode, it's all about multitasking and efficiency."
The company is always looking for talented and committed artists, says Grant Hewlett; currently, Nuke compositors, Houdini FX artists and pipeline developers in particular. But it's not just about software knowledge per se. "While technical ability and experience are very important, we value a dedicated and passionate approach as highly," he says.
If that sounds like you, Grant urges you to consider applying. "The people I work with are the thing that makes it for me," he says. "We have really talented people on our teams that amaze me on an almost daily basis and our clients are great people too. I love the collaboration and striving to produce the best work possible."
To learn more about axisVFX and see a range of its work,
FYi visit
3D World October 2015


Doctor Who
Douglas Mackinnon, director of the Doctor Who episode Flatline, asked axisVFX to create a two-dimensional alien race called The Boneless. After an intensive design and concept exploration stage, the studio used everything from motion capture, 3D scanning and 3D printing to particle simulation and complex image processing to bring the terrifyingly weird creatures to life.

Wizards vs Aliens
axisVFX has worked on all three seasons of this children's fantasy series. For season 3, FX lead Joe Thornley Heard created a range of new Houdini tools that could be used quickly and effectively by the FX team in Bristol and Glasgow to manipulate the particle and fluid simulations for complex magical effects.

Eve
Eve is a BBC sci-fi drama that follows Eve, a teenage girl who is the result of an advanced robotics experiment. Among other effects, axisVFX finalised Eve's external control system and how she would be able to remove some of her limbs. The team also designed and animated a robotic spider character.

Community
Industry interview

The bathtub escape sequence blends the miniature footage with a digital environment
Tomorrow's world
Alice Pattillo speaks to Rodeo FX about the secrets of Tomorrowland's rocket-propelled visual effects

Ara Khanikian
Ara is VFX supervisor and head of 2D at Rodeo FX. He has worked on over 50 feature films during the course of his career, including such high-profile projects as Now You See Me and 300.

Despite a disappointing performance at the box office, Tomorrowland undeniably boasts some stellar visual effects. Working in collaboration with lead effects vendor Industrial Light & Magic, Rodeo FX produced over 50 VFX shots for Brad Bird's science-fiction adventure, including the shot shown at the top of the page, in which young inventor Frank Walker (played in adult form in the movie by George Clooney) learns to fly with a homemade jetpack.
"Tomorrowland was another great collaboration with ILM for Rodeo FX," says VFX supervisor Ara Khanikian. "It allowed us to tap into the depth and breadth of talent at our studio. For example, it's neat that we can still build a model, film a practical explosion in our studio, and then pass the footage on to another department to integrate it into the digital sequence."
In the sequence in question, the central characters narrowly escape their farmhouse on a rocket-propelled bathtub before it explodes. "We were given live plates showing the escape pod landing in the lake and bobbing

The live footage of the pod landing, before the scene is extended and CG added



up to the surface. We had to extend the camera in CG at the head of the shots to create an environment that didn't exist. We started with a 2D concept, elevating the camera and adding a farmhouse and land around it," reveals Ara.
The team used Photoshop for the environments, Houdini for simulation, Softimage for 3D elements, Flame and Nuke for compositing, and SpeedTree to create trees, blending the digital effects with practical footage.

"It's neat that we can still film a practical explosion in our studio to integrate into the digital effects"
"To capture the house explosion we built a maquette five feet wide and four feet tall, then loaded it with gunpowder and naphthalene to create a realistic explosion," says Ara. "Two cameras recorded the explosion against a greenscreen."
The biggest challenge, reveals Ara, was "getting the proper scale and realism [for] the miniature and making the explosion look realistic. We're really happy with the way it turned out."
FYI Tomorrowland is out now. Visit Rodeo FX at

Fabric for RenderMan

Fabric Software is collaborating with MPC to integrate Fabric Engine, its framework for developing high-performance VFX tools, with RenderMan. The Fabric for RenderMan plug-in enables technical artists to access Pixar's award-winning renderer directly within Fabric.

Instant results
Users can now create tools in Fabric that interact with RenderMan's RIS live renders. "This gives artists near-instantaneous [visual feedback] that allows them to iterate more freely and make better creative decisions," says MPC global head of VFX operations, Damien Fagnou.

Infinite potential
With RenderMan and Fabric Engine both free for non-commercial use, anyone can get up and running with the technology. Fabric for RenderMan provides full binding to RenderMan's interface and control APIs, giving users access to these functions in Fabric and enabling efficient workflows for tasks like lighting and shader authoring.

Community
Short Cuts

Short cuts

SVA students take the plunge into Pixar territory

get published
email your short to

Taking the Plunge offers a sentimental splash of Disney with both its heartfelt narrative and quirky animation. Elizabeth Ku-Herrero delves into its depths
From a bi-racial background of Spanish and Chinese, Elizabeth was heavily influenced by older brothers who introduced her to Star Wars, Newgrounds and South Park. She worked as co-creator, look development artist and creative director on Taking the Plunge.


Thaddaeus Andreades,
Nicholas Manfredi,
Marie Raoult,
Elizabeth Ku-Herrero
Maya, ZBrush, Nuke, Photoshop, Mudbox, Arnold, ProTools and Premiere
Production time
10 months

For their final year project, this team of four students at the School of Visual Arts, New York City (Thaddaeus Andreades, Marie Raoult, Nicholas Manfredi and Elizabeth Ku-Herrero) appealed to audiences' softer sides, creating an animation that is as heartwarming as it is visually impressive.
Taking a wide range of inspiration, from Pedro Conti and Victor Hugo to Danny Williams, the team created a world where Tangled meets How to Train Your Dragon, complete with vibrant, lovable sea creatures and a good measure of Disney-style terror. "Story-wise, we found ourselves pulling from a lot of things we grew up with as kids, such as The Lion King, Lord of the Rings and The Great Mouse Detective," says creative director Elizabeth Ku-Herrero. "This was especially true for our chase scenes and to figure out how to explore a vast environment while still keeping our audiences engaged with our characters."
What did you find was the most challenging job during production?
We hit some roadblocks. We needed to incorporate entire aspects that we had absolutely no experience with. This included creating an ocean and bubbles that didn't just look good, but could be rendered quickly so as to not bog down the school's render farm, since we were already dealing with a nearly seven-minute long film.
What 3D software did you use?
We used predominantly Maya and
ZBrush for the 3D work. Most of the
texturing and painting work was done

"We made a 'fishnado' (fish tornado) using a particle system that would generate the fish along a chosen path… We had over 500 fish!"

in Photoshop and Mudbox. Rendering was done in Arnold and compositing in Nuke. The sound was mixed in ProTools, and we put the edit together in Premiere.
What was the most impressive technical aspect of the project?
We made a "fishnado" (fish tornado) using a particle system that would generate the fish along a chosen path. At the same time the fish had animation on them that was carried on and copied from one single fish! We went through a lot of different methods to figure out what would give us the most control and be easy to do across multiple shots. We had over 500 fish!
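The team built this in Maya, but the core idea, distributing hundreds of instanced fish along a swept path with a swirling offset, can be sketched outside any DCC. Everything below (a straight vertical path, the radius and turn count) is an illustrative assumption, not the production setup:

```python
import math

def fishnado_positions(n_fish=500, height=10.0, radius=2.0, turns=6.0):
    """Place n_fish points on a helix around a straight vertical path.

    In production the path would be an artist-drawn curve and each point
    would instance an Alembic-cached swimming fish; here we only compute
    the distribution itself.
    """
    points = []
    for i in range(n_fish):
        t = i / (n_fish - 1)                  # 0..1 along the path
        angle = t * turns * 2.0 * math.pi     # swirl around the path
        r = radius * (0.3 + 0.7 * t)          # tornado shape: narrow base, wide top
        points.append((r * math.cos(angle), t * height, r * math.sin(angle)))
    return points

pts = fishnado_positions()
print(len(pts))  # 500 fish, matching the short's count
```

Swapping the straight axis for samples of a curve gives the "chosen path" the team describes, with the particle system only responsible for where each cached fish lands.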
Did anything go wrong in production, and what did you learn?
It was extremely difficult to be prepared for the size of our project. During the course of our entire School of Visual Arts career we had been accustomed to producing 30-60 second shorts. Most of the time when things get hectic and pile up it's easier to slip into quick hacks. We had growing pains in the beginning, but it soon became obvious we needed to work smart in order to complete the animation.
For example, underwater layout changes needed to be made within our master layout scene. We had plenty of shots with specific hero rocks, but by having them all located within our layout scene we could then unhide them for specific shots, which made changes consistent throughout shots.
We constantly had to remind ourselves to be extremely strict with correct naming conventions and to reference scenes properly, and so on. Of course, this meant a lot of work had to be redone, but it was all worth getting right in the beginning. Otherwise, had we gotten caught up in reassigning shaders across 70 shots multiple times, our film wouldn't have been possible.
Watch the full animation now

1 The ocean geometry was created using Maya's ocean shader; this was then Alembic cached. We had to keep the ocean opaque due to render times.

2 Land was created using matte paintings placed in 3D space in Nuke. The ocean was lightened and discoloured using masks in their UVs.

3 The bubbles were comprised of stock footage as well as our own rendered geometry to interact with the regulator and environment.

4 Arnold's skin shader was used on all our characters and eyeballs. It kept our shadows full of colour and our characters alive.

5 The fish needed to be animated separately, and we used Alembic caches so that they could populate a path using a particle system.

Underwater love

"From a technical standpoint, a big unknown was the underwater setting and how to make our characters look like they were in a believable space. We studied a ton of underwater footage. The key aspects we needed to push were depth and atmosphere, the way the light pushes through the water and is cast on rocks, and an abundance of floating particles. We ended up using a lot of spot lights with volumetrics to build up depth. We made a looping caustic texture as a gobo filter on a few lights to project caustics on the rocks and moving volumetric rays. Depth of field was done in Nuke using a depth pass rendered out of Arnold."

6 The sky was done within 3D Nuke by importing our camera and a sphere .obj; we then painted one matte painting each for noon and sunset.

7 The tear was made using a combination of rendered geo for the drip and controlling a UV render pass to grade the streak in Nuke.



UK & Europe offer

Read what matters to you, when and where you want

Whether you want 3D World delivered to your door, device, or both each month, we have three great options to choose from. Choose your subscription package today

Print & digital bundle from £33 UK / from €63 Europe
Print from £27.50 UK / from €49.50 Europe
Digital from £22.50 UK / from €30 Europe

Save up to 59% when you buy

Rest of the world: turn to page 79

Subscribe today
TERMS AND CONDITIONS Prices and savings quoted are compared to buying full-priced UK print and digital issues. You will receive 13 issues in a year. If you are dissatisfied in any way you can write to us or call us to cancel your subscription at any time and we will refund you for all un-mailed issues. Prices correct at point of print and subject to change. For full terms and conditions please visit: Offer ends 12 October 2015.






Anita Brown
Anita provides a 3D visualisation service to the interior design and event design industries. She's also a Maxwell Render certified expert trainer.

Pietro Chiovaro
Pietro is an experienced
Italian 3D artist who
is currently working in
3D image creation and re-creation,
modelling realistic environments.

Your software queries solved by our CG experts

Kieran McKay
Kieran works in the games industry as a professional 3D character artist. He's currently based in the UK working with Sony/Guerrilla Games.

Rob Redman
Rob runs a 3D animation and VFX studio, working for clients ranging from governments to rock stars. He's also an industry commentator and trainer.

Francis-Xavier is a
freelance character artist
and CG generalist based in Brighton.
He has worked in video games, media
and TV for over a decade.

Get in touch
Email your questions to

Learn a time-saving approach to create a detailed character



Step-by-step: character creation

One Create the face and body
The body and face are created from separate ZSpheres with symmetry. Once you are happy with them both, DynaMesh them together to make one seamless body sculpt. The tools I use for the body are mostly Move, ClayTubes, Clay, Dam Standard and Pinch. These are used to flesh out the shapes, add creases in tight areas and correct the anatomy.

Two Make the hair
To make the hair, start with individual ZSpheres, and use the Snakehook and Move tools to position them. When satisfied with the look, use the Dam Standard and TrimDynamic brushes to sculpt each piece individually. I also find it handy to use the Polish feature under the Deformation tab and also ClayPolish to give a nice balance between smooth and crisp edges.

Three Adding weaponry
Next, make the chainsaws. Model them as straight as possible, making use of symmetry and object-cloning for speed. Then group the mechanically functioning parts together so you can pose them easily. I end up with four groups that I can individually move around. Spend time cleaning these and subdividing the objects in 3ds Max and detailing in ZBrush.

Four Piecing it all together

ZBrush | KeyShot
How can I create a detailed character in ZBrush?
Daniel Barrett, UK
Kieran replies

ZBrush is the perfect tool for making character sculpts and is widely used in the film and game industry. With an adapted workflow you can use some speedy techniques to quickly make a polished piece of artwork. In this training, I will give you some structured advice and show you methods I use to create my latest character.
Before you start working on a character similar to this, the first thing you should do is plan how you will tackle the creation process. I want to break this character up into two parts: first the character itself, and next the mechanical parts/chainsaws. I make them in separate scenes to prevent them from getting too big.
I start by sculpting the character in a regular T-pose. The anatomy is a really important part of the model overall, so I start sculpting using Symmetry until I am happy with the proportions. I use the mask and transpose method in ZBrush to get her in the perfect position. You just mask the area you need to move and then shift it into position with the Transpose tools. This means you don't need to rig the character to pose it.

I use the Boolean function in ZBrush a lot, which helps me speed things up further. The backpack is made in the same way as the chainsaws. The bull's head is made in 3ds Max as a low-poly object and brought into ZBrush for polishing and sharpening of the edges. I put the backpack on the girl and positioned the chainsaws easily at the very end.
Once I am happy with her pose I add the final parts, such as the hair and clothes. I use the Mask brush to mask off areas for the clothes and clone them into a separate tool. It's useful to use the Inflate modifier next to push it outwards ready for sculpting. This is a really quick way to start working with a shape that already fits around the character's body.
Next I make the chainsaw parts and the backpack in 3ds Max and detail them in ZBrush. The Boolean function in ZBrush enables you to punch crazy shapes and holes into your mesh really quickly, and this is how I make most of the tricky parts. I also use the Polish, ClayPolish and TrimDynamic features to smooth everything out in the end.
With this character in particular I end up with millions and millions of polys in each of the two scenes that I set up (the character sculpt and the mechanical parts/chainsaws). When it comes to merging these two together into one scene I use the Decimation Master in ZBrush. I am finally able to get everything together and match things up for a final composition. To finish off, I export the entire model to KeyShot for rendering.



Expert tip

Use good reference
When sculpting and creating characters in ZBrush, the most important thing is to constantly look at reference. Surround yourself with images related to your work and research things like anatomy, form and composition.

Expert tip

Artist Q&A

It's all in the axis

When you first try this you might find that your object is intersecting the surface it's stuck to, but that's to be expected. If you don't want to use the constraint's Distance setting you can hold [L] with the model selected and use the Move tool to change the axis point to the bottom.

Fixing details to a mesh is easy with constraints

Cinema 4D
How can I make one object stick to another?
Rowena Boater, Australia
Rob replies

Like so many tasks in Cinema 4D (and 3D in general) there are many ways to accomplish this, and each has its own pros and cons. A quick and easy method would be to make clones of the object you want placed on to a surface using a MoGraph cloner. You could use the Transform settings to get the distance right and then use a Random effector to add variation, if needed. This would work well for scenes where you don't need ultimate control, as it's fast to set up, but if you need perfect placement there are better ways.
To show the power of this setup let's take a look at a scene where the surface we are sticking things to isn't flat. This works for anything, but to see it in action we could choose rocks on a sea bed, trees on a hill or mechanical details. For many mechanical models you need details, such as nails, bolts or rivets. From a distance a bump map might suffice, but if the camera gets close you will need to use geometry, and it can be a painful, laborious task to place them all by hand, often needing to manipulate your viewport considerably to see what you're doing.
This is where one of the character tags comes in very useful. In the step-by-step example I'll show you how to add and set up the character Constraint tag to allow you to move objects across the surface of another; in this case, bolts across the body of a droid. The tool is a versatile one and can be adapted to cater for most models, but you need to be aware of one thing: once you've placed your geometry, select the duplicates and delete the tag, otherwise when you reopen the saved scene they will have lost their positions.

Step-by-step: using constraints

One Collect your geometry
First things first, you need both the model(s) you want to stick and the object you wish to stick them to. I've made a simple scene to get you started if you need it, so open Constraint_start.c4d and you will find the main body and three small detail pieces you can use to start adding to it. The axis of each is at the bottom.

Two Character menu
Let's start with Bolt A. Right-click it in the Object Manager and go to Character tags/Constraint. In its attributes you will see a number of options. When activating them a new tab will appear, with its individual settings. Choose Clamp and move to the Clamp tab, if it doesn't take you there automatically.

Three Clamp settings
Now you need to tell it what you want it to do, so drag the Body into the Target field and change the To setting to Surface. Seeing as our body object is spherical and we want our bolts to face outwards, change the As setting to Normal, followed by changing the Distance slider to 0cm. The Bolt should now be stuck to the surface of the body.

Four Copy and position
Now you have the tag set up you can copy it to the other detail objects and start positioning. You can use both X and Z axis handles to position in any view and the bolts will stick to the body. This is far easier than trying to align them by hand. If you want lots of bolts, [Ctrl]-click and drag to make duplicates. The tag will copy over too.



You can control and improve the quality of the reflections in the Node Editor

Blender
How do I create realistic reflections?
Suzy Hamilton, US
Pietro replies

When I start a new project, the first task to which I devote myself is modelling. After this, I begin to study the elements that constitute it, in order to create the materials. At this stage, I observe real objects with the same characteristics, in order to understand how to recreate the material whilst remaining faithful to the original. It's precisely at this time that we begin to work with the reflections.
Looking at the render above, you'll notice that not all the surfaces have the same reflective capacity; the rims of the car, as you can see, are more reflective than the paint or the glass. This derives from the fact that each material has a different setting. In Blender there are three shaders in particular that help us to add reflectivity to a material: Glossy, Anisotropic and Glass. These three shaders, when combined with other shaders such as Diffuse or Transparent, allow us to add a certain level of reflection to each material we are creating, whether that's for a wall, a table, a cup, an egg and so on.
Putting it into practice, if we want to create the paintwork of the car, we have to open the Node Editor panel and add two shaders: Diffuse and Glossy.

Expert tip

"These three shaders, when combined with other shaders such as Diffuse or Transparent, allow us to add a reflection to each material we are creating"

After that, we have to connect them; to do this, simply add a Mix Shader and subsequently connect it to the Material Output node.
Now we can begin to set up the nodes, changing the values of roughness and the colour of the Diffuse and Glossy shaders. It's important to note that in the Glossy shader, the lower the value of Roughness, the greater the reflective capacity of



References for everything

When you're starting to create a material, make sure you have references, such as photos taken from multiple angles or video of this material. This will help you a lot in the creation of the material and to understand its reflective properties.

the material will be (for example, the recommended value for a mirror is 0.005). Obviously this is a basic node setup; in fact we can combine many shaders and connect textures to give a more realistic effect to the material.
In the end we can highlight the reflective capacities of the materials, adding a Glare node in the Node Editor panel for compositing.
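The Diffuse + Glossy setup above is built visually in Blender's Node Editor, but the blend the Mix Shader performs is simple to state: a linear interpolation of the two shader contributions by the node's Fac input. The toy numerical sketch below is not Blender API code, and the RGB values are illustrative:

```python
def mix_shader(fac, shader_a, shader_b):
    """Blend two RGB shader contributions the way a Mix Shader node does:
    fac=0 gives shader_a only, fac=1 gives shader_b only."""
    return tuple((1.0 - fac) * a + fac * b for a, b in zip(shader_a, shader_b))

# Car-paint example: mostly Diffuse with a touch of Glossy reflection.
diffuse = (0.8, 0.1, 0.1)   # red paint contribution
glossy = (1.0, 1.0, 1.0)    # bright highlight (a low-Roughness Glossy shader)
paint = mix_shader(0.2, diffuse, glossy)
print(paint)
```

This is why lowering Fac pushes a material towards pure Diffuse, while the Roughness value on the Glossy input controls how sharp that reflected component is, not how much of it is mixed in.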

Expert tip

How can I make a stylised bust in ZBrush?
Justin Cumberlatch, UK
Francis-Xavier replies

The bust in question is of Daenerys Targaryen from the TV show Game of Thrones. I saw a sketch online by Jeff Stahl and I had to try to recreate it. Now, because I am making a stylised version of a well-known character

"I use the ClayBuildup brush to build up volume and the Smooth brush to get rid of any irregularities in the mesh"
and I am doing it from a single sketch and not a turnaround, it is important that I get as much reference as possible. After grabbing as many images of Khaleesi as I can online,

I jump into ZBrush. I start with a DynaMesh sphere and block in the forms and landmarks of the head with the Move brush, making sure that I work in as low a resolution as possible before subdividing.
The good thing about working with DynaMesh is that you don't have to worry about topology. As soon as you get a bit of stretching, [Ctrl]+drag anywhere on the document and your model will re-mesh with even topology.
Once I've got the major forms, I use the ClayBuildup brush to build up volume and the Smooth brush to get rid of any irregularities in the mesh. I use ZRemesher to automatically retopologise my model and I'm ready to get into details. Subdivide the model and use the Dam Standard brush to cut in areas like the nasolabial fold and

Avoid a flat look

Eyes are always tricky to create in any sculpt so it helps to insert a couple of spheres into the mesh early on. This way you'll be able to sculpt around them and avoid the flat look found sometimes in digital sculpts. Look at the sculpt from all angles to make sure it looks good.

eyelid areas. I use the Inflate brush on a low setting to get areas to overlap, and smooth them down if too severe. The hair is created by drawing a mask on the head and extracting, then I use the Move brush with DynaMesh turned on to get the desired shape. I use ClayTubes with Lazy Mouse on to draw the hair strands. I use Dam Standard to cut into the hair and ClayPolish to sharpen the cuts. For the braids I use an Insert brush; there are loads on ZBrushCentral, but you have to make sure your model has no subdivisions, so duplicate, delete higher and lower, and you're good to go. Once you've drawn your braids on, you can delete the duplicated mesh and use the Move brush to position the braids and sculpt the underlying hair mesh to conform properly.

It is best to use a sphere as reference to model the eyes, otherwise they can look unnatural



An interior styled to resemble a glossy cover shoot

Expert tip

Add height!
Position objects in an interior at varying heights and add quirky items (in this instance the single chair). This will help to keep the image visually stimulating; it will assist in guiding the viewer to areas of interest and will prevent the interior from looking too one-dimensional.

Any | Maxwell Render | Photoshop

How can I create an interior scene that resembles a glossy magazine cover shoot?
Michael Watcher, UK
Anita replies

Technical skill is paramount when creating a photo-real interior, but utilising your creativity, understanding what makes a visually captivating scene and having knowledge of photography techniques is equally important.
This contemporary, industrial-inspired interior is achieved by using a clear colour palette of inky blue, with orange accents. The injections of copper provide a rich contrast with the dark backdrop and visually lift the overall design. Texture is incorporated via the old wooden floor, the upcycled

coffee table and the fur throw, thereby adding visual interest.
When styling an interior it is useful to break it down into segments and style each little area in its own right. There are three distinct zones in this interior: the coffee table, the sofa and the accessories placed below the art on the wall. Whilst they all add pockets of visual appeal, they don't compete for attention or distract from the overall composition of the image.
Placement of furniture is centred around a tried and tested photography method that is adopted by many interiors publications: the camera is aligned to the back wall and is lower than eye level. All vertical and horizontal lines in the scene have also been aligned to the camera. By setting up the scene in this way, the overall image is much more harmonious to the eye, the back wall acts as a canvas and allows the objects in the scene to take centre stage. What's more, it also

"By setting the scene like this, the back wall acts as a canvas and allows the objects in the scene to take centre stage"
provides more options for cropping the image at a later date, if desired.
Careful consideration has been given to lighting. Generally, natural lighting is the preference for professional interior photoshoots; however, I want to soften the overall look of this interior and showcase the vintage Edison bulbs. Therefore, all the emitter values in Maxwell Render are set quite low so as not to visually compete with the strong environment lighting. There are also hidden emitters in the ceiling area, near the vintage bulbs and dotted throughout the space to help balance the lighting in the scene.
Some finishing post-processing is undertaken in Photoshop to enhance the sunlight coming from the window for added drama.

Maxwell Render does a good job at keeping the lighting in the scene looking natural



How Vancouver-based Artifex Studios creates movie VFX on a TV schedule



More than Marines

Artifex's work on Continuum includes creating a 2077-set Vancouver


Artifex Studios
Vancouver, British Columbia, Canada

Adam Stern
Adam has worked in film and visual effects since the late 1980s, with credits that include Pan's Labyrinth, The Mist, The Possession and The Maze Runner.

Turning present-day Vancouver into a thriving, believable future of gleaming skyscrapers and rain-soaked Blade Runner vistas is all in a day's work for Artifex Studios. The Canadian VFX facility has grown with the success of Continuum, the SyFy series that regularly plays with time travel, ensuring every episode is awash with glorious visions of the future.
Founded in 1997, and now with 25 full-time staff, Artifex is a key player in developing quality VFX shots for some well-known TV series; its credits include Falling Skies and Almost Human.

Future vistas
This incredible scene was created to a tight budget at Artifex

Future perfect

But it's Continuum that holds a special place for the team at Artifex, which was brought in at the very start of the show's conception and given free rein to put its own stylistic spin on many effects. Artifex worked closely with showrunner Simon Barry to develop the futuristic setting, from envisioning urban streets to creating sleek sci-fi vehicles that flit between the skyscrapers of an imagined Vancouver skyline. "It was an absolute blast," says VFX supervisor Adam Stern with a smile as he remembers the early days.
In fact, as testified by many of the VFX vendors interviewed in this issue, working in TV ensures studios a certain ownership of the creative output of a series that working in film doesn't provide.
Continuum stars Rachel Nichols as Kiera Cameron, a City Protective Services agent from 2077 who is accidentally sent back in time while pursuing freedom fighters. Stuck in the comparatively quaint 2012, she helps the Vancouver Police Department track down escaped foes using her advanced technology and knowledge, all while attempting to get back home to her family.

Time travails

As the series has grown in scale, so too has the show's complexity, introducing new elements and enemies that Artifex has been tasked with bringing to life for the small screen. At the end of season three, the show featured futuristic Time Marines, rendered entirely in CG by Artifex. (You can see how the studio created them over the page.)
Other than creating the new Time Marines, recurring VFX work throughout the series has included creating impressive

Continuum VFX

"The biggest challenge with Continuum is the schedule… Keeping things organised is always a benefit"

Process: Modelling the Time Marine

Adam Stern shares the process for creating Continuum's time-travelling troops

Artist profile
Adam Stern

Making the future
Continuum is part of a growing trend for film-quality VFX on TV

Adam is Continuum's VFX supervisor and the owner of Artifex Studios. Adam started as a self-taught artist, and is now branching out from VFX into writing and directing, with his short film The Adept.



ith its fourth and final season now in post production,

Continuum has been a tremendous project for everyone
here at Artifex. Continuum tells the story of Kiera
Cameron a cop from the year 2077, who along with
a group of freedom fighters is transported back in time to 2012. A
stranger in a strange land, Kiera must make allies, track down her
enemies, and fight to get back to her family and the future she left.
Artifex handles all the visual effects for the series, which
encompass everything from future cityscapes, tech and vehicles, to
Kieras supersuit, HUD, and various gadgets and weaponry. We have
been involved from the beginning, helping design and create the
world of the show. An episode of Continuum can include over 100
VFX shots, making it a challenging, but very rewarding project.
The Time Marines were introduced at the end of season three.
These fully-armored warriors are only seen briefly in the last episode,
but play a big role in the upcoming season. When seen in full armor
the Time Marines are full CG characters. Artifex has taken these
characters from concept through completion, and we're looking
forward to their season four debut.
Our VFX pipeline is fairly straightforward: Maya and V-Ray,
ZBrush, Mudbox, Mari, Photoshop, Nuke, ftrack for shot tracking and
management, plus a number of tools and plugins. Here's a glimpse
into our workflow and construction of these formidable soldiers.

One: Concept

Two: Base model

Three: Set up UVs

Everything starts with a concept. After reading

the scripts and discussing the characters
with production, we get to work. The first
step is to produce a number of quick concepts
in Photoshop, which help us zero in on the
desired look. The Time Marines come from the
future, but we wanted an almost tribal feel to
them as if they were from different castes
and/or locations.

Artifex uses Maya as our primary 3D tool. This

is where the base model was created. Of course
we always aim to keep geometry as clean
as possible. In the case of the Time Marines,
they are humans, but encased within a rigid,
powered suit. When rigged and weighted they'll
need to feel like people, not robots, but with
rigid/constrained movement. Some interesting
challenges, not immediately apparent.

To allow for the highest-resolution textures possible,

we lay out the UVs in multiple grids. Once we see fully
rendered versions of the character, we'll likely
need to make changes to various aspects, and
we want to be able to zero in on only the specific
areas required. This UV workflow allows for
up-resing textures as needed, while still being
able to keep our shader networks clean and
accessible for iterating.
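The multi-grid UV layout Adam describes matches the common UDIM convention, where each 0-1 UV square maps to its own texture tile so individual regions can be up-resed independently. As a rough illustration of how the tile numbering works (a generic sketch, not Artifex's actual pipeline code, and the texture filename is hypothetical):

```python
def udim_tile(u: float, v: float) -> int:
    """Return the UDIM tile number for a UV coordinate.

    Tiles run left to right in rows of ten: the 0-1 square is
    tile 1001, the square to its right is 1002, and the square
    directly above 1001 is 1011.
    """
    col = min(int(u), 9)   # column 0-9 within a row
    row = int(v)           # each row up adds 10 to the tile number
    return 1001 + col + 10 * row

# A shader can then reference e.g. "armor_plate.<UDIM>.exr" and only
# the tile covering a given region needs replacing when it is up-resed.
print(udim_tile(0.5, 0.5))   # 1001: the default 0-1 square
print(udim_tile(1.5, 0.5))   # 1002: one tile to the right
print(udim_tile(0.5, 1.5))   # 1011: one row up
```

This is why the workflow keeps shader networks clean: swapping a single tile's file on disk updates one region of the character without touching the rest of the texture set.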

Four: Detailing in ZBrush



Although most modelling work on this asset

was done in Maya, ZBrush enables us to
create a number of additional high resolution
displacement passes bumps, dents, and
scratches to name a few. ZBrush was also
used during initial look development, as a fast
previewing tool for high subdivision work.

Five: Texturing in Mari

All texture look development was done in

Mari. Mari is a fantastic tool that enables
our artists to add and mix multiple layers to
get desired results, including working with
previously mentioned scratch/dent/bump
layers. We can dial in spec, dirt maps, and so
on, all with a high degree of interactivity.

Six: Variations

Detail work continues in Mari, with

Photoshop entering the picture as well. Here
we have created several variations of the
Time Marine different wear maps, as well
as decals, battle damage and colourations.

Seven: Maya/V-Ray

Eight: Comp

V-Ray for Maya is a great tool for

photorealistic rendering. We can get
everything we need out of it, including a huge
amount of flexibility for compositors (see
Step 8). We find it especially good for hard
surface work. Artifex made the initial switch
to V-Ray while exploring the best rendering
options for CG cityscapes on the first season
of Continuum, and we've used it ever since.

The Time Marines are rendered using V-Ray 3.0.

We supply a large number of AOVs in layered
EXRs, giving our compositors a high degree of
flexibility when putting together final shots. On
TV deadlines, the more that can be done in comp
the better; sending shots back for re-rendering
isn't always an option. Our comp team can dial
in lighting, change the amount of surface detail,
and refine colours on a shot-by-shot basis.
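The flexibility Adam describes comes from the fact that many render AOVs recombine additively into the beauty pass, so a compositor can regrade one component without a re-render. A minimal NumPy sketch of that idea (illustrative only; the AOV names here are assumptions, and V-Ray's actual render elements include more components than this):

```python
import numpy as np

h, w = 4, 4  # tiny stand-in for a full-resolution EXR

# Stand-in AOVs as float image planes (in practice these would be
# layers read from a multichannel EXR).
diffuse    = np.full((h, w, 3), 0.30)
specular   = np.full((h, w, 3), 0.10)
reflection = np.full((h, w, 3), 0.05)

def rebuild_beauty(gains=None, **aovs):
    """Sum the AOVs back into a beauty image, with an optional
    per-AOV gain so lighting components can be dialled in comp."""
    gains = gains or {}
    out = np.zeros_like(next(iter(aovs.values())))
    for name, plane in aovs.items():
        out += gains.get(name, 1.0) * plane
    return out

beauty = rebuild_beauty(diffuse=diffuse, specular=specular,
                        reflection=reflection)
# Push the spec up 50% for one shot, without going back to the renderer:
hot = rebuild_beauty(gains={"specular": 1.5}, diffuse=diffuse,
                     specular=specular, reflection=reflection)
print(beauty[0, 0], hot[0, 0])
```

On a TV schedule this is the cheap path: the per-shot "dial in lighting" tweaks happen on the stored planes in comp, and only a genuinely wrong render goes back up the pipeline.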



The rise of TV VFX

Author profile
Tom May
Having worked in
magazine journalism
for 22 years, Tom is
currently the content
manager for 3D World
and our website



With movie talent increasingly flocking to TV
land, Tom May examines what effect this
seismic shift is having on the VFX industry

When was the last time you

went to the cinema? And
when was the last time
you stayed in and binge-watched a TV drama? If the latter was
more recent, you're not alone.
More and more of us are shifting our
downtime from the multiplex to the
front room, thanks to a combination
of larger, better quality TV sets, and
the accessibility of content. Where the
demand goes, the supply is following:
top Hollywood actors, directors and
producers are all flocking from the
movies to television, and the financiers
have followed in their wake, investing big
money in blockbuster shows.
Will Cohen of visual effects studio Milk,
the main vendor on Doctor Who and
Jonathan Strange & Mr Norrell, believes
there's a good reason for all this.
The movies find it very difficult
in 2015 to tell sophisticated
stories, he argues. They have
to appeal globally in order to



make their money back. Ergo, there is a

dumbing down of the sophistication level,
or the characterisation.
So I think what's attracted actors,
directors and producers to television is
the ability to tell the kind of stories movie
studios like United Artists were making in
the late 1970s. If you want to make stuff
like that now, the format is television.
Nigel Hunt of Glowfrog, a London-based studio creating high-end effects
for clients including BBC, Channel 4
and HBO, agrees. The driving
factor is probably the
attractiveness of TV for high-end
feature directors wanting a
little more control and freedom, he
suggests. The U.S. cable networks such
as Showtime and HBO have traditionally
been the powerhouses of high-end TV,
attracting film talent. Now, with the storm
surge of online networks, Netflix and
Amazon have emerged as major content
producers attracting even more film
talents, and larger production budgets.


Digital doubles
Fully CG characters like Grodd
here were once unheard of in
mainstream TV shows. Now they're
cropping up everywhere

The Flash

So how do 3D artists and visual effects studios

get a slice of this action, and should they
even want to? In this article, we assess how
television is transforming the 3D industry
before our very eyes

Encore VFX's Armen V. Kevorkian shares insights into

bringing DC Comics' fastest man alive to the small screen

Rising expectations

The first, and perhaps biggest change to

take place in recent years when it comes to
visual effects lies in audience expectations.
Essentially, in 2015, viewers want their TV
shows to look as good as their movies.
As VFX in film advances, it stands to
reason that it will follow in TV, says
Tanvir Hanif, visual effects supervisor
and 3D animator at 3sixtymedia, a
production company serving ITV, BBC and
independent clients throughout the UK. The
audience is so much more savvy and critical
now, so in TV we have to work harder to match
their expectations. As shows like Game of
Thrones, Vikings and Battlestar Galactica have
met that challenge, viewers have come to
expect the same on other shows, further
increasing the pressure to raise standards.
There's just one drawback: film-like effects
need to be achieved without film-like budgets.
Sam Nicholson, CEO of Stargate Studios, the
VFX facility behind The Walking Dead, puts it
bluntly: The only difference
between TV and film these days is
time and money. So more needs to
be done by fewer people in less
time. Improvements in technology help, but it
still leaves a lot of pressure on artists.
Working in television is quite different from
working on feature films, continues Sam. To
survive in TV you must be very good but also
very fast. Particularly in series television, you're
generally prepping two shows, shooting one
and posting two.
The actual business of creating 3D effects
doesn't really change. As Sam puts it: The

five or more episodes at any given time,

says Armen. New villains are introduced
weekly so there's also a high volume of
complex shots being done constantly.
Grodd was a special one; he's one of the
most well-known nemeses in The Flash
universe. Time is always a challenge in
TV and, fortunately, we had a pretty
substantial heads-up that Grodd would be
making an appearance this season so we
were able to prepare accordingly.
Encore VFX use 3ds Max for 3D work
and Nuke for 2D work, as well as ZBrush.
Scans for the digital doubles were done at
Light Stage and Gentle Giant Studios. All
were essential to create the sense of super
speed seen in every episode.
Super speed is iconic to the character
and focal to the show, so we spent a lot
of time ahead of the pilot researching
what would look best. The comic books
provided a good base, and from there we
played around with movements that were
grounded in reality, says Armen, then
we had to cheat it a little bit to make
it look cool. Once we nailed the right
balance, there was definitely a sweet
eureka moment.
When conversation turns to the wider
growth in VFX in TV, Armen reflects: As
both artists and an audience, we've set
the bar high in what we've come to expect
from TV VFX. The work being created for
television (and beyond) is incredible and
quality shows are held to new standards.
See Encore VFX's scene breakdown
FYI here:

Video scope
The rise of affordable
CG technology means
the scale of TV VFX
has grown dramatically



Changing places
The man in the green leotard
is none other than Jonathan
Strange, played by Bertie Carvel

Jonathan Strange & Mr Norrell

Bringing statues to life in fantasy series Jonathan Strange & Mr Norrell
wasn't as easy as you'd think, reveals Will Cohen of Milk

artists are the same. The software

and the computers are the same. The
gap, then, is being met by innovative
approaches to organisation, time
management and productivity.
So now on our big shows like Heroes
Reborn and The Ten Commandments,
there are multiple simultaneous first units,
multiple directors and split location, explains
Sam. We have specifically engineered
Stargate Studios to excel in this challenging
production and post production environment

To survive in TV you must be very good

but also very fast. You're prepping two
shows, shooting one and posting two
Sam Nicholson, CEO and founder, Stargate Studios
by networking and synchronising our ten
international facilities. Today, amazing visual
effects are possible in a fraction of the time
and a fraction of the cost of previous years,
which truly closes the gap.
Its a similar story across the industry.
In television, studios have tighter
schedules, due to the episodic
nature, and budgets that don't quite
stretch to reiterating a shot for the 40th or 50th
time, explains ftrack's Ben Minall. These
constraints lead to approaches that have to be
ingenious, as the production doesnt have the
grunt power or luxury of time behind it.
Project management tool ftrack aims to help
stressed-out VFX studios meet these required
levels of ingenuity, he says. ftrack does away
with the need for huge Excel spreadsheets
that need to constantly be shared and reams
of emails cluttering up your inbox. It gives
everyone on the team one centralised location



Worlds apart
The sheer scale of Marco Polo's
epic environments posed a huge
technical challenge for Pixomondo

where they can see what needs to be done

that day, how it needs to be done, and when it
needs to be done by.
Other tools are available, such as Shotgun.
But whatever productivity software studios
use, it's this kind of streamlined approach to
production thats crucial in the world of TV
VFX, as tough schedules mean theres scant
room for mistakes or multiple iterations.

Marco Polo

Bringing Marco Polo to the screen was an epic

challenge, Pixomondo's Christian Hermann explains

Variety and diversity

But if this all sounds like no fun at all, then

here's the good news: there are some definite
upsides to working on TV shows too.
First of all, theres a much greater level of
variety. For instance, working at Milk, says Will
Cohen, you'll find yourself animating a snake
in Hercules one minute, for a month, for a

Since budgets are restricted, the

atmosphere becomes more inviting
for collaboration and creative solutions
Niklas Jacobson, VFX supervisor, ILP
handful of shots. And then you'll be on Doctor
Who animating 20 shots in a handful of weeks.
I think it's fun for artists to mix it up and have
that diversity. And to be able to move on to
another project very quickly, or to spend time
on really applying the final detail and polish.
Just as importantly, working on TV shows
can also involve much higher levels of creative
collaboration between directors and the
artists themselves.
The ambitions on a TV show may
be very high, says Niklas Jacobson
of Swedish VFX house ILP, which has
worked on hit shows Crossbones
and Constantine. But since budgets are still


Snakes alive
The CG snakes were sculpted
in ZBrush in great detail, with
modelling and rigging in Maya

Click to play video

It's been reported that Marco Polo

cost $9 million per episode. Was
there a bigger budget for this than
normal at your end?
A lot of money was spent on the practical
sets. For Pixomondo the budget was
comparable with other TV shows weve
worked on.
What was the biggest technical
challenge you faced?
Scale and population of the environments
proved to be the biggest challenge on
this show. As with any production of this
scale, the logistics of handling hundreds
of assets and terabytes of texture data
required the development of custom Maya
tools to provide artist-friendly layouts
that would produce predictable and
compositor-friendly render passes.
Models and textures were optimised
to reduce render times and special tools
were utilised to populate the city layouts
with vegetation and low-resolution
objects to add visual clutter. We used a
crowd simulation plugin for Maya called
Miarmy ( which
was employed in the creation of CG
extras and armies within the cities, and
which enabled quick swapping of walk/
run cycles and accessories to breathe life
into wide establishing shots with slow
camera moves.
Using Nuke's 3D capabilities the matte
painting department was able to set up
a 360-degree landscape template, which
enabled quick swaps of the sky-dome
elements depending on the shot's needs,
as well as ensuring consistency in the
layout of the background landscape
features across sequences.
For more on Pixomondo, visit its
FYI site at


restricted and everyone is aware that it's not a

feature film budget, the atmosphere becomes
more inviting for collaboration and open for
creative solutions. That makes you feel more
like a part of the production where you all work
towards the same goals.
Milk's Nicolas Hernandez, CG supervisor
on the BBC historical fantasy series Jonathan
Strange & Mr Norrell, has had similar
experiences. On Jonathan Strange,
and on TV generally in fact, you
have direct access to the director,
he says. And usually you need to
have a proper partnership with the production
company to make it work, because of the
challenge of time and money. His boss, Will
Cohen, concurs. On Jonathan Strange we
had great collaboration: a creative
partnership with the producer, the director
and two editors, and that makes it all very
economic. The lines of communication are
very small.
If working on TV productions is good for
artists, it's equally good for studios too. For
some it's a good way to fill their downtime
between movie projects, while others have
chosen to specialise in TV completely. All

Past glories
Pixomondo conducted detailed
historical research to ensure Marco
Polos environments were accurate


Hit and myth

Although a fantasy, Game of
Thrones' look and feel
is grounded in reality

those we spoke to are expecting the demand

for TV VFX to grow in the future.
But while it's the huge explosions and epic
set-pieces of shows that get the attention,
that's only the icing on the cake as far as VFX
work goes. Most of the work available is in
providing invisible effects things that the
viewer would never guess were done digitally
and not actually real.
Invisible effects are essential to our
business model at Stargate, says Sam
Nicholson. They're generally overlooked
by the viewing audience, but producers and

Game of Thrones

Lead vendors discuss the challenges of bringing

George RR Martin's fantasy classic to life

Our work is to support the narrative in

an invisible way. If you haven't spotted
what we're doing, we've done our job!
Tanvir Hanif, VFX supervisor, 3sixtymedia
directors absolutely realise the essential
nature of these non-spectacular effects to
their shows.
A good example of this is Stargate Studios
work on Grey's Anatomy. For the past nine
years using our Virtual Backlot process, Grey's
Anatomy has not had to travel to Seattle, the
supposed location of the show, Nicholson
reveals. In fact, they've been able to stay in
Los Angeles and we've brought Seattle to Los
Angeles for them. This greatly enhances the
creative possibilities for the writers of the show
while containing the cost for the producers.

Can't see the join?

It's a similar scenario for Tanvir Hanif at

3sixtymedia, where invisible effects form a
large chunk of day-to-day work. Our work
is often to support the narrative in a more
invisible way, he says. If you haven't spotted


Growth spurt
The dragons have got
bigger and more detailed
with each new series

what we're doing then we've done our job!

Tanvir's role is to go on set to advise and
supervise on how to shoot a particular scene
where an additional enhancement in post
production might be needed. It could be
to repair or paint out something that's not
needed, or a larger scale alteration to help
realise a key story moment in a programme.
These bigger scenarios are often undertaken
with a lot of pre-planning, with resources put in
place to help realise the effect, but sometimes
you have very little to play with and have to
create most of the final image yourself.
One of their biggest recent productions,
for example, was ITV Studios' Cilla, following
singer Cilla Black's rise to fame in the 1960s.
One noteworthy sequence was the first
scene of the first episode, says Tanvir. This
introduces Cilla, who is queuing outside the
Cavern Club with supposedly hundreds of
people behind her. We used crowd replication
effects to seamlessly give the impression of
hundreds of extras rather than just the 70
we had on the day. We devised various
in-camera effects that we would later use in

Animal magic
The interactions between
Daenerys and her dragon
have been particularly tricky




Takeover bid
This breathtaking pull-up
shot reveals the scale
of the zombie infestation

post production to create the final effect.

This emphasis on not making the CG elements
obvious is even the case on a show like Game
of Thrones, says Jörn Großhans, VFX
supervisor at Mackevision. Our
main goal is always to create
invisible effects. The imagination of
the audience should be triggered,
but everything should be reliable in the
respective story world. Visual effects are great
as long as they support the story.

The Walking Dead

More and more digital effects are being used on hit zombie show
The Walking Dead, reveals Sam Nicholson of Stargate Studios
Which shot(s) stand out in your mind
as the best example of your work?
Over the years, the shots which still
stick out in my mind are the ones we
created on the pilot with Gale Anne Hurd
and Frank Darabont. Specifically, torso
girl and the final pull-up shot which
reveals the amazing scope of the zombie
infestation. Of course the iconic shot of
Rick riding down the deserted highway
which became the poster for Walking
Dead is a classic. These shots were
beautifully designed and executed, which
produced the best possible result: they
became icons for the show.

So where are we now?

For some industry veterans, working in

television VFX feels like coming full circle. Go
back to 1999, 2000, says Will Cohen. I think
there's 40 animated shots in Jurassic Park, the
first movie. 90 digital shots in Gladiator, and
that was a big number.
Now movies are coming out with 1,500,
2,000, 3,000 shots. Maleficent had 3,000 shots;
every frame is a digital effects shot. So these
big VFX companies have evolved to have a
big machine and a big hierarchy. Decisions
are made, and they're passed down through
that hierarchy. They've created these large
machines capable of the full pipeline solution.
And that works for them, but TV is very, very

What kind of pipeline do you use?

Much of our software is off-the-shelf
but with lots of custom modifications,
transcoding and automation. After
Effects, Nuke, Maya and Premiere.
Weve developed a proprietary data
management system we call our Virtual
Operating System (VOS). VOS enables
us to seamlessly distribute, process,
render and deliver shots throughout our
international network of VFX studios.
Whats the best thing about working
on The Walking Dead?
The overall body of work is amazing.
Maintaining the quality of the original,
inspired production year after year is a
real challenge, so were always looking for
new ways to keep it fresh.
For more on Stargate Studios visit

Long walk ahead

This iconic shot became synonymous
with AMC show The Walking Dead



Sky TV
Even TV comedy now has movie-style CG, like this space station
scene for Last Man on Earth

Last Man on Earth

different. In fact, it's a bit like it was ten to

fifteen years ago when you were making 90
digital shots in Gladiator.
Having said that, even the lines between
television VFX and film VFX are now blurring.
Will gives an example: During the making
of the Battle of Waterloo shots for Jonathan
Strange, we asked the producer for permission to
show Lionsgate, who were talking to us about

Oliver Taylor explains how Ingenuity Studios

recreated the ISS for a space comedy

Now movies are coming out with 1,500,

2,000, 3,000 shots. Maleficent had 3,000;
every frame is a digital effects shot
Will Cohen, CEO & executive producer, Milk
some crowd work in Insurgent. And Lionsgate
didn't realise it was TV, not a film; they asked
What movie was that from?. You won't be
surprised to hear they then hired us to do the
work on Insurgent.
To the ordinary consumer, that fine line is
even more invisible. We talk about film, we
talk about TV, adds Will, but for most people
nowadays, it's just visual media. Whether it's

Real deal
Ingenuity Studios rejected
a matte painted Earth in
favour of the real thing




Wet and wild

Shoot-outs with cannons
involved a lot of complex
fluid simulation work


Swedish studio ILP explains how it created some astonishing

naval battles for hit pirate adventure Crossbones
Important Looking Pirates (ILP)
is a visual effects and digital
animation studio located in
central Stockholm. And with
a name like that it's not surprising that it
got to work on a show like Crossbones,
the hit US adventure series about the life
of pirate Edward Blackbeard Teach.
Our main area was creating
digital ships and water,
explains CEO/VFX supervisor
Niklas Jacobson. There were
two real ships on set during principal
photography, but the show required a
multitude of variations of ships as well
as scenes of fleets of ships. There were
also shots of a stormy ocean, burning
ships and other complex scenes that
required extensive visual effects in order
to be realised.
One of the biggest challenges was to
efficiently tackle the workflow of handling
complex scenes containing high numbers
of different ships, full of digital crews,
cloth-simulated sails, banners, and water

on your mobile phone, in your living room or

at the cinema, people don't really differentiate
over the quality. People don't think: I'll forgive
that, because it's only a TV show from the
BBC. It's got to stand up to be successful.
So as the world gets quicker, faster, in
terms of data, in terms of size, you're working
on the same camera, on the same resolution
(we're recording shots on Doctor Who in 6K
sometimes), the techniques you learn
on both TV and movies you try and apply
where possible.
And heres something else thats changing:
just as the last few years have seen TV become
more like film, movies are now becoming
more like television. If you look at Avengers,
it's just big TV in content terms, argues Will.
It's basically a giant TV series; you just have
to wait a year in between episodes. And for
movies, just like TV, schedules are shortening.
Since the last big financial crisis, the challenge
of the entire world, whether it's visual effects
or any business, has been to deliver more
for less. That's what everyone wants in every
business in every industry, and the creative
industries are no different.
Consequently, in the future even VFX
studios specialising in TV will need to up

Ship happens
The show required
the creation of
multiple CG vessels



simulations. The pipeline and workflow

for keeping track of all these different
assets and how to quickly assemble
shots and cost-effectively turn over lit
and rendered versions required some
creative thinking, planning and pipeline
work, recalls Niklas.
A tricky shot hes particularly proud
of is the scene where The Reaver (pirate
ship) sails alongside The Petrel
(British ship) and they have a shoot-out
with cannons. We made a full CG shot
with boats filled with a digital crew that
needed to cut seamlessly between two
live action shots. The challenge was
capturing the look of the scene but also the
acting of the crew on board without
pulling the audience out of the story.
This is not an effects shot but
rather a great example of virtual
cinematography, he stresses. It could
very well have been shot on set, but
it was during editing that production
discovered that a shot like this would
really tie the sequence nicely together.
Other challenging shots were those
with heavy simulation work like water,
fire and smoke. They are tricky in
a more technical way, with complex
simulation setups and plenty of elements
to tie together nicely. Those shots are
more resource-demanding in terms of
computing power and time. They also add
another layer of complexity in artistry to
get it looking right.
To see more of ILP's work visit its
FYI site at

A bug's life
ILP needed to create
swarms of bugs
in a short period

Click to play video


ILP explains how it created a swarm of bugs

for hit horror show Constantine

specifics as well, but I loved the creative

freedom he trusted us with. I see him as
much a director as a VFX supervisor.
He made us feel like an important part
of the production.
One particular challenge that comes
to mind was for the episode A Feast of
Fiends. Our mission was to create swarms
of bugs for approximately 40 shots in
a very short time period. We needed a
workflow that was swift and flexible
and allowed great creative control, yet
did not require our FX/simulation artists
to lay out the shots. Hence we created a
custom Maya particle rig that allowed
our Lighting TDs to easily control shape,
path, speed and noise of the particles. We
would playblast little coloured spheres
for Kevin, which he would give feedback
on. Once the animation was blocked
and approved we switched the particles
to V-Ray proxies containing high-res
animation cycles.
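ILP's rig gave lighting TDs a handful of simple controls — shape, path, speed, noise — over the swarm while it was still blocked with playblast spheres, with the high-res cycles swapped in only after approval. A toy, tool-agnostic sketch of that control scheme (plain Python rather than the actual Maya particle rig, with a circular guide path chosen purely for illustration):

```python
import math
import random

def swarm_positions(t, count=20, speed=1.0, noise=0.2, seed=7):
    """Place `count` swarm particles along a circular guide path
    at time `t`. Each particle trails the lead by a fixed phase
    offset; `speed` scales travel along the path and `noise`
    scales a deterministic per-particle jitter."""
    rng = random.Random(seed)
    jitter = [(rng.uniform(-1, 1), rng.uniform(-1, 1))
              for _ in range(count)]
    pts = []
    for i, (jx, jy) in enumerate(jitter):
        phase = speed * t - 0.1 * i       # trailing offset per bug
        x = math.cos(phase) + noise * jx  # guide path: unit circle
        y = math.sin(phase) + noise * jy
        pts.append((x, y))
    return pts

# Block the animation with these points (the "coloured spheres"
# stage), then swap each point for a high-res animated proxy
# once the motion is approved.
frame0 = swarm_positions(0.0)
print(len(frame0))  # 20 particles
```

The design point is the separation of concerns: the artist touches only the path/speed/noise parameters, while the expensive geometry is attached at the very end, which is what makes 40 shots in a short window feasible.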
To see more of ILP's work visit its
FYI site at

Infest wisely
ILP created a custom
Maya particle rig to
control shape and speed



their game. Despite 15 years in the business,

Sam Nicholson recognises that applies to
him as much as anyone else.
To stay competitive in the ever-more-crowded field of global VFX, we've continually
re-invented Stargate Studios, opened
new markets and challenged our previous
assumptions of what's possible, he stresses.
We bring the latest technologies, creative
innovation, cost savings and real-world
problem-solving to all our projects, large or
small. Visual effects applied in this way should
save a production money, not cost more.
Some people working in the industry won't
want to hear any of this, of course. There's
still an enormous amount of snobbery from
people who only work in film to the idea of
TV, says Milk's Will Cohen. I think it's a
generational thing. But look around you; look
at shows like Game of Thrones, what they've
done in terms of production values. It's just
about being economic with your storytelling.
Realising how to make your budget and what's
being asked of you work is the future of our
industry. And you shouldn't resist it, you
should be excited about it!
For more VFX case studies and interviews
FYI visit


Vote now!

We invite you to join us for the CG Awards 2015, a celebration of

the most amazing art and technology that has been created over the
past year in the world of animation, computer hardware and VFX

The categories
The 3D World CG Awards 2015 will feature the following:

Creative awards

Technology awards

Best VFX feature film scene

New application of the year

sponsored by

Software update of the year

VFX short film of the year

sponsored by

CG animated feature of the year

Plug-in of the year

Software innovation of the year
Hardware innovation of the year
3D print innovation of the year (new)

CG animated short of the year

Best CG commercial campaign

Community award

Arch-viz animation of the year

Live Event of the year

Arch-viz still of the year

3D World Hall of Fame

CG video game of the year: in-game

Vote now at

CG video game of the year: promotion

sponsored by


Practical tips and tutorials from pro
artists to improve your CG skills

54 Create a Star Wars matte painting

Double Negative's Saby Menyhei walks you through this Star Wars VII-inspired image

60 Master the art of fire

Expert advice for 3ds Max and FumeFX

For more on your

free downloads
& video training
turn to page 6

Get your
You're three steps away from this
issue's video training and files

64 How to create an epic VFX shot

Recreate a frame from Game of Thrones' Battle of Blackwater with our six-page guide

1. Go to the website
Type this into your browser's address bar:
2. Find the files you want
Search the list of free resources to find
the video and files you want.
3. Download what you need
Click the Download buttons and your
files will save to your PC or Mac.

70 Get started in Houdini

74 Detail a character

Bemused by this powerful procedural tool? Read Digital-Tutors' John Moncrief's starter tips
Prepare this knight for cinematic production


Star Wars matte painting

Watch the
final scene
If you see the Play icon, use the link

ArTisT profile
Saby Menyhei
Saby graduated
from the Hungarian
University of Fine
Arts. He taught fine
arts/art history and
worked as a freelance
concept artist. He
now works in the
film industry as an
artist at Double
Negative. His credits
include Ant-Man,
Exodus and Godzilla.

Topics covered
Texture detail
Photo manipulation
Camera projection



This iconic image of
the Star Destroyer is a
combination of 3D work
and 2D matte painting

Maya | mental ray | Photoshop | Nuke

Create a Star Wars

matte painting
Saby Menyhei shares his process for creating a powerful 3D
matte painting, inspired by Star Wars VII: The Force Awakens

In this tutorial we will try and

recreate that iconic image of the
derelict Star Destroyer from the
Star Wars VII: The Force Awakens
trailer, seen from another point of
view. It's a simple, yet powerful,
establishing shot and in this
tutorial I will show you how digital
matte painting can be used to
improve your 3D renders. The final
image will be a combination of 3D
work and 2D matte painting.
Matte painting is one of the
oldest tricks in the book, and is
still used to this day. While traditional
matte paintings were the norm
until the 1990s, nowadays digital
matte paintings are created
using photos, 3D renders and

hand painted elements. The

technique is still around as it's very
efficient; you can quickly create a large
environment that doesn't exist or
would be too expensive to build.
And it doesn't have to be
a still image: if planned carefully,
you can take the painted image
back to the 3D space.
Sometimes, depending on
the actual shot, matte painting
can replace texturing/look
development/lighting and it can
also improve your render in a very
significant way.
We will start this tutorial by setting up the scene in 3D and rendering out a few very simple passes in Maya. The rest of the work will be digital painting and photo manipulation in Photoshop. You will need to use different skills, but most importantly, you will have to use your eyes; sometimes that is more important than any shortcuts or tricks.
Unless you want to model your own ship, I recommend using the insanely detailed Imperial Star Destroyer 3D model by Ansel Hsiao, who featured in issue 194 of 3D World. I've included my setup files in this issue's online Vault, including my .fbx and .obj files for Nuke and reference photos.
For all the assets you need go to

Click to play video


"Once you are happy with the result, create a new render camera and try to find another interesting view"

1 Setting the scene
The scene is very simple; we need a ground plane and the model of the Destroyer. For the layout, even a low-resolution mesh will do. To match the position of the ship from the trailer, load a snapshot into the viewport (View>Image Plane>Import Image). Once you are happy with the result, create a new render camera and try to find another interesting view. The other side of the Destroyer is just as iconic as what we've seen in the trailer.

2 Embellishing the scene
You can add another wreck: for example, a TIE Fighter in the foreground corresponds to the X-wing from the original shot. You can create some simple sand dunes using the Sculpt Geometry tool (don't forget to uncheck the Show Wireframe option under the Display options). Keep in mind that the landscape will eventually be matte painted, so it doesn't need to be super detailed at this stage.





Expert tip: Reference library
Taking your own reference photos is great, so do so whenever you can; building your own texture and matte painting library is a good idea!

Look through your render camera. It might take a few tries to find the preferred angle, but once you have found the best composition you can lock the position of the camera to ensure it doesn't move. At this stage you can still tweak the position of your assets, and when you think it's done, it's good to lock the layout as well. While the landscape is more of a placeholder, make sure that it gives you enough information about bigger shapes, distance and so forth.

Because of the nature of this scene, instead of using an IBL, you can get away with using Mental Ray's Physical Sun and Sky (you can find these under the Indirect Lighting tab). Just click the Create button, and rotate the arrows that represent light direction. Feel free to scale them up; the size of the arrows doesn't change the intensity. Also, by changing the Max Sample levels (under Quality) you will get a nicer result.

Render resolution
The resolution of your matte painting should be at least twice as big as the final composite. Render your passes at 4K or higher
Most of the textural detail will be matte painted, but we might need UVs for one of the passes before we start rendering. We will generate some random noise that will be used as dirt, but we won't need perfect UVs for this. So if you don't have UVs yet, the simplest thing is to use the Create UVs>Automatic UVs function. It's going to be far from perfect, but might be just enough for what we need. You can always use a UV checker map to test the result.


6 Create render passes (1)
We need at least four very simple passes for the matte painting: Beauty, ID, Z-Depth and Dirt. For the Beauty render, basic shaders will do (default Maya/Mental Ray ones). The Destroyer is quite rusty and dirty, so don't make it too reflective. An ID pass creates quick selections. Here, create a Surface Shader with basic colours (red, green or blue), so you can turn them into masks in Photoshop quickly.
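The one-click selection an ID pass gives you comes from simple per-pixel thresholding. Below is a hedged pure-Python sketch of that logic (the four-pixel "image" is hypothetical; a real pass is a full-resolution render), not a Photoshop script:

```python
# Sketch of how an ID pass becomes a mask. Each object was rendered with a
# flat red, green or blue Surface Shader, so isolating one channel and
# thresholding it yields a ready-made selection for that object.

def id_to_mask(id_pixels, channel):
    """Return a binary mask (1 = selected) for one channel of an ID pass.

    id_pixels: list of (r, g, b) tuples in 0-255.
    channel: 0 = red, 1 = green, 2 = blue.
    """
    return [1 if px[channel] > 127 else 0 for px in id_pixels]

# Hypothetical four-pixel ID render: ship (red), sand (green), sky (blue), ship.
pixels = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 0, 0)]
print(id_to_mask(pixels, 0))  # ship mask: [1, 0, 0, 1]
```

In Photoshop, the same thing happens when you hit Load Channel As Selection on the Red, Green or Blue channel of the ID render.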

7 Create render passes (2)

A Z-Depth pass is also handy to set the ship's scale. You'll need a new Surface Shader; set the Out Colour to White and go to Create>Volume Primitives>Cube (your scene should be inside). Select the Cube's Fog Shader and set the Volume Fog Attributes to Black. Render using Maya. Finally, a simple Dirt pass will be mixed with other textures (use 2D Textures>Noise as Colour). Play with the contrast and frequency of the patterns. Render out other useful passes, like occlusion, for painting later.
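The white-shader-plus-black-fog setup works because fog density grows linearly with distance, so each pixel's grey value encodes how far that point sits inside the volume cube. Here is a hedged sketch of that mapping (the near/far distances are illustrative placeholders, not values from the scene):

```python
def depth_to_grey(distance, near=0.0, far=100.0):
    """Grey value a linear-fog Z-Depth pass assigns to a point.

    White (1.0) at the near plane, fading to black (0.0) where the fog
    volume ends. near/far are placeholder distances for illustration.
    """
    t = (distance - near) / (far - near)
    t = min(max(t, 0.0), 1.0)  # clamp to the volume
    return 1.0 - t

print(depth_to_grey(0.0))    # 1.0  (at the camera: white)
print(depth_to_grey(100.0))  # 0.0  (at the far edge of the fog cube: black)
print(depth_to_grey(25.0))   # 0.75 (a quarter of the way in)
```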

8 Gather reference images

In a studio there would be hundreds of high-res images to work from; here we're making our own. You can find photos on the internet, but get the best possible (high-res and no compression artefacts). For this matte painting we will need photos of sand dunes and rocky grounds. For the Destroyer, you will have to be creative; photos of collapsed buildings may offer ideas. At this stage it's OK to prepare some of your elements: grading, keying and so forth.


Turn ID passes into masks

We will start matte painting by working on the sky and far background. Create a new Photoshop document and set the resolution (4K or higher); I would suggest painting in 16-bit (even though most reference photos are 8-bit). In this example I boosted the contrast so you can see more clearly what is there, but in the original shot the sky is quite milky and blown out. We don't want to draw too much attention to the sky, but keep the focus on the sand dunes and the ship.


A few subtle clouds over the gradient might give us enough detail, and while we don't know too much about planet Jakku, maybe we can add a few planets in the background, much like the binary suns of Tatooine. The sand dunes are partially coming from the reference photos, but some parts are hand painted. We don't have to worry about the middle part because it will be covered by the ship.

One: Select channels
Using masks will help you to work faster. If you want to separate some parts of the ship, open your ID render in Photoshop. Go to Channels and select one of the channels (Red, Green or Blue).

Two: Apply the selection
Now, click the Load Channel As Selection button. You can go back to your layers now and apply the selection to either a layer or a layer group by hitting the Add Layer Mask button.

11 Colour correct layers
There are many ways to colour correct your layers. The easiest is to use the Brightness/Contrast, Color Balance and Hue/Saturation tools. However, I would suggest using the Curves tool. Curves is very powerful; you have full control over the tones and you can colour correct at the same time by selecting the Red, Green or Blue channels. Each layer can have its own modification layer without using traditional masks.

12 Merging layers
Select your modification layer (Curves, for example) and hit [Alt]+[Ctrl]+[G]. It will be attached to the layer beneath it (clipping mask). Don't merge your layers until you're 100 per cent sure it's done (non-destructive editing). If you want to paint the highlights or the shadows separately, create a new layer on top of your main layer, create a clipping mask and go to Blending Options>Underlying Layer. Adjust the slider to apply this layer to either lighter or darker values.



Three: Copy selections
You can also copy selections from one document to another in Photoshop by using Select>Load Selection. Choose the right source document and the channel you want to load.

Expert tip: Check levels
Check your matte painting under different exposure settings. Increasing and decreasing gamma might give you more information. Alternatively, you can save out an .exr and open it in Nuke. You may get more information on your values, especially if working in 16-bit.

13 Create the base texture




Now we can start breathing life into the greyscale render. The Destroyer has been there for 30 years, after the events of Return of the Jedi, when it crashed into the desert after a big battle. You have to tell the story by painting the textures. Blend the rendered Dirt pass with other metal/rust textures. The generic dirt texture can be revealed through a mask.

We will need to go heavier on the damage. We already started breaking up the edges, so let's break them up even more! Applying a base texture was only the start. It's a huge ship and it was severely damaged during the battle. Some parts of the Destroyer exploded: you can paint craters from scratch, but you can also use photos of destroyed buildings. You have to convey the idea that we can see inside the Destroyer.

Create atmosphere
It might appear that you are losing detail when you're reducing the contrast; however, it will improve your matte painting

You can now start using your reference photo library. Apply textures as if you were texturing a 3D asset; try using different blending modes such as Overlay, Multiply and so forth, or extract detail from other images (use Select>Color Range and then copy and paste). Be careful with the scale of the texture, as it's easy to make your model look tiny if the details are too big. You can also create a group mask to break up the edges.

Be creative and find the right places where you think it needs to be damaged. Don't be afraid to change the model! When you combine photos and digital painting, pay attention to black values and the level of detail. The new elements and the rest of the ship have to blend seamlessly. Working up the underside of the ship might be easier because it's more diffuse, while on the top you have to be careful with light direction.


The Destroyer almost fills the frame, but it's actually pretty far from the camera. To sell the idea that it's a distant object we have to add some aerial perspective. That means you have to lift the blacks (Curves is perfect for this) and make sure colours shift towards blue. You can use your Z-Depth pass here. You can also turn this into a group mask; all the colour correction layers should be within this group. Tweak your values until you match the original shot.
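The aerial-perspective grade can be understood as a depth-weighted blend toward a pale haze colour, which lifts the blacks and shifts hues toward blue in one move. Here is a small sketch of the idea (the haze colour and strength are assumptions, not values from the tutorial):

```python
def aerial_perspective(rgb, depth, haze=(0.62, 0.71, 0.82), strength=0.5):
    """Blend a pixel toward a pale blue haze by normalised depth.

    rgb and haze are 0-1 channel values; depth is 0 at the camera and 1
    at the horizon (your Z-Depth pass, normalised). Blending toward a
    bright, bluish colour both lifts the blacks and cools the hue.
    """
    k = depth * strength
    return tuple(c * (1.0 - k) + h * k for c, h in zip(rgb, haze))

# Pure black at the horizon is lifted and pushed toward blue:
print(aerial_perspective((0.0, 0.0, 0.0), 1.0))  # ~(0.31, 0.355, 0.41)
```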



19 Add the TIE Fighter



I would suggest starting with a quick collage using photo references. Since you don't have to paint from scratch, you can try out different options. Once you have found the right elements, you have to meld them. After some colour correction and digital painting it will be unnoticeable that you have used bits from different sources. Don't forget that the layout is more of a guide, so you don't have to stick to it 100 per cent. Pay attention to light direction.

There are a few little tricks that will help you to achieve a more photorealistic look. CG tends to be a little too sharp and clean, so once you're done texturing, a little light wrap will soften the sharp edges. Unfortunately, there is no automatic light wrap function in Photoshop, so just create a copy of the background layers, then merge and blur them. Manually apply it slightly around the edges. You can also blur your layers from the midground a tiny bit.


To see your matte painting rendered through a moving camera, use Nuke. Export your geometry as .obj files and the camera you rendered your passes from as .fbx. You'll need to save out the main layers of your matte painting (sky, sand dunes, Destroyer, foreground, TIE) as .exr or .tif files with an alpha channel. To create an alpha in Photoshop, [Ctrl]+click the icon of the layer to create a selection, then hit Save Selection As Channel under the Channels tab.
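Saving layers with an alpha matters because Nuke's Premult node multiplies each colour channel by that alpha before projecting, so transparent areas go to black. The operation itself is one line per pixel; a minimal sketch:

```python
def premult(rgba):
    """Premultiply a pixel: colour channels scaled by alpha, alpha kept."""
    r, g, b, a = rgba
    return (r * a, g * a, b * a, a)

# A half-transparent pixel: colour is halved, alpha is preserved.
print(premult((0.8, 0.6, 0.4, 0.5)))  # (0.4, 0.3, 0.2, 0.5)
```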

The TIE Fighter in the foreground has to be matte painted in a similar way to the crashed Destroyer. The 3D model is quite basic and has no textures. Use your TIE Fighter reference photos to add detail. You also have to damage the ship and make it rusty/dirty. The foreground around the ship needs to be worked up too (to show the impact, make a crater and add debris around the ship). Blend it with the rest of the landscape as much as possible.

Go to Filter>Distort>Lens Correction. You can add lens distortion by setting the Remove Distortion slider to negative values. Play with the Chromatic Aberration values too; this simulates the failure of a lens to focus all colours to the same convergence point. You might also want to add a little film grain (Filter>Noise>Add Noise). If you plan a 3D projection, skip this step; you will have to recreate these effects in Nuke.
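Lateral chromatic aberration can be sketched as sampling each channel at a slightly different distance from the optical centre, so the channels converge in the middle of frame and drift apart toward the edges. A hedged pure-Python illustration (the shift amount is an arbitrary assumption, and real lens filters use more complex radial models):

```python
def chromatic_sample_coords(x, y, cx, cy, shift=0.002):
    """Where to sample the red, green and blue channels for a pixel.

    (cx, cy) is the optical centre. Red is sampled slightly further out
    and blue slightly closer in, so the fringing grows with distance
    from the centre, mimicking a lens failing to converge all colours.
    """
    dx, dy = x - cx, y - cy
    red = (cx + dx * (1.0 + shift), cy + dy * (1.0 + shift))
    green = (float(x), float(y))
    blue = (cx + dx * (1.0 - shift), cy + dy * (1.0 - shift))
    return red, green, blue

# At the optical centre all three channels coincide (no fringing):
print(chromatic_sample_coords(1000.0, 500.0, 1000.0, 500.0))
```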


Load the images using a Read node in Nuke, premult them and then add a Project3D node. Now plug in the .fbx camera (your old render camera will become the projection camera in Nuke). The Project3D node goes into your .obj. Create a Scene node and then a ScanlineRender (your new moving camera will go into this). You can render your shot using a Write node.



Adding scale
To add life to the image and set the scale, place a character, for example a stormtrooper, close to the wreck

Quick projection: Precomp your layers
Alternatively, you can set up a separate projection for each layer and precomp them before you create the final shot. The advantage is that the final render will be quicker and you can tweak your layers further in Nuke. Please note that I didn't plan to add a moving camera when I started to work on the matte painting, therefore the camera movement was quite limited. For more freedom you might need to extend your digital matte painting in Photoshop. Taking projections from Nuke to Mari might be another option.

Fire and smoke effects

Watch the video
If you see the Play icon, use the link

3ds Max | FumeFX

Master the art of fire and smoke
Vikrant J Dalal reveals how to create a man of fire using 3ds Max and the FumeFX plug-in

In this tutorial we are going to create a human character who appears to be made of fire: a Fire-Man, as seen in superhero films, supernatural films and game cinematics.
There are a number of plug-ins for 3ds Max that can create awesome fire and smoke effects, such as FumeFX, AfterBurn and Phoenix FD, to name just a few. Many leading VFX and animation studios use FumeFX in their pipeline, as this software gives good output, is trustworthy and user friendly.
In this training, I'll show you how to create this Fire-Man effect using 3ds Max and the FumeFX plug-in. In the space provided

Artist profile
Vikrant J Dalal
Vikrant has eight years' experience in the VFX industry and has started his own VFX studio called Project01 Design Studio, which provides VFX, graphic design and tutorial services.


here I can't teach you each and every parameter of this software and plug-in, as it is very vast and would need a lot of time to go through; but after completing this tutorial, you will have learned all you need to know to be able to produce a realistic Fire-Man character.
There are different techniques to make this effect in FumeFX; for example, you could use an Object Source or a Particle Source: we are going to use Object Source.
Before you start working on this kind of effect, you should have a good knowledge of real-world scale. Also you must know what kind of fire and smoke effect you want to create for your character. Also, consider whether you want

only fire. You should prepare for the work by asking yourself the following questions: How big will the fire be? What colour will the fire be? Will fire emit from the whole body or from only certain parts of the body? Once you have your character description you can then start working on your shot.
Another extremely important thing to do prior to starting work on your shot is to watch some references so you can better understand the properties of fire and smoke: their look and behaviour. Don't be afraid to use your creativity and tools; you will soon find very good effects.
For all the assets you need go to

Expert tip: Change the fire and smoke colour settings
Play with the colour settings and the curve editor. It will definitely improve the look of Fire-Man and make the shot more dynamic.

Click to play video

1 Scene setup

Topics covered
FumeFX Object Source
FumeFX Container
Setting FumeFX parameters
Creating Noise Maps

To create Fire-Man, we need to choose a character: while you could use a static character, an animated character will certainly improve the shot, as the fire and smoke will be subjected to dynamic motion and we will be able to see the interaction between the fire, the smoke and the character. I would suggest that you make this character walk, at the very least. Please refer to my video, which is available to download via this issue's online Vault.


2 Create the Object Source
To create Fire-Man, we will need two things: a FumeFX Object Source and a FumeFX container. With the help of the FumeFX Object Source, we will generate and control the fire and smoke. To create the FFX Object Source, go to Create Panel>Helpers, then click on the drop-down menu and select FumeFX. Click on the Object Src button and generate it in the viewport by left-clicking.

Fire starter
Master FumeFX for 3ds Max to create a realistic man made of fire

Noise maps: Export a noise map as an image

Click to play video

3 Create a noise map
As we want to generate fire and smoke from certain areas, we need to create a noise map. Select the FumeFX Object Source and go to the Modify panel, then select the Fuel option and click on the Disable button, and then select Source from Intensity; now you can see the None button is highlighted. Next, drag the noise map onto this button. Use the same procedure to control the Temperature, but don't do this with Smoke, as we want it to emit from the whole body.
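The masking logic in this step is worth making explicit. This hedged pure-Python sketch (the texel values are hypothetical) mirrors how the noise map gates the Fuel and Temperature sources while Smoke ignores it:

```python
def emission(map_value, base=1.0, masked=True):
    """Per-texel source strength for a FumeFX-style emitter.

    map_value: greyscale noise-map value, 0.0 (black) to 1.0 (white).
    Fuel and Temperature use the map (masked=True), so only white areas
    emit fire; Smoke skips it (masked=False) and emits from everywhere.
    """
    return base * map_value if masked else base

texels = [0.0, 0.5, 1.0]  # black, mid-grey and white samples of the map
print([emission(v, masked=True) for v in texels])   # fire:  [0.0, 0.5, 1.0]
print([emission(v, masked=False) for v in texels])  # smoke: [1.0, 1.0, 1.0]
```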

4 Create the container
Go to Create Panel>Geometry tab, click the drop-down menu and select FumeFX. Generate the container in the viewport by left-clicking and dragging, then modify its parameters. In General Parameters, set the Size to 100x230x125 and the Spacing to 0.275. Set the output and playback range from 0 to 100. Go to Output Path and make a new folder on your storage drive, then save your cache file. If happy with the general parameters, go to Obj/Src and add the FFX Object Source into the Object tab.



Make sure you unwrap the character before applying the noise map; doing this means that you can export the noise map as an image, and then this map will stick to the character. Otherwise the map will stay in the same position while the character moves ahead, resulting in a weird fire and smoke simulation. To export the noise map as an image, create the noise map, then right-click on it and select Render Map. Now set the Dimensions and Output path, and hit the Render button. Doing this changes the noise map into an image that you can add to the FumeFX Object Source.

Using noise maps
As I want to generate fire from a certain part of the body, and smoke from the whole body, I have created a noise map. This means the fire will emit from the white areas only, not from the black ones


6 Rendering parameters



9 Rendering


In the Simulation section of the FumeFX UI, set Quality to 10 and Iterations to 150. In the System section, change Gravity to 0.5, Vorticity to 0.8, Velocity Damping to 0.01 and Turbulence to 0.05. In the Turbulence Noise section, set Scale to 7.5, Frames to 2.5 and Detail to 5.0. X and Y should be set to None in Blocking Sides, but set Z to -Z. Now go to Fuel Parameters: make the Ignition Temperature 50.0 and the Burn Rate 10.0, and increase Expansion to 1.2. Press Simulation.

In the Fire tab, create a gradient colour in Color. (To match my gradient colour, please refer to the colour gradient code in my 3ds Max file, downloaded via the Vault.) Then go to Smoke Color and set the Ambient to R:0 G:0 B:0. In Smoke Color create a gradient colour: make the left-side colour R:45 G:45 B:45 and the right-side colour R:122 G:122 B:122. The Opacity should be set to 0.75, and both the Cast Shadows and Receive Shadows boxes should be checked.

Use real footage: Experiment with footage
We've created our Fire-Man character on a solid background, but you can enhance the scene by shooting a live scene using a handycam, for example, and then trying to use your animated character in that footage. Better still, you could shoot footage of a real actor, then model and animate a character to apply the effects to. If you do this, you will get a better idea of real lighting and compositing and, what's more, you will have accomplished a complete VFX shot that shows off your skills in shooting, camera tracking, FX and compositing. Also remember to add extra elements to merge the fire and smoke into the real footage.

Next, we need to place some lights. We are going to use a Target Spot light. For the Target Spot, set the light position to X:-285, Y:-300 and Z:400, and the Target position to X:0, Y:-20 and Z:60. Turn on the Shadows and go to the Shadow Parameters tab to turn on the Atmosphere Shadows, with the Multiplier set to 1.0 and the Color set to White. Keep everything else at its default.

For the final rendering of the fire and smoke, use the Default Scanline Renderer. To render the fire and smoke elements separately, press [F10], go to the Render Elements section and add the FumeFX Fire and FumeFX Smoke elements into this tab. Then set the Frame Range and Output Resolution as per your requirements. Now save the Render Output Path and hit the Render button; after some time, you'll see your fire and smoke elements.


After setting up the lights, select the FumeFX container and go to the Illumination tab. Add your lights into the Lights tab by clicking on the Pick Lights button. Then make the following changes in Multiple Scattering, because we want some fire illumination on the smoke: turn on Multiple Scattering, increase the Maximum Depth to 7, keep the fire and smoke Strength at their defaults and increase the Falloff to 11.0.

We can now composite our fire and smoke elements. Open After Effects, load these layers into the timeline and assign some effects to them. Add some Color Balance, Sharpen and Glow effects to the fire until you're happy with the result. Now select the smoke layer and add some effects to it, like Brightness & Contrast and Sharpen, until you're happy that your Fire-Man effect looks just right.

Recreate a battle scene

Artist profile
Alex Farrell
Alex is a 3D artist at
The Neighbourhood,
working predominantly in
architectural visualisation.
He also produces many
of the scripts the studio
uses, and is heavily
involved in R&D.

3ds Max | Particle Flow | FumeFX | Photoshop

How to create an epic VFX shot
Alex Farrell demonstrates how to recreate one of the most memorable battles from Game of Thrones

Topics covered

There's no shortage of inspiring VFX shots in Game of Thrones, and one of the most iconic is the enormous wildfire explosion at the Battle of Blackwater. Recreating it might seem a little ambitious, but we're going to employ a workflow that breaks the original explosion down into individual elements, and tackle each one separately.
We will be using Particle Flow (PFlow) and FumeFX to create our pyrotechnic elements. The main advantage of PFlow is that we can use it to quickly create realistic, organic shapes for our explosions. We can also get real-time feedback

on the influence of various forces before performing any simulations. As well as walking you through the creation of the main body of the explosion, I will also show you how to produce the flying debris thrown into the air.
I will cover how to effectively manage the array of cache files this shot will produce, and how a limited number of simulations can provide a huge variety of assets.
FumeFX can be overwhelming for new users; changing certain parameters can often have large (and unexpected) effects on our simulations. We'll cover the key areas, namely resolution,

Watch the video
If you see the Play icon, use the link

turbulence, and the shaders used for the fire and smoke. Remember that directors and VFX studios use smoke and mirrors to progress the evolution of an explosion: the camera will change position and focus mid-effect.
During the Battle of Blackwater, there's an establishing shot to show the scale of the explosion, but the camera cuts to show the reactions of the characters and close-ups of the destruction. We'll recreate the explosion as it continues to grow and sets the fleet of ships at the harbour ablaze.
For all the assets you need go to

No shot too big
Large-scale VFX shots don't need to be intimidating, especially when approached in a modular, efficient way

Expert tip: Gradually add resolution
Avoid progressing immediately from low-res drafts to high-res final simulations; the result can differ immensely. Instead, add resolution gradually, tweaking major controls such as Turbulence along the way.

Click to play video

1 Scene deconstruction
If we analyse the famous wildfire explosion in Game of Thrones, we see it can be broken down into several key elements. There is a central plume which rises highest of all. Several smaller explosions sit below this plume, destroying the fleet. An array of blazing debris scatters from the centre of the explosion, leaving smoke trails behind. Various ships are being destroyed, both by natural fire and green wildfire.

2 Explosive reference
Both YouTube and Vimeo are excellent resources for controlled explosions, with some videos specifically labelled as a resource for FX artists. Getty Images also has a huge array of explosions filmed in studio environments on high-frame-rate cameras. Even though our explosion lives solely in the realm of fantasy, it is important to refer to how explosions evolve in real life, as it will give our final effect that element of believability.





FumeFX masking: Make effective masks

Following on from the deconstruction in step one, we should create bounding volumes using simple primitives to block out our explosion. Apply See-Through mode by pressing [Alt]+[X], so it is clear that we are using these for scale. It may be hard to visualise the actual size of the reference footage, so import or create height markers. A cylinder with a height of 1.7m and a radius of 0.2m can be used as a quick reference guide for human scale.

4 Set up units
Go to Customize>Units Setup>System Unit Setup, and set the System Unit Scale to Metres. Modelling small details can be easier in centimetres, but that's not appropriate here. Most measurements within FumeFX will remain the same across the different unit setups, but bear in mind certain values such as Turbulence Noise for your simulation and Radius values for your PFlow systems will differ. Throughout this tutorial my Display Unit Scale will be set to Metres.

One: Set alpha contribution
Right-click the object that sits behind the FumeFX grid and select VRay Properties. In the Matte Properties set the Alpha Contribution value to -1.


Two: Add a MultiMatteElement
Add a MultiMatteElement to your Render Passes and check isMATID. On the material of the object we excluded from the alpha, set the Material ID Channel to 1.

Create a circle with a radius of 100m and add both an Edit Poly and a UVW Map modifier. Add a Noise map to the Diffuse slot of a standard material, and set Source to Explicit Map Channel. Set the Size to 0.05, and the High and Low thresholds to 0.7 and 0.3 respectively. Add a Gradient Ramp to the Color #2 slot and set the Gradient Ramp to Radial. Apply this material to the circle.

Go to Graph Editors>Particle View. Right-click in the graph area and select New>Particle System>Empty Flow. Right-click the graph again, select New>Birth Event>Birth and connect these nodes together. Set the Birth Start and Stop values to 1, and the Amount to 2500. Right-click Event 001 and append a Position Object, a Speed By Surface and a Delete operator. Set the Delete type to Particle Age, Life Span to 10 and Variation to 5.



Three: Use masks in post
Once rendered, the alpha will provide a quick way to isolate our FumeFX object in post-production. And we still have a mask for the object we excluded.

In the Position Object operator add the circle we created earlier to the list of Emitter Objects, then do the same for the Speed By Surface operator. Also in the Speed By Surface operator check the Speed By Material option, set Speed to 45m and Variation to 5m. Particles will emit from the full surface of the circle, but particles spawning from the centre will move slightly quicker, making our explosion more spherical.
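The centre-boost idea can be sketched numerically. In this hedged illustration the radial Gradient Ramp is treated as a simple linear falloff from white at the centre to black at the rim; the 45m base speed matches the step, but the boost amount and ramp shape are assumptions:

```python
def particle_speed(r, radius=100.0, base=45.0, boost=5.0):
    """Initial speed for a particle born r metres from the circle's centre.

    With Speed By Material on, the radial Gradient Ramp scales the speed:
    white at the centre gives the full boost, black at the rim gives none,
    so the middle of the explosion rises faster and the overall shape
    turns more spherical. Linear falloff and boost value are assumptions.
    """
    t = max(0.0, 1.0 - r / radius)  # 1.0 at the centre, 0.0 at the rim
    return base + boost * t

print(particle_speed(0.0))    # 50.0 at the centre
print(particle_speed(100.0))  # 45.0 at the rim
```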


Copy and paste Event 001 and hook it up to the PF Source node. Set its Birth Start and End to 40. In the Speed By Surface operator reduce the Speed to 35m, and raise the Life Span in the Delete operator to 15. This will create an additional quantity of fuel that ensures the fire continues to rage once the initial shape has been defined.

Expert tip: Re-using simulations
FumeFX grids are extremely versatile; they can be moved, rotated and scaled. Even after simulating, you can alter the Fire colour to change the look and feel of the explosion.





Create a FumeFX grid that covers the bounding volume defined in Step 3. Set the Width and Length to 250m, the Height to 100m, and the Spacing to 0.6m. In the Obj/Src tab, create a Particle Source from the Create From List button, then inside the parameters for this Particle Source add our PFlow set-up using the Pick Object button. Set the Radius value to 2.5m.

In the Simulation tab set Gravity to 0.65 and the Turbulence to 0.02m. In the Turbulence Noise settings set the Scale to 75m and Detail to 5. In the Fuel settings set the Burn Rate to 1 and Expansion to 1.4, check Fire Creates Smoke, and set the Smoke Density to 1. The lower the value of the Burn Rate, the longer it will take for the fire to extinguish. Set the Extra Detail Mode to Wavelet Turbulence.

13 Rendering the smoke
In the Smoke render settings, click the colour swatch for the Ambient Color and set the RGB values to 11,11,11. Use the same values for the colour swatch and set the Opacity to 2.0. Check both the Cast Shadows and Receive Shadows checkboxes at the bottom of the panel. The Fire shader in particular can greatly alter the look of the explosion; however, we will be rendering additional passes so that it can be altered easily in post-production.

In Max's Time Configuration settings set the End Time to 150. Echo this update in the General tab in FumeFX, under both the Output and Playback settings. In the FumeFX Preferences check the Auto Synchronize Paths setting. Back in the General tab set the Default Path to a directory where you have at least 120GB of hard drive space. If you don't have this much space available, ignore the Wavelet Turbulence setting in the next step.

In the Fire settings of the Render tab, set Color to 0.6 and Opacity to 0.4. Click the colour swatch to alter the gradient of the shader. If it is set to a solid colour mode, right-click the swatch and select Key Mode. Create a gradient with a sharp drop-off from bright green to black. The gradient file I used in my scene can be found in the downloadable content that accompanies this tutorial.


Create a Target Spot light that points to the centre of your explosion and alter the Hotspot/Beam value to ensure that it is being fully illuminated. Position the light so that it is pointing side-on to the explosion from the camera's view. In the General Parameters rollout, set Shadows to VRayShadow, and under Shadow Parameters turn Atmosphere Shadows on. In the Illumination tab of FumeFX use the Pick Light function to add the light to the list.



Optimising: Saving time and memory
Keep track of how much memory 3ds Max is using by right-clicking the taskbar and clicking Start Task Manager. The memory used will be listed beside 3dsmax.exe in the Processes tab. When rendering with V-Ray it may look like your render has stalled after loading all the objects and caches. This is due to the dialog box showing the progress of the Illumination Map calculation not always being shown, a step that can take several minutes. Save this time between renders by checking Read/Write to Disk under the Illumination tab in FumeFX. Also save RAM and disk space when exporting the post-processed cache by excluding unnecessary channels. Check Minimize Grid in the Wavelet Turbulence Post (WTP) tab in FumeFX to optimise your export and reduce the file size.

Recreate a battle scene

Three-point lighting
Create a fake Normals pass
Being particle based, FumeFX doesn't contribute to the Normals pass in V-Ray, but we can set up our own. Add three Spot lights like those created in Step 15; change the colours of these to red, green and blue. Arrange them around the explosion: red from the left, green from behind, blue from the top. Render this in addition to your regular smoke pass and bring it into Photoshop. As with a regular Normals pass, this can be used to re-light our explosion.
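The trick works because each channel of the render stores how strongly the smoke faces one of the three light directions, so relighting in post becomes a weighted mix of the channels. A toy sketch in Python, assuming the three passes behave as independent per-channel light responses (a simplification of what V-Ray actually renders):

```python
def relight(rgb, weights):
    """Relight one pixel of the fake normals pass. Each channel holds
    the response to one coloured light (red = left, green = behind,
    blue = top); a new light direction is approximated as a weighted
    sum of the three responses, clamped to the 0-1 range."""
    r, g, b = rgb
    wr, wg, wb = weights
    value = wr * r + wg * g + wb * b
    return min(max(value, 0.0), 1.0)
```

For example, weighting only the blue channel reads back the top-lit response, as if the explosion were lit from above.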







Click Start Default Simulation at the top of the FumeFX dialog to begin processing the explosion, which can take several hours. If you feel it is taking too long you can raise the Spacing value to reduce the resolution of the grid. Once complete, go to Preview>Make Preview to render a quick preview from your chosen viewport. If you had Wavelet Turbulence turned on, select Sim. Mode>Wavelet and repeat the process.

Create a new empty Flow and Birth event in PFlow, hooking the two together. Set the Birth Start, End and Emit values to 1. Right-click and append a Position Object, Speed By Surface and Force node. On the Position Object, add the Plane to the list of emitters. Do the same with the Speed By Surface node and set the speed to 125m. In the Force node, add both of the Forces.

In a new scene, create a circle in the Front viewport and set the radius to 100m. Set the Z transform to 0, or level with your ground plane. In the Top viewport create a Plane and position it at the right-most side of the circle. Back in the Front viewport, rotate the Plane -20 degrees in the Y axis. Add a Gravity Force and a Wind Force, then create a FumeFX grid that covers the top half of the circle.

Append a Spawn node from the Test menu, set Spawn Rate to By Travel Distance, and the Step Size to 1m. Set Spawnable to 50 and Variation to 25. Set Inherited, Variation and Divergence to 50. Create a new Spawn Test event and append a Delete node; set Life Span to 1 and Variation to 2. In the new Spawn node set Offspring to 3, Variation to 15 and Divergence to 80. Then connect the Spawn nodes together.
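The By Travel Distance settings above can be pictured as follows: every Step Size units a parent travels, it gets a spawn opportunity, Spawnable is the percentage of opportunities that fire, and each firing emits Offspring particles. A toy model in Python (a sketch of the idea, not 3ds Max's exact spawn algorithm, and it ignores the Variation settings):

```python
import random

def spawned_count(travel_distance, step_size, spawnable, offspring, seed=0):
    """Estimate how many child particles one parent emits: each
    `step_size` of travel is a spawn opportunity, `spawnable` is the
    percentage chance it fires, and each firing emits `offspring`."""
    rng = random.Random(seed)  # seeded so the estimate is repeatable
    opportunities = int(travel_distance / step_size)
    count = 0
    for _ in range(opportunities):
        if rng.random() * 100.0 < spawnable:
            count += offspring
    return count
```

With the tutorial's Step Size of 1m and Offspring of 3, halving Spawnable roughly halves the trail density, which is the main dial for tuning how heavy the smoke trails look.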

ExpErt tip

Adding 2D fire
Pyrotechnic VFX elements
are usually shot against
black backgrounds.
To remove this, set the
Blending Mode of these
elements to Screen.
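The reason Screen mode drops out black backgrounds can be seen in its per-channel formula, sketched here in Python (assuming channel values normalised to the 0-1 range):

```python
def screen(base, blend):
    """Screen blend: invert both inputs, multiply, invert back.
    Black (0.0) in the blend layer leaves the base unchanged, which
    is why fire elements shot on black composite cleanly in this mode."""
    return 1.0 - (1.0 - base) * (1.0 - blend)
```

A pure-black pixel in the fire element therefore has no effect on the plate beneath, while bright pixels only ever brighten it.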

Adjust the Strength values of the Wind and Gravity Forces until the particles follow the circle. I used a Gravity Strength of 0.03 and a Wind Strength of 0.01. As with the explosion, add the PFlow system to the list of Sources within FumeFX. In the Simulation tab set the Gravity to 0.1, the Turbulence to 0.01, and the Turbulence Noise Scale to 0.5. Increase the Smoke Opacity to 10 to make the smoke appear thicker.


Create a Plane that will represent the ocean in your scene. Create a new VRayMtl with a black Diffuse colour, and set the Reflect colour to RGB 144,144,144 with a Reflection Glossiness value of 0.95. Add a Noise map to the Bump slot and change the Y Tiling to 0.4. In the Noise Parameters set the Size to 3.5, and the High and Low values to 0.945 and 0.07 respectively. Apply this material to the ocean Plane.


Illuminate the smoke trails

21 Rendering FumeFX

Merge the FumeFX grid into your scene and use the Output Preview window as a visual guide when moving and rotating your explosion into position. Having a long sequence of cache files means we have lots of options and variations when it comes to rendering multiple explosions. Open the Render Setup window by pressing [F10] and uncheck Enable GI. Set the desired render size and hide everything except the FumeFX grid, Spot light and ocean.

22 Render passes
Navigate to the Elements tab and add VRayWirecolor, FumeFX Fire, FumeFX Smoke, and FusionWorks Z Depth to the list of Render Elements. The latter works in the same way as the VRayZDepth pass does; simply use a Tape helper to measure the closest and furthest points of the FumeFX grid. Also add the VRayRawReflection element to boost the strength of the explosion's reflection in the water. Click Render once all passes have been added.
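The near and far points measured with the Tape helper define how camera distance maps into the Z Depth pass. A minimal sketch of that normalisation in Python, assuming the common convention of 0.0 at the near point and 1.0 at the far point (some depth elements, including VRayZDepth's default, invert this so white is near):

```python
def z_depth(distance, z_min, z_max):
    """Normalise a camera-space distance into a 0-1 depth value,
    clamped at the near (z_min) and far (z_max) distances measured
    with the Tape helper."""
    t = (distance - z_min) / (z_max - z_min)
    return min(max(t, 0.0), 1.0)
```

Setting the near/far range tightly around the FumeFX grid is what gives the pass its full tonal range for depth-based grading in post.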

One: Mask the debris
Move the render into a new group and use the alpha channel to mask both the group and the render. Right-click the mask for the render and select Apply Layer Mask.

Two: Increase brightness

23 Foreground elements
To frame the shot I constructed a simple castle wall, complete with torches and a wooden walkway. The characters were made with Fuse by Mixamo. As the wall uses a high-resolution displacement texture, I rendered the entire foreground separately to the rest of the scene. A VRay Plane light is used to create green highlights on the wall and the characters; this helps convey the size and intensity of the explosion.


Create a new Photoshop document, adding the foreground, ocean and explosion elements. When comping in scenery from photography, add a Black and White Adjustment to the top of the layer stack. Add a Levels Adjustment as a clipping mask ([Alt]+left-click) to the photography, and adjust the values until the greyscale levels match. When the Black and White layer is switched off, the brightness levels of the photography will more closely match your render.

Create a solid black layer beneath the group to improve visibility. Add a Levels Adjustment to the group, and move the midpoint to the left.

Three: Add the gradient
Select the Gradient tool by hitting [G]. Select the mask of the Levels and click-drag from the end point of the smoke trail to the start.

Expert tip


Create a new layer, fill it with black (press [D] then [Alt]+[Backspace]) and set it as a clipping mask to the photography. Change the Blending Mode from Normal to Color Dodge and select the Brush tool by pressing [B]. Set Opacity to 10 per cent by pressing [1] on the number pad. Sample the colours from the brightest areas of the explosion ([Alt]+left-click) and paint coloured highlights into the photography. Use the [ and ] keys to alter the brush size on the fly.
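Color Dodge works well for painted highlights because of its per-channel maths, sketched here in Python (channel values assumed normalised to 0-1):

```python
def color_dodge(base, blend):
    """Color Dodge: divide the base by the inverse of the blend.
    A black blend (0.0) leaves the base untouched, so low-opacity
    strokes only push highlights where you actually paint."""
    if blend >= 1.0:
        return 1.0
    return min(base / (1.0 - blend), 1.0)
```

Because the divisor shrinks as the painted value rises, already-bright areas of the plate blow out quickly while shadows barely move, which mimics light spill from the explosion.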

26 The final grade
Create a Curves adjustment layer then hit [Ctrl]+[I] to invert the mask. Replicate the pictured curve profile to raise the mid-tones, and use the Brush to paint a blown-out effect into the central areas of the explosion. Copy Merged by pressing [Ctrl]+[Shift]+[C], Paste ([Ctrl]+[V]), and go to Filter>Blur>Gaussian Blur. Set Radius to 5.0, click OK, and set the Blending Mode to Screen. Create a layer mask, invert it by pressing [Ctrl]+[I] and paint a glowing effect into your scene.



Changing perspectives
Create a flattened copy of your .psd by pressing [Ctrl]+[A], [Ctrl]+[Shift]+[C], [Ctrl]+[V], and select Edit>Transform>Flip Horizontal. A change of perspective can highlight errors previously unseen.

Master Houdini

Artist profile
John Moncrief
John is the resident
Houdini and dynamics
tutor at Digital-Tutors.
His deep passion for
putting beautiful,
explosive effects on
screen is only matched
by his strong desire
to share his skills.

Houdini 14

Getting started in Houdini
John Moncrief explains how to overcome the pitfalls
of starting out in a new software package

Click to play video

Topics covered
Getting started
Houdini desktop

Learning a new 3D package can be a daunting task. There always seems to be a never-ending black hole of sliders, checkboxes and rollouts. Plus, each company wants you to call their geometry something different (is it cube or box, sphere or ball, grid or plane?). But try to remember the goal is the same: to design a visually interesting image or develop a tool that helps another artist to do so.
The best way to learn Houdini is to tinker about with it. Remember you can save your work at any time and delete the file later if you want to, so get exploring and break things. When they break, try to figure out what has happened. Almost every node, pane and parameter in Houdini has an accessible help file. Look for the ? icon in the top right corner of panes, and while hovering over the name of a parameter press [F1] to get specific help relating to that parameter.
When you're tinkering, adjust values aggressively. Set the parameter's values high and low to see what it's capable of doing, what it controls, and what it can be used for artistically.
However, it's important when exploring like this to always bring a parameter's value back to something reasonable before moving on to see what the next parameter does. That way you are isolating the results of your changes to just the parameter you're currently adjusting. Once you get a good grip on what each parameter does, then you can start blending them together to get really creative.

The video
If you see the Play icon, use the link
A folder containing .hip files to be opened in Houdini is included in this issue's online Vault. Use these to follow along in the interface of Houdini. These files contain several more tips and tricks that are clearly noted within the interface of Houdini using the built-in Sticky Notes tool. There are several folders which are empty, but are required for Houdini to work properly.
For all the assets you need go to

Let's break something!

One: Set up geometry
Create a sphere by holding [Ctrl] and clicking the sphere icon on the left shelf. In the Parameters pane set the Y translation of the sphere to 5. Next, click the Rigid Bodies tab on the right shelf and click on RBD Object. Select the sphere in the viewport and press Enter.

Set up your project
Houdini 14 now uses a project-based system that automatically constructs a set of folders for every project you work on. To set up a new project, go to File>New Project, choose a directory to house your project folders and click Accept. If you're working on multiple projects, go to File>Set Project to pick an existing project folder. It's a good idea to have a default project you can use just for tinkering, then use a separate project for actual production work.

Two: Make breakable

Still in the Rigid Bodies tab,
click Make Breakable. Select
the sphere from the viewport
and press Enter. Click on the
Ground Plane icon, which
will give the sphere
something to collide with.
Hold down the spacebar
and press [A] to frame all
the objects in the viewport.

Three: Simulate
Set the timeline to playback
in real time by clicking the
second button from the
end of the timeline called
Real Time Toggle. Click
Play on the timeline and
watch your first Houdini
simulation! Remember to
dig around inside the nodes
that were created by the
shelf tools to learn more.

Using shelf tools

Houdini's shelf tools are very robust. With a few clicks you can have a strong base for most types of simulations. You might feel like everything you do needs to be built from scratch, and while there probably aren't too many final shots created with the click of one shelf button, it's worth noting that even in large studios many scene setups begin with the shelf tools.

Use the right desktop
Spheres, fluids, dynamics and particles are data. Each area of the interface (pane) visualises or manipulates this data. One pane might be the Network view, another the Material Palette. The pane arrangement is a Houdini desktop. If you're working in dynamics, use the Technical desktop; for general tasks, the Build desktop. You can also customise the Houdini UI and save your own arrangements as desktops.




The Tab key is your friend
When searching for tools in Houdini you only need part of the tool's name. Instead of having to go to Tool Menu>Nurbs>Refine, all you have to do is hit [Tab] then press [R] and you'll see all the nodes starting with R that are relevant to the context you are currently in. So no matter where the tool is located in any menu, you can easily find it with just a couple of quick keystrokes.

Understanding contexts
Houdini has different contexts that handle different operations, and each one has custom tools. The SHOP context, for example, handles Shading Operations; the DOP context handles Dynamic Operations. The tools within these contexts are called nodes. Houdini is very context sensitive, which means that you only get access to the nodes you can use in that context. This is great because it prevents you from creating a node in the wrong context.

Learn from Orbolt
Example files are a great way to learn. Use the Asset Browser pane to find a link to the Orbolt online resource. Assets are available including character rigs, dynamic effects, shaders and models. Once you choose an asset it's automatically added to the Asset Browser in Houdini's interface.

Find training and help online

One: Start here

Two: Training

Three: od[force]

Go to the Start Here link in the Help menu. The videos bring you up to speed on everything from viewport basics to building a digital asset.

Visit Digital-Tutors
to watch a variety of
video-based training
for all levels of Houdini
software users.

The od[force] forums have built a family of artists and technicians. Users share insights and tips with newbies and veterans. There are also example files available.



Explore the preferences
Go to Edit>Preferences and start exploring. Here are a few things to adjust when you're first getting started: in General User Interface, enable Color Pane Headers and Show Images for Image Files. Under Network Editor, Nodes and Trees, enable Show Full Input and Output Names on VOP Nodes. In Save Options change the Auto Save interval to 10 minutes or greater. It can be quite annoying if the file is trying to save while you're simulating.

Turn on auto save

Drag equals control

By default, the auto save feature in Houdini is turned on. But don't be fooled: this setting is session-dependent, so you'll have to set it manually at the beginning of each Houdini session. The option is found under Edit>Auto Save. It's a good idea to have this on while you're building a scene, but before you get ready to run a big render or a complicated simulation, turning this feature off can save some system resources and help speed up your workflow.


When setting up
particle or fluid
simulations, always
add a Drag Force node
to your simulations.
Drag mimics the loss
of energy over time
by applying force and
torque to objects,
helping them resist
their current direction
of motion. Drag will
help you to control
your simulations while
maintaining a natural
looking motion.
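The energy loss the tip describes can be sketched as a per-step velocity damping in a toy integrator. This is a minimal illustration of linear drag under explicit Euler integration, not Houdini's actual Drag Force implementation:

```python
def step_velocity(velocity, drag, dt):
    """One explicit-Euler step of linear drag: the particle resists
    its current direction of motion, losing a fraction of its speed
    proportional to the drag coefficient each step."""
    return velocity * (1.0 - drag * dt)

def simulate(velocity, drag, dt, steps):
    """Apply the drag step repeatedly; speed decays towards zero
    without ever reversing direction, which is what keeps dragged
    simulations controllable but natural looking."""
    for _ in range(steps):
        velocity = step_velocity(velocity, drag, dt)
    return velocity
```

Raising the drag coefficient settles the simulation faster; a small value just takes the edge off runaway motion.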


Create a knight character

The video
If you see the Play icon, use the link

Maya | Arnold | Photoshop | nDo | Mudbox | ZBrush

Detail a character for cinematic production
Carlos Cruz demonstrates different techniques to achieve a realistic knight character

In this tutorial, I will show you the process of making a character for a cinematic production. The first thing to do is to get all the reference that we might need for the character ready. For this character, I've chosen to take concept art from a Korean artist named Gpzang (Young June Choi) and turn it into a highly detailed CG character. You can see his original artwork here:
I start by blocking out my character in Maya: it is really important here to use basic shapes in order to achieve the proportions in the fastest and cleanest way possible.

Artist profile
Carlos Cruz
Carlos started as a
3D generalist and
compositor in the
advertising industry
in Chile, then he
moved to Finland
and is now working
as a freelance 3D
character artist.

In the second stage, I create a kitbash: I start making a library of smaller details and pieces that will be used in the model many times.
In the third stage, I create ornaments in the armour using alphas and Projection Master, and create the skin details with maps from Surface Mimic inside ZBrush. Then I move on to the UV mapping. It is really important here to have a good layout in the UV space so that the major areas get the most pixel information in the UV tile.
For the texture stage, I use nDo2, which helps me to get the Ambient Occlusion from the Normal maps, and with Mudbox I start painting the texture, to finalise in Photoshop. Then, using Arnold for Maya, I set up the shaders. Next I move to ZBrush again to make the pose with the Transpose Master and create the hair with FiberMesh. At this stage I visualise the final composition of the piece.
I finish off by setting up the render in AOV passes in Arnold. I then export as .exr files and import into Photoshop for the final composition. To help, I've included my textures and meshes to download from the Vault.
For all the assets you need go to

Expert tip

Check the overall

From time to time take a
break to check the whole
model. Bring a screenshot
of the model into Photoshop
and start to change it very
roughly to see what can be
improved. Then take it back
to model further in your
3D package.

Click to play video


Topics covered

Inside Maya, use a base female model to set the basic proportions and then model each part of the armour with basic shapes; this enables you to change the proportions in an efficient way. It's important to use basic shapes because you will need to make many corrections at this stage to adjust the model so that it matches the concept art chosen. You also have to think about how all the parts will fit together.



Refine the primary shapes to complete the model. Take your time here and pay attention to the details. At this stage I often create a kitbash: in the same scene, next to the character, I start building basic shapes that I will use often, like belts, bolts and other parts of the armour that will be duplicated many times. It is important to achieve a clean hard-surface model using the orthographic views; use symmetry to help you.

A knight in armour
Carlos manages to create a 3D character with great colours, lighting, texture, armour design and dramatic pose




Expert tip

Saving time
Time is very important,
especially when youre in
production. By using base
meshes, textures, and
certain techniques you
can save yourself
hours of work.

Working with names & layers
It is important to be very
organised when character
modelling as there are so many
parts to think about and you have
to jump from Maya to ZBrush, and
then to Mudbox and Photoshop.
You will end up with gigabytes of files, including ZBrush projects, Maya scenes, .obj files and so on. So
you need to organise each file
with a proper name and add
a letter or a number to define
the version. Sometimes things
happen that mean your .obj files have an incorrect vertex order, but if you are organised, you will easily detect the error and be able to search for a solution. For
texturing, working with layers and
adjustment layers in Photoshop
is really important for the shader
development, because textures
tend not to work the first time;
I usually tweak a texture until
version eight or 10 to achieve the
shader that I want. Working in
this way means that if you want
to make a change in the texture,
you can just go to the specific
layer and edit only the adjustment
layer, or add a mask to make the
changes, while the main image
remains untouched.


Now search for the alphas to make the ornaments. First, bring the alpha into Photoshop to eliminate the pixelation; you can do this by adding a blur effect at a very low value. Then bring the alpha into ZBrush, where you can use the Morph Target to store the geometry, and then use Projection Master to place the alpha in the correct position and scale to project it. You can control how deep the details will be projected using the Morph Target.


To model the details of the face you'll need to bring in a female skin displacement scan; I used one from Surface Mimic. Project the scans onto the model (I used Mudbox) as a greyscale file (the face must have its UV map), then export the greyscale file from Mudbox as .tga or .tiff and import this into ZBrush. In the Displacement tab, use the Morph Target to store the geometry and then apply the displacement to the model.


To make the UV maps I mainly use Maya or, in some cases, UVLayout. In Maya 2015 the Unfold3D plug-in was added to the core software, and it is now cleaner, easier and more accurate to make UV maps in Maya. I only use UVLayout when I have to replicate a model many times, so I can transfer UVs very easily; also, if I modelled a belt, UVLayout gives me the option to expand the UV map into a perfect square.


How to control the different maps


For the armour, use the Normal maps extracted from ZBrush, then convert them inside nDo2 to generate an Ambient Occlusion map. Now you have a base map you can import into Photoshop to start texturing. The Ambient Occlusion map is a great guide, as you have all the ornaments in the 2D space. Paint the base diffuse map with photographic textures from a stock library, for instance. Keep everything in layers, as we're going to create the different maps by tweaking these.

One: Set up the canvas
In Photoshop, I import the Ambient Occlusion that was made in nDo2. I tend to work at 4K for the maps. I bring in basic textures to make the diffuse map.

7 Texturing the lace
To make the lace effect in some parts of the model, use alpha images. I search the Internet for different images of lace and take them into Photoshop to build the alpha. Luckily, most lace images are photographed on a black background, so all I have to do is use the Curves tool in Photoshop to give the images greater contrast. Then make a tile of the image in the UV map section of the lace model.


I tend to do the shading and texturing tasks in parallel, because sometimes you have to go back and modify the texture to achieve the shading that you want. The most important thing before you start shading is setting up the render. In Arnold I work using a linear workflow, so first we set the Display driver gamma to 1.0 and, in the Color Manager's Image Color Profile, change it to linear sRGB. Now the render will look physically correct.
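The reason a linear workflow matters is that sRGB-encoded textures must be decoded to linear light before any shading maths, then re-encoded for display. A sketch of the standard piecewise sRGB transfer function in Python (the renderer and colour manager do this for you; the code only illustrates the conversion):

```python
def srgb_to_linear(c):
    """Decode an sRGB-encoded channel (0-1) to linear light."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode a linear-light channel back to sRGB for display."""
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * c ** (1.0 / 2.4) - 0.055
```

Note how mid-grey in sRGB (0.5) is only about 0.21 in linear light; doing lighting maths on the encoded values is what makes non-linear renders look washed out or overly dark.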


Little by little, I start adding layers of dirt, scratches and all the textures that are needed to achieve the diffuse map. The diffuse map doesn't have any specular or shadow information.

Three: Create other maps
Back in Photoshop, take each texture map and create Glossiness, Specular and Bump maps. The reason why I keep everything in layers is because, with the diffuse map, I make a group in Photoshop and then duplicate it; I call this new group Specular Map, for example. Then I always use Adjustment layers to make changes in the layer in a non-destructive way, so if I need to go back to a previous stage it will be easy to do so.

Two: Adding layers
Creating the specular map depends on the material you're using; think about where the highlights are going to be and put a whiter colour in those areas.



Expert tip

Lighting is key
A good lighting set-up
can change the mood of
your artwork, so try to
have some reference of
illumination and always
pay attention to how
shadows and reflection
work on the model.

The concept is key
Learn how to become a great storyteller
In every 3D picture we end up telling a story, whether it's a character or an environment; for that reason we need to be able to bring more realism to a 3D image and always be mindful of the concept. For this project, I've tried to create a knight that has the elegance and grace of a beautiful woman, and for that reason I put a lot of attention into the pose and her attitude, hoping to convey that idea. Making realistic metal armour, or rendering a fabric, is a technical process, one where we have to use good reference and study reality, but to achieve an appealing character we need to think about how the character will interact in the scene that we are creating. What is beauty and what is grace are important questions that I ask myself whenever I try to make this type of character.





For posing, bring all the parts of the model into ZBrush, then use the Transpose Master plug-in to make the whole model one subtool. In order to create a believable pose it is really important to have references. Before you start modifying the topology, try to move each part separately and roughly pose the character, then move the pieces until you're happy with the overall shape; after that you can start to bend and model the geometry to adjust the proportions.

For the final image, use the IPR option in Arnold. This enables you to create a real-time render of the model so you can see how the light works. I set up an Ai Skydome light to illuminate the scene with an HDR. At the beginning, use low sampling so that you can see the result faster, and then start rotating and moving the HDR until you feel that the character has a good composition. At this stage, I often use many HDRs to try different lighting setups.

For the hair use FiberMesh. Create a scalp by duplicating the geometry of the face model; the hair can then be created on top of this. In this instance, I created the hair after the pose because I wanted to model and comb it according to the pose. In the scalp model, make a couple of polygroups, then create the FiberMesh and start combing by isolating the polygroups; this gives you a lot of control to achieve the haircut you want.

Once happy with the render preview, set up a high-sampling, high-res image for the final output. For more control in post, set up the AOV layers in Arnold; this will enable you to have each different pass in a different image, like diffuse, specular and so on. When the render is done, take it into Photoshop. To start compositing, import all the passes and layers you have in Screen mode, then start adding adjustment layers to fine-tune the final image.


Sometimes weird objects may appear to cut through the different surfaces: if it is only a small detail consisting of a few pixels, don't be afraid to make corrections using the Clone Stamp. To break up the clean look of the 3D, I often add filters to improve the quality of the image, such as a sharpen or a small chromatic aberration (to distort the image slightly), and lastly I add a little noise to break any banding that may occur.


US/Canada & Rest of World offer

Read what matters to you when and where you want

Whether you want 3D World delivered to your door, device, or both each month, we
have three great options to choose from. Choose your subscription package today

Print & digital bundle: from $79 US/Canada / from $83.50 ROW
Print: from $65.50 US/Canada / from $70 ROW
Digital: from $30 US/Canada/ROW

When you buy

foR Uk &

turn to page 28

Subscribe today
TERMS AND CONDITIONS Prices and savings quoted are compared to buying full-priced UK print and digital issues. You will receive 13 issues in a year. If you are dissatisfied in any way you can write to us or call us to cancel your subscription at any time
and we will refund you for all un-mailed issues. Prices correct at point of print and subject to change. For full terms and conditions please visit: Offer ends 12 October 2015.



3D printing

3D maker
Exploring the best 3D print art, technology and trends

Get published
Email your CG art to

I can still remember seeing my

first 3D print. I just stood
there in awe

Visit the online Vault to download

extra process art for these projects:



Each of Michaels
sculptures is based
on a specific tree,
and named after the
GPS coordinates of the
place where it grew


Growth industry
Currently on display at 3D Printshow, Michael Winstone's organic forms, based on real trees, are a wonder to behold

Michael Winstone's work embraces a wide range of digital media, including 2D, 3D, 4D, time-based installation and interactive video animation. He has used various 3D volume rendering programs for mathematical visualisation of fractal forms which can be translated into volumetric art.

To many artists, 3D printing is a relatively new medium. But not to Michael Winstone: he's been using it to create art for an astonishing two decades.
"It was the mid-1990s when I started to produce my first 3D prints," he recalls. "I can still remember seeing my first print on a home-made kit. I just stood there in awe. It was very basic compared to today's standards, but it was a revelation to me."
After that, Michael began to use Z Corporation machines, then later on, Stratasys systems. "When I started, I mainly used LightWave [for 3D modelling], as at the time both software and hardware were extremely expensive," he adds. "When prices fell, my palette of software expanded."
Today Michael is famed for his sculptures, which examine our own bodies within the context of the human family structure and its relationship with nature: in particular, trees, and their own external anatomy.

Each sculpture is based on a specific tree, with the title being the GPS coordinates of the place where that tree grew or is growing.
Michael uses images of the tree's bark to texture and displace the surface of the 3D model, so older models preserve the striations and growth forms of trees that no longer exist, as well as current trees that may be lost in the future.
"The elements within the work grow in the same way human, animal and plant tissues regenerate: as substrates that are informed and infused with environmental forces which then grow to form their various functions within the natural world," he explains. "They are transformed into complex intertwined sculptural structures."
Describing the process by which he created his eight-metre-high sculpture for Spain's Expo Zaragoza, shown at the top right of the page, Michael comments: "First I recorded bark data from a white willow tree growing by the river Ebro in Zaragoza to form the foundation of the sculpture. With this work, my



aim, apart from [the final step of creating a] bronze casting, was to achieve a complete digital process. I modelled the sculpture in LightWave, textured and displaced it using ZBrush, then exported it as an .stl file. The sculpture was milled in Styrofoam with a 5-axis CNC machine, then cast in bronze. Finally it was placed at the site using AutoCAD [to plan the installation]."
"As with other forms of art, there is something pure and profound when the process is the same from conception to completion," he adds. "I try to keep my work completely digital wherever possible."
Currently exhibiting his work at 3D Printshow California, Michael is also looking forward to 3D Printshow Dubai this November.
"I have not exhibited in the Middle East before, so the work will be seen by a new audience," he explains. "Hopefully architects and planners will be able to see first-hand the potential for 3D printing sculpture in exterior urban landscapes."
FYI To find more of the 3D Printshow, visit

3D maker
Making a prosthetic hand

The model
This issue, we have a 3D model of a hand to download, so that you can have a go at making your very own 3D prosthetic hand





3D print column

3D print a prosthetic hand

Aiman Akhtar explains how you can use 3D printing
to help those in need of a prosthetic limb

The news is full of stories about 3D printed prostheses helping people regain their independence. So this month I decided to take a departure from being the 3D print designer, and instead be part of a 3D print support crew for a good cause.
I knew I wanted to help but was honestly terrified. Being an entertainment artist with no engineering background, I had no idea where to begin articulating something as complex as a human hand. It's also a big responsibility to take on: what if I did something wrong and the hand didn't work? Far worse than my embarrassment would be giving someone else false hope.
Passionate volunteers have come together
all around the world to help in this cause,
such as e-NABLE, a Google Plus
community which grew to become the
largest open source collection of 3D printed
prosthetics online. I answered a call for
assistance on the 3D Hubs forum for printing a
prosthetic hand and met up with Michael
Repajic, a surgical attendant who is building
a non-profit, Humanus, Hands United Inc,
to supply 3D printed prosthetic hands to
the homeless population of Los Angeles.
He learned about 3D printing for prostheses
while taking classes at the University of Southern
California, where the on-campus 3D printing
club was helping print hands for children in
Haiti and Central America. Having volunteered
at a homeless shelter in Los Angeles and
seeing a need, Michael decided to bring the
project home to his local community.
Michael met with his first candidate and
chose the Cyborg Beast prosthetic arm, one
of the most popular designs available online.

Having no engineering background, I had no
idea where to begin articulating something
as complex as a human hand

Jorge Zuniga and the team at Creighton
University released the Cyborg Beast model
open source to the world, licensed under the
Creative Commons Attribution Non-Commercial
license, and have improved it steadily over the
past two years with feedback from the
e-NABLE community.
Michael worked with a local 3D printing
service provider, Lyle Thompson, who
volunteered to set up the proper configuration
and scale for the hand and printed out all the
parts. Michael then

Artist profile
Aiman Akhtar
Character artist
Aiman enjoys creating
personal, digital
artworks and continues
to work on various
freelance projects.
Hes also a beta tester
for Adobes 3D print
tools and continues
to experiment with
the technology.

ways to get involved

Improve and customise the designs, then upload them for everyone to share
Many of us early adopters of 3D
printing technology are entertainment
artists, yet owning a 3D printer is
a good enough reason to get
involved. Here are a couple of online
communities making real strides with
3D printed prosthetics that are worth
checking out.
Enabling the Future
Jon Schull is a Research Scientist at
the Rochester Institute of Technology
and a founding member of the e-NABLE
community. He helped set up the
Enable Community Foundation, a
non-profit which supports the global
community of volunteers using
emerging technologies to develop
innovative solutions for underserved
populations. The e-NABLE
community is pioneering this
movement by designing, fabricating
and distributing open-source
3D-printed prosthetics for people
who need them, and giving them
away for free online. Jon describes
the volunteers as a diverse bunch,
made up of tinkerers, engineers,
3D print enthusiasts, occupational
therapists, university professors,
designers, parents, families, artists,
students, teachers and people who
just want to make a difference. An
interesting trend he has noticed in 3D
printed prosthetics is that US children
love Superhero Hands and girls like
pink and purple, while in the developing
world, more organic-looking hands
are desirable. Visit the website and
explore their forums to get involved
and matched with someone in need
of a hand near you.

Limbitless Solutions
Founded by Alberto Manero,
Limbitless Solutions is a company
based out of Orlando, Florida
that uses 3D printing to create
personalised bionic limbs for children
with disabilities. They partner with
local companies, communities
and corporations to sponsor each
prosthetic, believing that no family
should have to pay for their child to
receive an arm. What distinguishes
Limbitless is their push for bionic
prostheses of myoelectric design,
which can be controlled through
muscle sensors, servos and an
Arduino microchip. Evan Kuester,
lead modeller/designer, and Jackson
DiMaria of the research and
development team have worked
on the production of several of the
Limbitless prosthetic arms and are
now in development on bionic legs.
In fact, the majority of the team is
made up of University of Central
Florida students and recent graduates.
As they grow, Limbitless is actively
recruiting at other universities across
the US and welcomes volunteers,
support and artist submissions for
prosthesis and sleeve designs on
their website.

Several more prosthetic hands are
available online at model sharing sites
such as Thingiverse and the NIH 3D
Print Exchange, and many individuals
have even crowdfunded their own
prostheses, such as the Cyborg Beast
and the Open Hand Project.


The 3D printed hand

was tested: a small
crowd gathered to
experience this little
marvel of engineering
and technology

cleaned up the print and added the strings

and straps to prepare for a fitting.
The Midnight Mission homeless shelter
board of directors and staff generously lent
us a conference room to test fit the 3D printed
hand and analyse its effectiveness. A small
crowd gathered to experience this little marvel
of engineering and technology. The point of
using 3D printing for prosthetics was obvious
to everyone. Traditional prosthetics can cost
several thousand dollars, while mechanical 3D
printed hands, such as the Cyborg Beast, start
at around $50.
It's hard to put a price on restoring people's
independence, yet lowering the cost increases
accessibility for far more people. The first
fitting went surprisingly well, and Michael
left with several ideas on how to improve the
design for the recipient's individual needs.

Quicker iterations allow for better

customisation to each individuals
needs and improved functionality
Cost is usually the big selling point for using
3D printing technology for prosthesis, yet
through my first-hand experience, the biggest
advantage is that it allows regular people such
as ourselves to get more involved in our local
communities. Quicker iterations also allow
for better customisation to each individuals
needs and therefore improved functionality.
The technology is also self-sustaining, as
seen in the example of Project Daniel by Not
Impossible Labs, where the team was able to
set up a 3D printing lab in war-torn Sudan and,
within a month, the locals were able to print a
hand per week! Whether making a difference
in far-off lands or in our own communities,
the ultimate goal of technology is to bring
us closer together, and what's better than
figuratively and literally lending a hand?
For assets and 3D print-ready models go to


Design project: create a 3D printed prosthetic hand

The model files can be downloaded from

one Find a candidate

two Print the hand

three Add functionality

We met our candidate at the Midnight Mission
homeless shelter in LA, where Michael Repajic
volunteered frequently. Finding a candidate
is not hard, as many people are eager to try;
however, you must assess his or her individual
needs. This is not a one-size-fits-all project, as
people's injuries and limbs come in all shapes
and sizes. The Cyborg Beast is customisable to
fit people who have lost their fingers or most of
their hand but still retain some wrist movement.
Our candidate had a remaining wrist range of
about 1.5 inches. Measure the knuckle area and
scale all .stl files to fit accordingly.
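That scaling is a simple proportional calculation. As a rough sketch (the function name and the 55mm/50mm figures below are made-up examples, not measurements from this project), the uniform factor to apply to every part could be computed like this:

```python
def stl_scale_factor(measured_knuckle_mm, model_knuckle_mm):
    """Uniform scale factor to apply to every .stl part so the
    printed hand matches the wearer's knuckle measurement."""
    return measured_knuckle_mm / model_knuckle_mm

# e.g. a 55mm knuckle measurement against a hypothetical 50mm model
print(stl_scale_factor(55.0, 50.0))  # 1.1
```

Applying the same factor to every part keeps the assembly fitting together after printing.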

Before you begin to print, remember to

carefully choose the correct arm assembly; the
right or left hand models vary and are both
included in the assembly file. Print the scaled
.stl files on a quality printer with choice of
filament (ABS, Flex series, etc). Our hand was
printed by Lyle on a MakerBot, as the cost is
far cheaper with an FDM-type printer and the
generated plastic parts are more durable than
nylon or resin. Once printed, sand down any
sharp points and drill into the holes to clear
the parts of any burr and make sure everything
connects smoothly.

Attach all screws/bolts on the fingers and

connect the various parts. Detailed instructions
on this step are available on the Enabling the
Future website and the accompanying pdf in the
Cyborg Beast zip file. After parts are articulated
it is time to add functionality by inserting
flexible string and creating knots in order to
hold the flaccid fingers in an upright position;
these strings act similarly to human tendons. Dip
the fingers in plasti-dip or dip n grip material
to increase gripping ability of fingers. Theres no
sense in having perfectly smooth fingers as they
will be put to daily use for all sorts of tasks.

four Prep for fitting

five Adjust tension

six Fit the hand to the recipient

Attach medical grade foam to the printed

palm and gauntlet using super glue. The foam
prevents irritation from rubbing and chafing of
the skin and provides support and comfort. The
Cyborg Beast has built-in connectors on the side,
through which Velcro straps can be looped in
order to make the hand secure and wearable.
The .pdf recommends using multiple Velcro
straps on each side of the print, but we found
that looping a single, longer Velcro strap
through both connectors enables the user to
put it on without help and strap it tightly to the
arm with ease.

The printed hand needs to be wrapped tightly

to the candidates arm for it to function at
maximum efficiency. Even so, the default tension
of the tendon strings may not be enough to
fully contract the fingers. The Cyborg Beast has
tensioner bricks which can be tightened by
rotating the screw, causing the string to become
tighter and allowing for less movement to
create a smooth and firm finger contraction/
grip. In addition, our candidate had not used
the necessary wrist muscles in over 30 years
so additional physical therapy was required to
master the device.

The first fitting went surprisingly well as the

recipient tested out his range of motion and
took charge in describing what was working
and where he was having trouble. There's not
always an 'aha' moment, and our hand required
additional customisation to fit the needs of our
candidate. Fitting a 3D printed hand is relatively
quick, yet optimising it for efficient daily use
requires testing and iterating. The most exciting
aspect of this technology is that new prosthetic
hand designs and innovations are being made
every day and shared amongst the worldwide
community and we can all join in.











Theory, research and reviews, plus
industry insights from today's experts
Best in class
Awarded to products that
excel in their class.

highly commended
Awarded to great products that
achieve a high standard.

88 nuke techniques

Speed up scripts with performance timers

90 Building a new virtual reality

VR isn't just for gamers. Meet Nurulize, creators of a new breed of virtual environments

97 review: photoscan

A robust, affordable photogrammetry tool

Get published
email your CG art to

Visit the online Vault to download

extra process art for these projects:

96 review: renda pw-e7f

Overclockers enters the workstation market


97 review: unfold3d 9

The UV mapping tool gets a strong update


100 my inspiration

This issue: Saizen Media's Davide Bianca

Nuke techniques

Performance timers in Nuke

In the second of his guides to core Nuke techniques, Josh Parks explains
how to create a custom node to find bottlenecks in your scripts

Author profile
Josh Parks
Josh is a compositor
at MPC, as well as
a part-time lecturer
at the University Of

Last month, we went over
some simple tricks for
optimising your workflow
in Nuke. This month, we're going
to be covering a more advanced
technique, so I'm dedicating a
whole tutorial to the subject.
As your scripts become more
complex, you will sometimes
find that they start to run more
slowly. By using the knowledge
you gained in the last tutorial and
by searching through postings
on technical forums, you can
generally guess what's making
them lag, but you still won't have
proof. Often the only solution is to
go through your script disabling
nodes one by one until it becomes
more responsive.
The solution is to use Nukes
performance timers. When active,
these turn the most intensive
nodes in the graph red, and state

under each node the time it took

to calculate. This information is
incredibly useful when running
large scripts as it enables you to
intelligently turn off nodes that
take longer to cache, then turn
them back on before you render,

Performance timers enable you

to intelligently turn off nodes
that take longer to cache
meaning that you aren't slowed
down when hardware resources
are limited.
This mode can be activated
by calling a Python function.
However, I'm going to show you
how to turn the command into a
button within a node. This will
enable you to turn the mode on
and off in a more efficient way.

An alternative would be to add
it to your toolbar, but this would
mean editing Nuke's init.py or
menu.py file, which companies can
sometimes get funny about.
For that reason, I've found
a button within a node the
most useful way to implement
performance timers. You can
simply store the code in a Google
Doc, allowing you to access it at
any time, then copy and paste the
node into the script you want to
evaluate, and delete it when done.
It also gives you the option to set it
as a tool for easier use.
In the walkthrough on the
right, Im going to show you how
to create your own node with the
button in place. Ive only tested
the script in Nuke 7 and 8, so I
cant guarantee it works in older
versions of the software.
For all the assets you need go to

Warning colours

Make custom nodes colourful!

I personally go for a bright colour
when creating a custom node, so
that it's obvious if I've left the
node in a script. It's important to
cover yourself in case someone
else has to jump into the script
and work out what is going
on. Making the node a bright
colour will make it immediately
noticeable, avoiding an artist
stumbling across a node that isn't
connected to anything else.

Nukes performance
timers show the nodes
that take longest to
calculate in red








Process: implement performance timers

Create a custom node with a button to analyse your script

one Make a NoOp node and name it

two Add a Python button

The first thing to do is create a NoOp node. This is

going to be our starting point for creating the button.
A NoOp node is exactly what its name suggests: a node
that performs no operations, providing a blank canvas
on which to create your own custom node. To bring a
NoOp into your script, hit [Shift] and type NoOp, then
hit [Enter]. Enter a name and colour for the node.

To create a button that will run a custom Python
script, bring up the NoOp's properties by
double-clicking it. Right-click anywhere in the
Properties panel (not in the Label panel) and
select Manage User Knobs. This panel allows
you to add predefined types of knobs to your
node. Select Add Knob, then Python Script Button.


Click to play video

import nuke
## Defining our definition
def performanceTimers():
    ## If nuke is using performance timers then stop them
    if nuke.usingPerformanceTimers():
        nuke.stopPerformanceTimers()
    ## if nuke is not running performance timers turn them on
    else:
        nuke.startPerformanceTimers()
## now run our function
performanceTimers()

three Paste in the Python script

Here's the Python script you need to copy into the Python Script Button. It can
be found in the downloadable content for this tutorial on the online Vault.
First, label the button: in the Label field, type Activate/Deactivate
performance timers. In the Name field, type timersButton; this is the
name that is called when referring to the button in a TCL expression. Type the
Python script into the Script field. Finally, enter the tooltip text you want to
appear when you hover the cursor over the button in the Tooltip field.

The Video
If you see the Play icon, use the link

four Toolset the node

five Push things further

In order to quickly bring your new node back into

Nuke, you can use Nukes toolsets. This will enable
you to search for it when you hit [Tab], which will
save you from having to create the node from
scratch every time. To do this, select the node and
click the spanner icon on the left-hand toolbar,
then select Create.

If you want to push this idea one step further, you can
add the node into your own toolbar within Nuke. The
Foundry has a great tutorial on its YouTube channel
covering everything you need to know to make your
own toolbar. Creating your own toolbar is a more
efficient way of integrating your own tools within Nuke.
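As a minimal sketch of what such a menu.py entry might look like (assuming the toggle function is saved as performance_timers.py somewhere on your NUKE_PATH; the module name and the 'My Tools' menu label here are placeholders, not part of the tutorial):

```python
# menu.py -- loaded by Nuke at startup; this adds a custom toolbar menu
import nuke

toolbar = nuke.menu('Nodes')           # the left-hand nodes toolbar
my_menu = toolbar.addMenu('My Tools')  # a custom menu group (placeholder name)
my_menu.addCommand(
    'Toggle Performance Timers',
    'import performance_timers; performance_timers.performanceTimers()')
```

Because menu.py only runs inside Nuke, treat this fragment as a starting point rather than a drop-in.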



Industry interview

Building a new
virtual reality
VR isn't just for gamers. Kerrie Hughes meets Nurulize,
creators of a new breed of virtual environments

Author profile
Kerrie Hughes
Kerrie is content
manager at 3D World's
leading industry
website Creative Bloq.
She is a former staff
writer for 3D World
and writes regularly for
ImagineFX magazine.

A concept illustration for
Desert Home, Nurulize's VR
environment. The rendered
version is fully photorealistic,
as you can see overleaf


Virtual reality is not a

new concept. The idea
of creating immersive
computer-generated environments
for users to explore and, in many
cases, interact with, first came
to the publics attention in the
late 1980s. However, at the time,
the expectation far exceeded
the experience. Despite the huge
popular interest, the technology
of the time was unable to deliver,

and for all but a select few, VR

remained the stuff of dreams.
Now, more than a quarter of a
century on, things are changing.
With the arrival of a new wave
of high-quality, low-cost virtual
reality devices like the Oculus
Rift, it seems that the promise of
VR will finally be realised.
The potential for this new
technology to transform gaming
(the market for which the Rift was
originally developed) is obvious.

But how will virtual reality affect
other parts of the CG industry?
To find out, we spoke to two
people with a keen awareness of
VRs capabilities: Scott Metzger
and Philip Lunn, the industry
veterans behind new software
developer Nurulize, which aims
to bring virtual reality to fields as
diverse as product visualisation
and visual effects.


The NuReality Desert

Home: a VR environment
Nurulize describes as a
virtual vacation space

SCott Metzger
Scott is a VFX industry
veteran, with a CV
that includes both
Digital Domain and
Method Studios. He is
now Nurulize's CCO.

One way in which virtual reality

differs from traditional forms of
CG is the ability of the viewer to
interact fully with the image.
"When they have the ability to
look anywhere, people tend to
look very closely at things," says
Scott Metzger. "They also
want to look behind things. They
want to look in areas that, in a
video game, they never get the
opportunity to see."
Unfortunately, the things
that interest the viewer in a VR
scene may not be the ones that
its creators expect. "You may be
building this beautifully animated
character, but when you put
someone in virtual reality, they
[may] choose to look at something
else in the corner," says Scott.
This means that artists working
in VR need to invest the same
amount of effort in every part of a
scene, not just the key areas.

"It's a completely different way
of thinking about 3D space," says
Scott. "You almost have to create
[a VR environment] in a way
that you build real spaces. If you
build a room, you're not going to
leave a portion of it unfinished,
because you know someone is
going to want to put something
there." Because the user has the
freedom to explore any element
up close, you have to finish every
detail. "When you start losing the
resolution, or losing the detail, you
lose the immersive effect."

To enable artists to create
the detail necessary for VR
environments, Nurulize has begun
developing new tools to help
automate the process.
One of these is Atom View,
a new real-time engine for
visualising point cloud data from
3D scans of real environments.

VR is a new way of thinking about 3D.
You almost have to work in the same
way that you build real spaces

Points, not polygons

"Phil and I decided that the best
way to do environment capture
more quickly is to throw out the
time-consuming parts of the
process," Scott explains. "The
current pipeline is immense:
you have to take all the scan
data and convert it to polygons,
then resurface it and line up the
cameras, and it's just a big hassle.
We thought, 'Why don't we just deal
directly with scan data as points?'"
As Scott points out, working in
this way does not just save time:
it also preserves the detail of
the original scan data. There's
so much beauty to a raw point
cloud scan that gets lost when

Nurulize plans to allow users

to customise the Desert
Home by installing their own
downloadable content

Inside the NuReality Desert Home

How Nurulize's photorealistic environment points to the future of product visualisation

converting to polygons, he says.
Polygons destroy small details.
Of course, the reason that artists
traditionally convert point data to
polygons is that the raw data looks
noisy when rendered: an issue that
Scott is well aware of.
"What makes Atom View so
different from any other point
cloud software is that it's going
to look really, really good, to the
point where you don't have to go
and do the whole [manual clean-up]
workflow," he says.
"We've also been working with
Chaos Group to get V-Ray renders
into Atom View. If you want to
quickly get content into VR [by
reusing] assets, you can bake out
your renders as object space points
and integrate them very quickly."

real-time realism

Atom View is not the only project
that Nurulize has been working
on. Drawing on Scott's production

When Nurulize founders Philip Lunn

and Scott Metzger started working
on their NuReality Desert Home
project, they set out to deliver the
future of virtual reality, handcrafting
a living space that would redefine
the way people work, play and
connect with others.
A photorealistic recreation of
nearly six acres of the real world,
Desert Home was reconstructed
using lidar scan data and HDR
imagery captured using 36.3
megapixel Nikon D810 cameras.
"I think the Desert Home is a
great contribution to the thinking
surrounding VR, in [terms of] the
amount of detail and craftsmanship
that has gone into the house,"
says Scott Metzger. "The beauty of
this capture [process] is that every
photograph has five exposures, so
it's 30 stops of light information. All
that information is really important
for realism, particularly reflections."
Nurulize has designed Desert Home
as a zen-like space that users can
personalise with downloadable
content, and plans to allow
manufacturers to use it to
preview new products.
"One big [market] we can
use this technology for is product
visualisation," says Scott. "We want
to create something that you can
visualise products from. Say you
want to check out a new laptop: the
one thing you can't get from looking
at a 2D image is proper depth, right?
That's one of the big advantages of
the house: it gives you that sense
of depth. You have this sandbox
for experiences in a comfortable
[setting] that isn't just an empty
space without any soul."
Scott believes that this is a key
advantage of VR over augmented
reality, in which viewers are shown
3D representations of objects within
the real world. "When showing
commercial projects inside VR, you
get to control the environment,"
he points out. "We get to dictate
what you see, [so] with VR, it's
complete immersion."


Opening the market

How Oculus kickstarted the VR revolution

Until recently, VR was a technology that only a select
few were able to use. That changed in 2013, when
Oculus VR began shipping the first dev kits for the Rift,
its pioneering low-cost, gamer-friendly VR headset.
"Oculus did something amazing in the beginning,
which [was to allow] anyone to acquire a development
kit and start experimenting," says Scott Metzger.
"Most technology is held onto in secret for a very
long time, with only select developers allowed
access. Oculus was pretty much handing out gold to
anyone with enough faith in VR."
The schematics for the original device have since
been made open source, and a consumer version is
due early in 2016. The success of the Rift has also
prompted electronics giants like Sony, Samsung and
HTC to begin developing their own hardware.
"VR isn't new," points out Scott. "But with great
design and amazing passion, Oculus inspired multiple
industries to get excited over computer graphics again
and to start dreaming of different possibilities."

Nurulize believes that VR

is poised to transform the
way we experience the
world, from movies to
product marketing

experience, the duo have also
been developing NuReality Desert
Home: a VR experience that
replicates a real, mid-century
modern home in the desert with an
unprecedented level of real-time
realism.

Philip Lunn
Philip founded rendering software
developer Bunkspeed, acquired by
Dassault Systemes in 2013. He is
now Nurulize's CEO.

"The plan with the house is
really to make a piece of software
that we enjoy and use ourselves
every day," Scott says. "We were
talking about successful VR apps,
and how we consider it a success
if you can stay inside the app for a
long period of time."
That isn't as simple as it might
sound. The experience of doing
something in virtual reality that
doesn't correspond to the real
world (for example, running
through a room in VR while
remaining sat down in reality)
sends mixed messages to the
viewer's brain that can rapidly
induce nausea.
"Being able to get through that
and properly spend time in VR is a
really important thing," says Scott.

"You need to create something
compelling enough that [the
viewers] want to stay inside it."
With Desert Home, Nurulize
believes that it has done just that.
"Ultimately, it's a
vacation home where you can go
and hang out," says Philip Lunn.
"It's a nice place to get away [to]
and spend time socialising, and
making music, and playing."
But as well as making Desert
Home a place to play, Nurulize
aims to make it a place to work.
You can read about how the duo
plans to use the space as a platform
for product visualisation in the
boxout on the previous page.

the next big thing?

Aside from product marketing,

what does the future hold for
virtual reality? Scott believes
that mainstream entertainment
is a key target market. "I can't
go to [stereoscopic] 3D movies
and get excited any more," he
says. "When you compare 3D to
VR, the difference in immersion
is laughable. If you want my
ticket money, it won't come from
[showing] 3D movies on a 2D
screen in a theater."
Other growth sectors
may include teaching and
telecommunications. Scott
believes that VR will usher in
a new era of entertainment,
communication and education.

Going mainstream

Philip also believes we are set
to spend much more of our lives
inside VR. "A good percentage of
people who do creative work on a
computer will put on their headset
and then just be in there eight
hours a day," he says.
For that to happen, the
experience of working in virtual
reality will have to be a pleasant
one. Hardware like Oculus's new

Details matter in VR. Unlike

a game or movie, viewers
are free to go anywhere,
inspecting models up close

Rift headset has a key role to play
here. "It's very lightweight, it's
high-resolution, and it feels very
comfortable on your head," says
Philip. "You could have it on for
quite a while. When you come out
into the real world you may find
that you actually liked it better in
VR and want to go back in."

Portable devices like Google Cardboard are what's
going to spread VR. They're going to be the gateway
drug into full-immersion virtual reality

Another important thing about
the next version of the Rift is that
it will be a proper commercial
product: not just an early-access
development kit available to a
small number of hardcore fans.
Unfortunately, most of the
world still hasn't experienced VR:
there are very, very few people
who have actually put one of these
things on their head. "I went out
on a camping trip with five other
guys recently, and none of them
had really heard about VR," Philip
says. "They sort of knew that
Facebook bought something in
that area [Facebook acquired Rift
developer Oculus VR in 2014] but
had no idea what VR was. So I
pulled out this little portable
headset and they tried it, and I
watched their reactions. Everyone
was blown away. And that's it:
you have to experience it to
understand it."

Mobile: the gateway drug

Devices like the Rift, or its
upcoming competitors like
Sony's as-yet-unnamed Project
Morpheus, will bring virtual
reality to gamers worldwide.
But for Philip, the technology that
will finally bring VR to the masses
is the mobile phone. Rather than
producing headsets with built-in
screens, companies like Google
and Samsung are developing
low-cost mounts into which you
can slot your existing smartphone
and use it as a virtual reality
viewing device.
"I think the portable versions,
whether it's Google Cardboard,
or the Samsung Gear VR, or the
Zeiss VR One, are what's going to
spread virtual reality," says Philip.
"They're going to get people
saying, 'OK, this is what VR is.
Now I want the real experience.'
Then they'll try it on a PC and
actually [be able to] look at things
up close. I look at the portable
versions as the gateway drug into
full-immersion virtual reality."
FYI Find out more about Nurulize's work online


VR at the cinema

The next big thing for movies?

Virtual reality holds exciting
possibilities for the world of
film. "This is just the infancy
of VR," says Scott Metzger.
"I think in the next few years
you're going to see really great
things." One possibility is that
VR will become a standard part
of the cinema experience, in the
same way that stereoscopic 3D
is now. "To get a really amazing
experience with a movie, [you're]
going to need huge amounts of
data," says Scott, pointing out
that this will be easier to deliver
directly than via the internet.
"In future, I think theaters will
have huge servers to stream
that data locally. You just bring
your headset and plug it in. That
would be pretty phenomenal."

Hardware review

Price £3,384.11 plus VAT | Company Overclockers | Website

author Profile
James Morris
James Morris has
been writing about
technology for two
decades, focusing
on content creation
hardware and
software. He was
the editor of PC
Pro magazine
for five years.

Overclockers is a PC
manufacturing brand with a
solid reputation in high-end
gaming systems. The company is
now transferring its skills to the
professional market and has launched
a new workstation range called
RENDA. We've looked at the
PW-E7F, and it shows potential.
The main component choices
won't raise eyebrows. The CPU is
an Intel Core i7, which as we have
come to expect has been frequency
enhanced. The Core i7-5960X
is rated at a nominal 3GHz, but
Overclockers has permanently set
it to 4.2GHz, kept under control by a
meaty water-based cooling system,
although the CPU will still drop
down when not in heavy use. It's
an Extreme processor, with eight
physical cores, doubled to 16 virtual
cores by Intel Hyper-Threading.

The Core i7 is partnered with
32GB of 2,133MHz DDR4 memory
arranged as four modules, so full
advantage is taken of the processors
quad-channel architecture, leaving
four slots for upgrade.
Overclockers has taken the
unusual decision to equip the
PW-E7F with AMD FirePro graphics,
despite Nvidia's Quadros having
around 80 per cent of the market.
This is relatively wise because
the W8100 is the pick of AMD's
bunch. It's a little more expensive
than a Quadro K4200, but comes
with twice the GDDR5 frame
buffer (8GB instead of 4GB) and
a healthy 2,560 Stream Processors,
with around twice the bandwidth
at 320GB/sec thanks to a 512-bit
interface. It also offers a quartet

of 4K-supporting DisplayPort
connections, should you want a
huge video panel array.
Storage is the predictable combination of solid state disk and regular hard disk. The former is a 256GB Samsung 850 Pro and the latter a 2TB Seagate Barracuda 7200.14, a reasonable configuration. However, the RENDA doesn't include an optical drive or multi-format memory card reader. You wouldn't use these every day, but it's useful to have them. We also might have expected a PCI Express-based M.2 solid state disk, since the Asus X99-E WS motherboard supports this.

Strong all-rounder

The PW-E7F proved a capable all-rounder. The Maxon Cinebench

R15 rendering result of 1,717 is very commendable, and the OpenGL result of 183.21 is excellent. The SPECviewperf 12.02 tests were very good too. The result of 70.84 in Maya-04 shows it'll be a great system for modelling with Maya, and the sw-03 score of 87.7 implies good capabilities with SolidWorks. Other highlights are 3.7 in energy-01, 55.83 in showcase-01, 27.28 in medical-01 and 79.12 in

RENDA is the
new professional
workstation brand
from Overclockers,
and it shows
promise with very
good rendering
and modelling
Watching the clock
Overclocking may seem risky, but with appropriate cooling and settings, overclocked systems can run for extended periods above their official ratings with no concerns over stability. Hence manufacturers are willing to provide at least three years of warranty for their frequency-enhanced settings.

3d World October 2015


snx-02. Overall, the W8100 provides performance roughly equal to or better than an Nvidia Quadro K4200, making it great for any kind of modelling or design.
With very good rendering and modelling performance, the RENDA PW-E7F is strong in every area. It's not particularly cheap, but it's an impressive debut. We can't say yet how good Overclockers will be at supporting the professional 3D content creation market, but the hardware shows promise and exemplary build quality.


3GHz Intel Core i7-5960X at 4.2GHz

32GB 2,133MHz
AMD FirePro
W8100 with 8GB
GDDR5 memory
256GB Samsung
850 Pro solid
state disk
2TB Seagate
Barracuda 7200.14
SATA 7,200rpm
hard disk
Warranty: 5 years,
3 years collect
and return, 2
years labour

Software review

Unfold3D Generation 9
Price 299 (Freelance licence) | Company Polygonal Design | Website


Can select by group, material or smoothing group
Engine overhaul: it unwraps faster
New brush tools
New island tools
Easier UI

Things have been quiet on the UV mapping front lately. Hopefully, other developers are doing what Polygonal Design has been doing: rethinking their entire UV mapping applications.
In Unfold3D 9, that rethink shows. Even though commands and workflow remain the same, bar some key remaps, the new UI is significantly easier to work with, down to touches like redesigning most of the lower program bar so you can customise your UV mapping parameters even more, and adding new tools, like the option to prevent overlaps.
However, the new version's main strengths are its speed and changes that improve workflow, like the updated unwrapping tools. Unfold3D has done away with the Relax function, and has replaced it with Pinch, Drag and Spread, along

Generation 9 is a
significant update
to Unfold3D
Author profile
Cirstyn is a freelance CG artist and educator, with over a decade's experience in 3D, focusing on modelling and texturing.

with accompanying customisable brushes. Island selection has also been made easier, as you can now finally select assets by group, material or smoothing group. Support for Lua scripting has been added, and the previously strict mesh import regime has been relaxed.

Overall, Unfold3D 9 is a very good overhaul, and worth looking into if you need something that unwraps fast with no muss or fuss. It's especially worth a look if you're a student, as you get a 50 per cent discount.

Software review

PhotoScan 1.1.6
Price $179 (Standard, version reviewed); $3,499 (Professional) | Company Agisoft | Website

Creates meshes
and textures in
standard formats
Most processes
version available

Building stuff in 3D can be laborious, even if you have good reference material to work from. PhotoScan, Agisoft's photogrammetry software, has been designed to do a lot of the heavy lifting for you, using a set of reference photos of an object to generate a working textured mesh that you can take into your 3D application of choice to retopologise.

PhotoScan quickly creates usable meshes and textures from photos

Author profile
Mike Griggs
Mike Griggs has been
polishing pixels since
1995. He has worked in
many sectors of digital
content creation, from
website design to 3D
and VFX for film and TV.

Automated workflow

The workflow is relatively straightforward. You take photos of the object you want to model from various angles, import them into PhotoScan, align the images, and let the software create the mesh.
Most of the process is automated, although masking the photos manually can help. In practice, we found that PhotoScan needs a lot of steady, sharp photographs to get a really good model, and that the mask tools are fairly basic. You will also need a computer with a decent GPU (the software is OpenCL-accelerated), otherwise mesh creation can take some time.



PhotoScan does give you a good way of getting accurate models into your computer, but it shouldn't be regarded as a quick fix: it takes time to get the images right in order to achieve decent results.

Visit the Google Play, Apple Newsstand and Zinio stores to download a 3D World back issue to your tablet or phone today!

Back issues
Missing an issue of 3D World?
Fill the gaps in your collection today!

Issue 198 September 2015

ZBrush robots
Master hard surface modelling in ZBrush
25 essential Cinema 4D techniques
Create perfect render cycles for video games
Graphics card review group test
Downloads iClone 5 software, ZBrush model, video tutorials, Substances!

Issue 197 August 2015

Video games art special
Professional games artists share
their work and insights
Model a mecha character for
video games
Create perfect run animation
cycles for games
Master Unreal Engine 4, includes
video tutorial, models and textures
Downloads iClone 5 worth 69,
video tutorials and project files

Issue 196 July 2015

VFX special
40 years of ILM, plus discover the VFX of Avengers: Age of Ultron
Model Star Wars inspired
Master the new modelling tools of Maya 2016
Create an epic sci-fi environment in Modo
Downloads Video tutorials, project files, resources and more!

Issue 195 June 2015

Photoreal portraits
Create a lifelike portrait with effective modelling & rendering
Sculpt armour in ZBrush
Meet Chappie: how Image Engine brought the robot to life
Industry experts' advice for kickstarting your career in CG
Downloads Free book, video tutorials, project files, resources and more!

Issue 194 May 2015

Make a Star Wars movie
Create your own VFX movie
Master mech modelling in
Cinema 4D and ZBrush
Star Trek interview: Pierre Drolet
talks building starships
The ultimate guide to lighting
and rendering a complex
illustration in LightWave
Downloads Video tutorials,
project files, resources and more


Issue 193 April 2015

ZBrush anime skills
Master the art of modelling an anime-style character
ZBrush 4R7: why the latest release is an essential upgrade
Create a collectible action figure
10 years of CG in anime: meet the directors who are creating the leading 3DCG
Downloads Video tutorials, project files, resources and more

Issue 192 March 2015

Model magical ZBrush creatures
Create our cover character. Complete tutorial with video and model!
The ultimate guide to mastering Maxwell Render
Big Hero 6: discover Disney's latest software
Matte painting for video games
Downloads Video tutorials, project files, resources and more

Issue 191 February 2015

Master 3D print modelling
Model a robot toy for 3D printing, and finishing tips
25 Modo tips to reinvent your sculpting
Star Wars VII: what the industry really thinks
Speed up your workflow with Maya's Polygon tools
Downloads Video tutorials, project files, resources and more

Issue 190 January 2015

Create ZBrush robot art
Unleash your modelling skills to create a killer sci-fi predator
Learn to use 3ds Max to create your own film
25 Maya tips for making game
The ultimate guide to the 3D printing phenomenon
Downloads Video tutorials, project files, resources and more

Featuring the winners,
with 16 pages of
extra content!

Issue 189 Christmas 2014

Create photoreal vehicles
Give your car V-Ray renders an artistic angle
Learn to blend photography and ZBrush models
How to model a complex Modo
The VFX of Star Wars: Discover new photos and interviews
Downloads Video tutorials, project files, resources and more

Issue 188 December 2014

Master Pixar's RenderMan
Get started in RenderMan 19 with this official Pixar tutorial
Learn to use Bifrost and nParticles in Maya 2015 for realistic rain
Model a lifelike cityscape using CityEngine
Discover the character VFX behind Guardians of the Galaxy
Downloads Video tutorials, project files, resources and more

Issue 187 November 2014

Expert renders
Master an advanced setup in V-Ray for perfect renders
ZBrush 4R7: discover new tools that will reshape your art
The making of The Lord Inquisitor using Crytek's Cinebox
Develop your LightWave modelling skills
Downloads Video tutorials, project files, resources and more

Issue 186 October 2014

Create award-winning animation
Model a perfect cartoon figure for use in animation
Learn how to design dynamic heroes and villains
Sculpt DC Comics supervillain Catwoman in ZBrush
Ed Hooks on how to make every performance matter
Downloads Video tutorials, project files, resources and more

My inspiration

Davide Bianca
The CEO of Saizen Media explains how a love of stories and comics
developed into a successful global CG art studio

Davide Bianca
Davide is founder, CEO and executive director of Saizen Media, an award-winning, full-service digital creative agency with offices in Los Angeles and Milan.

My love for visual storytelling dates back as far as I can remember. My first encounter with a pencil was love at first sight, and ever since then that love has only grown. Doodling became drawing, drawing became writing stories and inking comic book pages, and illustrating became animating.
Meanwhile, my passion for robots led me to wonder what it is that makes a computer work. Once I understood the inherent potential lying within the marriage between creativity and technology, I was hooked.
Computer science gave me a solid understanding of software, the nuts and bolts that make things work under the hood,

but I was always too much of an

artist to be satisfied with creating
something without a proper
visual output.
By this time, the internet was
evolving, and browser games
and web experiences started
appearing, but everything was
very dull, basic and sterile. I
set out to start Saizen Media, a
storytelling-centric web agency
determined to blur the line
between web, gaming and film
by focusing on experience and
relying on the same techniques
used by the film and game
industries: heavy use of CG,
digital matte painting, immersive
sound design, and so on.
Over a decade later, our
websites have evolved into
apps or VR experiences, and we



now provide VFX, key art and

concept art services to the very
same gaming and film studios
that inspired us while growing
up, but the mission remains
unaltered: telling great stories
through powerful visuals. With

Cover art for 3D World issue 183: part of Saizen Media's Slavers CG comic

the availability and accessibility of great software and technology today, there is not a single reason that justifies not doing what you truly love for a living.
See more of Saizen Media's work at