physlight is Weta’s system for reproducing the entire imaging chain of a physical scene…
PHYSLIGHT
…and simulating the response of digital stills and cinema cameras, to be able to image a digital scene as if it were shot with a real camera.
It’s also production-proven: we’ve been using it on all shows for the last five years.
Luca Fascione, Luke Millar
Before we begin: this is mostly Luca’s work, but many people at Weta have contributed on both the technical and production side, a few of whom we’d like to thank here.
6 ANDERS LANGLANDS & LUCA FASCIONE / PHYSLIGHT
© WETA DIGITAL LTD 2020
To understand the problem we’re trying to solve, let’s consider a simple scene.
Light leaves a light source, bounces around a bit, and is captured by a camera.
In visual effects we spend a lot of time trying to make CG objects feel like part of the photography…
…we do a lot of work on top to make shots sing, but making our CG ape feel like a physical object in the scene, shot with the picture camera, is the foundation on which the rest of the lighting is built.
So knowing these values would be hugely beneficial.
TRADITIONAL METHOD
Traditionally though (since the film days), we assumed those functions were unknowable or too hard to measure. So instead we’d ignore the camera response and set our lighting to match a known calibration object.
You tweak the exposure and tint parameters of your IBL until the CG ball matches the reference ball, and you hope that your CG ball’s BRDF matches the real one, even after it’s been out in the elements for several weeks.
The lighting is now in some arbitrary space relative to the grey ball reflectance, so we use ‘stops’ as a unit of relative exposure when talking about brightness of
lights, since we don’t know what the actual values in the scene were
The trouble is now our camera response and our lighting are baked into a single tint value on the IBL. If we want to share light rigs between shots but the camera
settings have changed, we can’t because there’s no way to separate the contribution of the camera from that of the lights
A common solution is to grade a series of shots assumed to have “similar” lights and camera settings so that they match visually, then light to that.
This doesn’t solve the problem, but at least makes like shots consistent. They will still be subtly wrong, though, as variation that should be present has been graded out. All this grading and tweaking takes a lot of time that would be better spent making great-looking images.
‣ Measured sources
‣ Fire etc.
We also have no way of representing physical values. How bright should the sky be in stops? How bright should the illumination from a 2k or a practical fire
appear on a character in the shade?
We can calculate, or measure, the correct absolute physical values for all these sources, but we have to throw that information away because we don’t have a framework to use it in. We have to figure out our correct value in “stops” for each new plate and light source combination by trial and error, because the brightness of our lights implicitly depends on the plate exposure settings.
‣ We can do better!
In the film days this relative matching to ref was the best we could do. But nowadays most shoots are digital. We don’t have to deal with chemical baths and film
scanners any more so there must be some function we can measure that relates exposure to pixel values
So…
So given a radiance entering the camera system we first want to find the exposure or energy density, H, at the camera sensor…
[Diagram: radiance L → exposure H → W_cam → pixel value P_RGB]
…and then we want to find some function W_cam, that tells us how each camera converts that to an RGB pixel value, P.
DIGITAL SOLUTION
Conversely, if we have captured an HDRI with a measured camera, we apply the inverse to find the exposure and hence the radiance that arrived at each
photosite. We can then use that to reconstruct the absolute physical values of the lighting in the scene
IMAGING
- H depends on:
‣ t - exposure time in seconds
‣ S - ISO sensitivity
‣ N - aperture, as a T-stop
‣ C - calibration constant
We use T-stop rather than f-number for aperture as this is the value we get from cine cameras, and we can then ignore the lens transmission in this formula.
IMAGING RATIO
The calibration constant, C, is basically the overall sensitivity scale of the sensor. We start with a baseline value of 312.5 for historical reasons, and then measure
each camera’s sensor relative to that.
Note that we’re ignoring the angle at which light hits the sensor too. This means we’re ignoring some portion of vignetting (since we’re just going to apply it to taste in comp anyway).
https://github.com/wetadigital/physlight/blob/master/physlight_imaging.ipynb
There’s a notebook working through an example of this at the physlight repo on the wetadigital github
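As a quick illustration, the imaging ratio itself is a one-liner. A minimal Python sketch (the function name and example settings are ours; C defaults to the 312.5 baseline mentioned above):

```python
import math

def imaging_ratio(t, S, N, C=312.5):
    """Scale factor turning scene radiance L into sensor exposure H.

    t: exposure time in seconds, S: ISO sensitivity,
    N: aperture as a T-stop, C: calibration constant (baseline 312.5).
    """
    return math.pi * t * S / (C * N * N)

# e.g. 1/50s shutter, ISO 800, T11.9
H_per_unit_L = imaging_ratio(t=0.02, S=800, N=11.9)
```

Doubling the shutter time or the ISO doubles the exposure, and stopping down (larger N) reduces it by the square, exactly as you'd expect from the formula.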
Once we’ve got some amount of energy, H, at the sensor, the sensor response function converts spectral exposure to RGB pixel values.
SENSOR RESPONSE
‣ Equivalent to using CIE Standard Observer but to Camera RGB space rather than XYZ
You can think of it like the CIE standard observer functions, except that instead of going to XYZ we go first to camera RGB space (and from there to XYZ, and then to whatever output space we want).
But how do we get those weighting functions? How do we go about measuring the spectral sensitivity of a camera that we’re interested in?
MEASUREMENT
As the sensitivity is still unknown you don’t have features in the image to register to
DESIGN 3: A “TRANSMISSIVE MACBETH CHART”
- Use narrow-band bandpass filters
[Dimensioned drawings of the filter plates (264mm wide); grey represents a cut-through hole, and circular holes have a retention lip of 0.5mm]
- CamSPECS XL
https://www.image-engineering.de/products/equipment/measurement-devices/588-camspecs-express
[Photo: the device’s light source and camera mount]
… which come from spectral radiance distributions shaped like this plot on the left hand side
R = ⟨p, r̄⟩
G = ⟨p, ḡ⟩
B = ⟨p, b̄⟩
Given a pixel, we’ll have a readout from the camera that is simply the scalar product of the incoming spectral power distribution, p, and the response of the
device, r bar, g bar or b bar.
CLICK We will do this for a bunch of different narrow-band filters, so we’ll actually get a family of these readouts
⟨p, r̄⟩ = ∫ p(λ) r̄(λ) dλ
       = Σᵢ pᵢ r̄ᵢ = p₁ r̄₁ + ⋯ + pₙ r̄ₙ
       = dot(p, r) * spacing
The scalar product has a few different expressions depending on where you are in an analytical or discrete context, in numpy you can use the dot() operator. In
discrete numerics it is the process of multiplying two vectors element-by-element and adding it all together
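As a concrete (if toy) numpy sketch of that discrete inner product, with made-up curves standing in for the real spectra on a 10nm grid:

```python
import numpy as np

# 10nm bins across the visible range
lam = np.arange(380.0, 730.0, 10.0)
spacing = 10.0

# Made-up smooth curves standing in for the SPD p and the red response r-bar
p = np.exp(-((lam - 550.0) / 60.0) ** 2)
r_bar = np.exp(-((lam - 600.0) / 50.0) ** 2)

# <p, r_bar>: multiply element-by-element, add it all together, scale by bin width
R = np.dot(p, r_bar) * spacing
```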
R_0 = p_0,0 r̄_0 + … + p_0,n r̄_n
R_1 = p_1,0 r̄_0 + … + p_1,n r̄_n
⋮
R_n = p_n,0 r̄_0 + … + p_n,n r̄_n        i ∈ 0, …, n

p_i,j = p_i(λ_j)    λ_j ∈ {380, …, 720}
So for example, for the red channel we’d have a few of these dot products, one per filter. The p_ij’s are samples of the lighting through the filter at a few
wavelength bins, we’ll pick as many bins as we have filters. So now we have a linear system, with as many unknowns as we have equations
P = | p_0,0 … p_0,n |        R = {R_0, …, R_n}
    |   ⋮   ⋱    ⋮  |        r̄ = {r̄_0, …, r̄_n}
    | p_n,0 … p_n,n |
But if you think about it for a second, you can see easily that if we put our p_ij’s into a square matrix, the readouts into a vector, and our unknown r̄ into another vector… CLICK
R = P r̄   ⟹   r̄ = P⁻¹ R
G = P ḡ   ⟹   ḡ = P⁻¹ G
B = P b̄   ⟹   b̄ = P⁻¹ B
Our linear system reduces to a simple matrix-vector product. All we need to do is invert the square matrix P. Inverting matrices becomes increasingly delicate the more rows they have, so it’s good to check whether this operation is well behaved.
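In numpy the solve is a few lines. This sketch uses a synthetic diagonally dominant P rather than real filter data, and prefers `np.linalg.solve` over forming P⁻¹ explicitly, which is the numerically safer habit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16  # one row/column per bandpass filter

# Synthetic, diagonally dominant P standing in for the filter SPD samples p_ij
P = np.eye(n) + 0.02 * rng.random((n, n))
assert np.all(2 * np.abs(P.diagonal()) > np.abs(P).sum(axis=1))  # dominance check

r_bar_true = rng.random(n)   # the sensitivity curve we want to recover
R = P @ r_bar_true           # the readouts we'd measure through each filter

# Solve P r_bar = R without ever materializing the inverse
r_bar = np.linalg.solve(P, R)
```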
The plot on the right is P, you can see the matrix is strongly diagonally dominant, which makes it very stable for numerical inversion. Effectively these two plots are
showing the same data: on the right hand side each row of pixels is one of the curves from the left.
Canon 1D Mk III
So now we’ve taken a spectral radiance and converted it to a pixel value as seen by our measured camera model. The final step is to take this from camera RGB
space to CIE XYZ, from where we can convert to anything we want.
CAMERA RGB
‣ Manual solve
To do so we’ll need a camera RGB to XYZ matrix. Unfortunately camera manufacturers don’t publish these so we can either rely on someone else finding it for us,
e.g. dcraw, or we can derive them ourselves.
[ camera RGB ] × [ 3×3 matrix of unknowns ] = [ XYZ ]
Fortunately the process is fairly simple - just convert some set of spectral reflectances to both camera RGB and to XYZ, then solve for a 3x3 matrix that converts
from one to the other.
We’re still experimenting with different combinations of colours and solvers. In general though, a large data set like that in rawtoaces combined with a linear least
squares solver is pretty good.
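A sketch of that solve with `np.linalg.lstsq` (the patch data here is synthetic; in practice each row would be one reflectance converted to both spaces):

```python
import numpy as np

rng = np.random.default_rng(1)
n_patches = 24  # e.g. a Macbeth chart's worth of colours

cam_rgb = rng.random((n_patches, 3))          # patches in camera RGB
M_true = np.array([[0.6, 0.3, 0.1],           # made-up "ground truth" matrix
                   [0.2, 0.7, 0.1],
                   [0.0, 0.1, 0.9]])
xyz = cam_rgb @ M_true.T                      # the same patches in XYZ

# Least-squares fit of the 3x3 matrix M such that xyz ~= cam_rgb @ M.T
M = np.linalg.lstsq(cam_rgb, xyz, rcond=None)[0].T
```

With noise-free synthetic data the fit recovers the matrix exactly; with real measurements the residual tells you how well a single linear transform can bridge the two spaces.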
WHITE BALANCE
- Can either:
4. Use Bradford/CAT02
- All valid but consider what you’re matching to
Again, we have a notebook in the physlight repository showing how to use measured camera data and solve for Camera RGB to XYZ matrices using different methods for white balance.
So now we’ve defined how to take radiance arriving at the camera and turn that into a pixel value as generated by our camera model.
H = (π t S) / (C N²) ⋅ L        P_RGB = W_cam(H, λ)
That just leaves us with how we want to define the light entering the scene
UNITS
WHY PHOTOMETRIC UNITS?
- Standard in industry
‣ Light meters
‣ How many Watts of visible light does a 100W bulb emit? Not 100W!
We use photometric units for lights as that’s what’s common in the photographic and film industry for talking about brightness. Your light meter works in
photometric units. If you look up data for a fixture it will be in photometric units.
LIGHTS
LIGHT UNITS
We’ll consider area lights and IBLs or environment lights here. We use lumens for area lights and lux for environment lights. Manuka works in spectral radiance
rather than photometric units, so we’ll need to convert from one to the other when we render
AREA LIGHTS
L↑ = T(λ) ⋅ L̂(λ) ⋅ D(ω)    [ W / (m² ⋅ sr ⋅ m) ]
- L̂(λ) - spectral distribution
‣ Blackbody
‣ Tabulated data
First, a spectral distribution. Most commonly this will be standard illuminant D65, but we also support blackbody specified by some temperature, and more
recently tabulated spectral data measured from real light sources on set.
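For the blackbody option, the spectral shape comes straight from Planck's law; a sketch (constants in SI units, wavelengths in nm, normalization ours since the absolute scale is set by the light's power):

```python
import numpy as np

h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def blackbody(lam_nm, T):
    """Planck spectral radiance at wavelength lam_nm (nm), temperature T (K)."""
    lam = lam_nm * 1e-9
    return 2.0 * h * c**2 / (lam**5 * np.expm1(h * c / (lam * kB * T)))

lam = np.arange(380.0, 730.0, 10.0)
spd = blackbody(lam, 3200.0)     # a tungsten-ish colour temperature
spd = spd / spd.max()            # keep only the shape
```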
‣ Constant colour
‣ Texture map
A tint function. Normally this would be a texture. These simulate gels or an emission pattern on the source, such as an LED array, or an HDRI of a practical fixture
‣ Lambertian, D(ω) = 1
‣ IES profile
and an angular distribution. We use powered cosine distributions a lot for focussing area lights for example
AREA LIGHTS
L↑ = Φ_v ⋅ T(λ) ⋅ L̂(λ) ⋅ D(ω) / k_e    [ W / (m² ⋅ sr ⋅ m) ]
- Φv - luminous power in lm
We want the user to be able to specify the lighting in terms of the total output power, so we need to find some factor k_e that normalizes the radiometric output
based on those parameters and converts it to lumens
and that’s just the integral of each term in the numerator. It’s trivial to compute this scale factor at render startup then just multiply it in when sampling the light
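The shape of that computation, sketched schematically in numpy. Every curve here is a stand-in, and we fold in only the simplest case (untextured tint, Lambertian angular term, whose hemisphere integral contributes a factor of π) plus the 683 lm/W luminous efficacy constant:

```python
import numpy as np

lam = np.arange(380.0, 730.0, 10.0)
d_lam = 10.0

L_hat = np.ones_like(lam)                        # stand-in spectral distribution
ybar = np.exp(-((lam - 555.0) / 45.0) ** 2)      # stand-in luminous efficiency curve

# k_e: luminous output of the *unnormalized* emission, so that dividing by it
# and multiplying by Phi_v makes the light emit exactly Phi_v lumens
k_e = 683.0 * np.pi * np.sum(L_hat * ybar) * d_lam

Phi_v = 1000.0           # user-specified luminous power, in lumens
scale = Phi_v / k_e      # precompute at startup, multiply in when sampling
```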
ENVIRONMENT LIGHT
L↑ = E_v⁺ ⋅ T(ω, λ) ⋅ L̂(λ) / k_e    [ W / (m² ⋅ sr ⋅ m) ]
- Ev+ - illuminance from upper hemisphere
For an IBL, the illuminance as measured on an upward facing patch is a more natural and easier to use parameter than power. It also neatly corresponds to
measuring the illuminance of a real scene using a light meter pointed straight up.
Here again we can select an Illuminant D or measured spectrum for the light, which is multiplied by the Tint function, which will almost always be an HDRI
panorama captured on set.
To normalize the illuminance from the HDRI image we precalculate the illuminance from the upper hemisphere then store that in the exr metadata for easy
retrieval later.
L↑ = L̂(λ) ⋅ Y(P_RGB) ⋅ C N² / (π t S)
The final piece of the puzzle is then knowing how to tie a pixel luminance back to the incoming radiance. That’s simply inverting the imaging ratio we saw earlier.
Again we can precalculate this and store it in the image header to multiply in at render time
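That inverse is again a one-liner; a Python sketch (function name ours), which round-trips exactly with the forward imaging ratio π t S / (C N²):

```python
import math

def radiance_per_luminance(t, S, N, C=312.5):
    """Inverse imaging ratio: multiply a pixel luminance Y(P_RGB) by this
    to recover the scene radiance scale that produced it."""
    return C * N * N / (math.pi * t * S)

# Precompute once from the capture settings, store in the image header
scale = radiance_per_luminance(t=0.02, S=800, N=11.9)
```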
So with all that in place, let’s shoot a test.
RESULTS
Here we’ve got a Macbeth chart illuminated by a pair of blondes, and we’ve shot it with an Alexa LF. We’ve also captured an HDRI from the chart position with a 5D that we’ll use to illuminate the scene.
First of all, we’ll generate correct radiance values from the HDRI using the precalculation we just described and use that to do a render but *without* considering
the camera at all
raw radiance
As you can see that comes out a little bit too bright. To fix that we’ll…
H = (π t S) / (C N²) ⋅ L
raw radiance
add in the imaging ratio using the settings from the Alexa…
H = (π ⋅ 0.02 ⋅ 800) / (312.5 ⋅ 11.9²) ⋅ L
That brings the pixel values into the correct exposure range as we’d expect. It’s pretty close in brightness but the colours on the chart are a fair bit off because
we’re just using CIE standard observer here, we’re not considering the LF’s spectral response
with sensor response
now if we apply the sensor response function using our measured curves for the Alexa, we get a much closer match
Here’s a crop of the plate focusing on the chart
and here’s the matching render with our model. the colours aren’t an exact match but they feel pretty close.
here’s the render just using the imaging ratio. I hope you can see on the video if I compare it to the plate that the colours are significantly warmer and more
saturated as this is just using CIE standard observer to go from spectral to XYZ.
If you were there on the day, this is closer to how you would have seen the chart, because that’s what the standard observer is designed to do after all: model
human vision. But we’re not interested in that, we want to image the scene as the camera saw it
The effects of metamerism in the sensor are more pronounced when using measured spectra for the lights. In this example from Gemini Man, Junior is illuminated by an area light with the HDRI texture you see bottom-left.
The right-hand side of Junior, the grey ball, and each patch is rendered using a D65 spectrum multiplied by the uplifted values from the HDRI texture, while the left-hand side uses the full measured spectrum you see top-left, recorded on set.
With a spiky spectrum like this fluorescent exam light there is a significant difference in colour rendition that we can capture with the spectral sensor model.
For smooth, incandescent spectra like this arrimax 300, the two sides are almost identical: uplifting the RGB lightmap to a spectrum works well here since the
underlying emission spectrum of the light is of a similar shape.
Similarly, a slightly spikier spectrum like this HMI is pretty close as the overall shape of the measured spectrum is still fairly smooth
Physlight really shines in naturalistic scenes. In this shot from War for the Planet of the Apes, the fortress environment is lit by an HDRI captured on location. Due
to the large exposure difference between the sky and the torchlight, the interior lighting is augmented with hidden fixtures captured from similar lighting setups
on real-world set pieces.
The torches dotted around the interior have their brightness and colour derived from torches captured in the HDRI in a different scene. We then tuned the fire
simulation to give a blackbody emission matching the colours and brightness in the HDRI.
Because we know how the camera responds to physical radiance, we can now use it as a calibration device.
We can then put that same simulation into shots with very different camera and lighting setups and everything behaves correctly…
So, everything works perfectly and we can stop shooting ball passes now?
CONCLUSIONS
almost…
BENEFITS
When you’ve got good data it works very well indeed and even without good data, working in a consistent set of units gives you a framework for talking and
thinking about real-world lighting values which is hugely beneficial when deciding how to approach a lighting setup.
The things that stop us from being able to just match a given shot out of the box are purely practical:
LIMITATIONS
The biggest one is that it’s not always possible to get an HDRI exactly matching your setup. You might not be able to get to the right location, or even capture
one at all. Overzealous sparks might move lights while you’re not looking, and the sky has a nasty habit of changing continuously. All this means your IBL will often
need a little bit of tweaking and so we still have exposure and tint parameters on our lights for that
Depending on the camera kit, you may not get a full set of metadata for the camera settings, and manually recorded data is prone to human error and is difficult
to parse automatically. There are as many different ways of recording a T-stop value as there are data wranglers in the world.
And of course some people still do occasionally shoot on film.
STILL TO DO
‣ Neither is vignetting…
Here’s a fun one that Erik Winquist found recently: this is a series of brackets on aperture priority in an integrating sphere with the Sigma 8mm that I think a lot of us use for capturing HDRIs. Since the camera is adjusting shutter speed to maintain exposure, the EV100 is roughly the same value…
… but there’s over a stop difference between f/22 and f/3.5. I include this as an amusing example of how it often feels like the closer you get to getting a
“correct” answer on something, the more variables you discover that you hadn’t even considered needing to account for.
SUMMARY
So to summarize…
specifying lighting in physical, photometric units is great, we should all be doing this
‣ But if you’re stuck with RGB you can just use the imaging ratio
The spectral camera model can also give compelling results and capture effects of metamerism that a simpler model can’t…
…but if you’re stuck with an RGB renderer you can just use the imaging ratio part in order to be able to work with physical units
LINKS
- https://github.com/wetadigital/physlight
‣ Notebooks, slides
- https://github.com/mmp/pbrt-v4
The Python notebooks with examples of the imaging function and camera model are available on the wetadigital github alongside these slides.
Also, the upcoming version 4 of PBRT includes an implementation of our camera model in the base renderer, and I have a fork on my github with an implementation of everything we’ve talked about here, so you can see how simple it is to add it to an existing renderer.
ANDERS LANGLANDS & LUCA FASCIONE
WETA DIGITAL