
(Backup) PBR models

Source:  https://google.github.io/filament/Filament.md

Author unknown.  Posting here as a backup in case the original gets deleted.

Filament
Physically-based rendering engine

Contents

1  Overview
1.1  Principles
1.2  Physically based rendering
2  Notation
3  Material system
3.1  Standard model
3.2  Dielectrics and conductors
3.3  Energy conservation
3.4  Specular BRDF
3.4.1  Normal distribution function (specular D)
3.4.2  Geometric shadowing (specular G)
3.4.3  Fresnel (specular F)
3.5  Diffuse BRDF
3.6  Standard model summary
3.7  Parameterization
3.7.1  Standard parameters
3.7.2  Types and ranges
3.7.3  Remapping
3.7.4  Blending and layering
3.7.5  Crafting physically-based materials
3.8  Clear coat model
3.8.1  Clear coat specular BRDF
3.8.2  Integration in the surface response
3.8.3  Clear coat parameterization
3.8.4  Base layer modification
3.9  Anisotropic model
3.9.1  Anisotropic specular BRDF
3.9.2  Anisotropic parameterization
3.10  Subsurface model
3.10.1  Subsurface specular BRDF
3.10.2  Subsurface parameterization
3.11  Cloth model
3.11.1  Cloth specular BRDF
3.11.2  Cloth diffuse BRDF
3.11.3  Cloth parameterization
4  Lighting
4.1  Units
4.1.1  Light units validation
4.2  Direct lighting
4.2.1  Directional lights
4.2.2  Punctual lights
4.2.3  Photometric lights
4.2.4  Area lights
4.2.5  Lights parameterization
4.3  Image based lights
4.3.1  IBL Types
4.3.2  IBL Unit
4.3.3  Processing light probes
4.3.4  Distant light probes
4.3.5  Clear coat
4.3.6  Anisotropy
4.3.7  Subsurface
4.3.8  Cloth
4.4  Static lighting
4.5  Transparency and translucency lighting
4.5.1  Transparency
4.5.2  Translucency
4.6  Occlusion
4.6.1  Diffuse occlusion
4.6.2  Specular occlusion
4.7  Normal mapping
4.7.1  Reoriented normal mapping
4.7.2  UDN blending
5  Volumetric effects
5.1  Exponential height fog
6  Anti-aliasing
7  Imaging pipeline
7.1  Physically-based camera
7.1.1  Exposure settings
7.1.2  Exposure value
7.1.3  Exposure
7.1.4  Automatic exposure
7.1.5  Bloom
7.2  Optics post-processing
7.2.1  Color fringing
7.2.2  Lens flares
7.3  Filmic post-processing
7.3.1  Contrast
7.3.2  Curves
7.3.3  Levels
7.3.4  Color grading
7.4  Light path
7.4.1  Clustered Forward Rendering
7.4.2  Implementation notes
7.5  Validation
7.5.1  Scene referred visualization
7.5.2  Reference renderings
7.6  Coordinates systems
7.6.1  Main coordinates system
7.6.2  Cubemaps coordinates system
8  Annex
8.1  Importance sampling for the IBL
8.1.1  Choosing important directions
8.1.2  Pre-filtered importance sampling
8.2  Choosing important directions for sampling the BRDF
8.3  Hammersley sequence
8.4  Precomputing L for image-based lighting
8.5  Spherical Harmonics
8.5.1  Basis functions
8.5.2  Decomposition and reconstruction
8.5.3  Decomposition of ⟨cos θ⟩
8.5.4  Convolution
8.6  Sample validation scene for Mitsuba
8.7  Light assignment with froxels
9  Bibliography

Overview

Filament is a physically based rendering (PBR) engine for Android. The goal of Filament is to offer a set of tools and APIs for Android developers that will enable them to create high quality 2D and 3D rendering with ease.

The goal of this document is to explain the equations and theory behind the material and lighting models used in Filament. This document is intended as a reference for contributors to Filament or developers interested in the inner workings of the engine. We will provide code snippets as needed to make the relationship between theory and practice as clear as possible.

This document is not intended as a design document. It focuses solely on algorithms and its content could be used to implement PBR in any engine. However, this document explains why we chose specific algorithms/models over others.

Unless noted otherwise, all the 3D renderings present in this document have been generated in-engine (prototype or production). Many of these 3D renderings were captured during the early stages of development of Filament and do not reflect the final quality.

Principles

Real-time rendering is an active area of research and there is a large number of equations, algorithms and implementations to choose from for every single feature that needs to be implemented (the book Rendering real-time shadows, for instance, is a 400-page summary of dozens of shadow rendering techniques). As such, we must first define our goals (or principles, to follow Brent Burley’s seminal paper Physically-based shading at Disney [Burley12]) before we can make informed decisions.

Real-time mobile performance
Our primary goal is to design and implement a rendering system able to perform efficiently on mobile platforms. The primary target will be OpenGL ES 3.x class GPUs.

Quality
Our rendering system will emphasize overall picture quality. We will however accept quality compromises to support low and medium performance GPUs.

Ease of use
Artists need to be able to iterate often and quickly on their assets and our rendering system must allow them to do so intuitively. We must therefore provide parameters that are easy to understand (for instance, no specular power, no index of refraction…).

We also understand that not all developers have the luxury to work with artists. The physically based approach of our system will allow developers to craft visually plausible materials without the need to understand the theory behind our implementation.

For both artists and developers, our system will rely on as few parameters as possible to reduce trial and error and allow users to quickly master the material model.

In addition, any combination of parameter values should lead to physically plausible results. Physically implausible materials must be hard to create.

Familiarity
Our system should use physical units everywhere possible: distances in meters or centimeters, color temperatures in Kelvin, light units in lumens or candelas, etc.

Flexibility
A physically based approach must not preclude non-realistic rendering. User interfaces for instance will need unlit materials.

Deployment size
While not directly related to the content of this document, it bears emphasizing our desire to keep the rendering library as small as possible so any application can bundle it without increasing the binary to undesirable sizes.

Physically based rendering

We chose to adopt PBR for its benefits from artistic and production efficiency standpoints, and because it is compatible with our goals.

Physically based rendering is a rendering method that provides a more accurate representation of materials and how they interact with light when compared to traditional real-time models. The separation of materials and lighting at the core of the PBR method makes it easier to create realistic assets that look accurate in all lighting conditions.

Notation

The equations found throughout this document use the symbols described in table 1.

Symbol Definition
v  View unit vector
l  Incident light unit vector
n  Surface normal unit vector
h  Half unit vector between l and v
f  BRDF
f_d  Diffuse component of a BRDF
f_r  Specular component of a BRDF
α  Perceptually linear roughness
σ  Diffuse reflectance
Ω  Spherical domain
f_0  Reflectance at normal incidence
f_90  Reflectance at grazing angle
χ+(a)  Heaviside function (1 if a > 0 and 0 otherwise)
n_ior  Index of refraction (IOR) of an interface
⟨n⋅l⟩  Dot product clamped to [0..1]
⟨a⟩  Saturated value (clamped to [0..1])
Table 1: Symbol definitions

Material system

The sections below describe multiple material models to simplify the description of various surface features such as anisotropy or the clear coat layer. In practice however some of these models are condensed into a single one. For instance, the standard model, the clear coat model and the anisotropic model can be combined to form a single, more flexible and powerful model. Please refer to the Materials documentation to get a description of the material models as implemented in Filament.

Standard model

The goal of our model is to represent standard material appearances. A material model is described mathematically by a BSDF (Bidirectional Scattering Distribution Function), which is itself composed of two other functions: the BRDF (Bidirectional Reflectance Distribution Function) and the BTDF (Bidirectional Transmittance Distribution Function).

Since we aim to model commonly encountered surfaces, our standard material model will focus on the BRDF and ignore the BTDF, or approximate it greatly. Our standard model will therefore only be able to correctly mimic reflective, isotropic, dielectric or conductive surfaces with short mean free paths.

The BRDF describes the surface response of a standard material as a function made of two terms:

  • A diffuse component, or f_d
  • A specular component, or f_r

The relationship between a surface, the surface normal, incident light and these terms is shown in figure 1 (we ignore subsurface scattering for now):

Figure 1: Interaction of the light with a surface using a BRDF model with a diffuse term f_d and a specular term f_r

The complete surface response can be expressed as such:

f(v,l) = f_d(v,l) + f_r(v,l)    (1)

This equation characterizes the surface response for incident light from a single direction. The full rendering equation would require integrating l over the entire hemisphere.
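
For reference, a minimal sketch of that integral, using the notation from table 1 (L_in, the incident luminance from direction l, is introduced here purely for illustration):

L_{out}(v) = \int_\Omega f(v,l)\, L_{in}(l)\, \langle n \cdot l \rangle\, dl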

Commonly encountered surfaces are usually not made of a flat interface so we need a model that can characterize the interaction of light with an irregular interface.

A microfacet BRDF is a good physically plausible BRDF for that purpose. Such BRDF states that surfaces are not smooth at a micro level, but made of a large number of randomly aligned planar surface fragments, called microfacets. Figure 2 shows the difference between a flat interface and an irregular interface at a micro level:

Figure 2: Irregular interface as modeled by a microfacet model (left) and flat interface (right)

Only the microfacets whose normal is oriented halfway between the light direction and the view direction will reflect visible light, as shown in figure 3.

Figure 3: Microfacets

However, not all microfacets with a properly oriented normal will contribute reflected light as the BRDF takes into account masking and shadowing. This is illustrated in figure 4.

Figure 4: Masking and shadowing of microfacets

A microfacet BRDF is heavily influenced by a roughness parameter which describes how smooth (low roughness) or how rough (high roughness) a surface is at a micro level. The smoother the surface, the more facets are aligned and the more pronounced the reflected light is. The rougher the surface, the fewer facets are oriented towards the camera and incoming light is scattered away from the camera after reflection, giving a blurry aspect to the specular highlights.

Figure 5 shows surfaces of different roughness and how light interacts with them.

Figure 5: Varying roughness (from left to right, rough to smooth) and the resulting BRDF specular component lobe

A microfacet model is described by the following equation (where x stands for the specular or diffuse component):

f_x(v,l) = \frac{1}{|n \cdot v| |n \cdot l|} \int_\Omega D(m,\alpha)\, G(v,l,m)\, f_m(v,l,m)\, (v \cdot m)(l \cdot m)\, dm    (2)

The term D models the distribution of the microfacets (this term is also referred to as the NDF or Normal Distribution Function). This term plays a primordial role in the appearance of surfaces as shown in figure 5.

The term G models the visibility (or occlusion or shadow-masking) of the microfacets.

Since this equation is valid for both the specular and diffuse components, the difference lies in the microfacet BRDF f_m.

It is important to note that this equation is used to integrate over the hemisphere at a micro level:

Figure 6: Modeling the surface response at a single point requires an integration at the micro level

The diagram above shows that at a macro level, the surface is considered flat. This helps simplify our equations by assuming that a shaded fragment lit from a single direction corresponds to a single point at the surface.

At a micro level however, the surface is not flat and we cannot assume a single ray of light anymore (we can however assume that the incident rays are parallel). Since the micro facets will scatter the light in different directions given a bundle of parallel incident rays, we must integrate the surface response over a hemisphere, noted m in the above diagram.

It is obviously not practical to compute the full integration over the microfacets hemisphere for each shaded fragment. We will therefore rely on approximations of the integration for both the specular and diffuse components.

Dielectrics and conductors

To better understand some of the equations and behaviors shown below, we must first clearly understand the difference between metallic (conductor) and non-metallic (dielectric) surfaces.

We saw earlier that when incident light hits a surface governed by a BRDF, the light is reflected as two separate components: the diffuse reflectance and the specular reflectance. The modelization of this behavior is straightforward as shown in figure 7.

Figure 7: Modelization of the BRDF part of a BSDF

This modelization is a simplification of how the light actually interacts with the surface. In reality, part of the incident light will penetrate the surface, scatter inside, and exit the surface again as diffuse reflectance. This phenomenon is illustrated in figure 8.

Figure 8: Scattering of diffuse light

Here lies the difference between conductors and dielectrics. There is no subsurface scattering occurring with purely metallic materials, which means there is no diffuse component (and we will see later that this has an influence on the perceived color of the specular component). Scattering happens in dielectrics, which means they have both specular and diffuse components.

To properly model the BRDF we must therefore distinguish between dielectrics and conductors (scattering not shown for clarity), as shown in figure 9.

Figure 9: BRDF modelization for dielectric and conductor surfaces

Energy conservation

Energy conservation is one of the key components of a good BRDF for physically based rendering. An energy conservative BRDF states that the total amount of specular and diffuse reflectance energy is less than the total amount of incident energy. Without an energy conservative BRDF, artists must manually ensure that the light reflected off a surface is never more intense than the incident light.

Specular BRDF

For the specular term, f_m is a mirror BRDF that can be modeled with the Fresnel law, noted F in the Cook-Torrance approximation of the microfacet model integration:

f_r(v,l) = \frac{D(h,\alpha)\, G(v,l,\alpha)\, F(v,h,f_0)}{4 (n \cdot v)(n \cdot l)}    (3)

Given our real-time constraints, we must use an approximation for the three terms D, G and F. [Karis13] has compiled a great list of formulations for these three terms that can be used with the Cook-Torrance specular BRDF. The sections that follow describe the equations we picked for these terms.

Normal distribution function (specular D)

[Burley12] observed that long-tailed normal distribution functions (NDF) are a good fit for real-world surfaces. The GGX distribution described in [Walter07] is a distribution with long-tailed falloff and short peak in the highlights, with a simple formulation suitable for real-time implementations. It is also a popular model, equivalent to the Trowbridge-Reitz distribution, in modern physically based renderers.

D_{GGX}(h,\alpha) = \frac{\alpha^2}{\pi \left((n \cdot h)^2 (\alpha^2 - 1) + 1\right)^2}    (4)

The GLSL implementation of the NDF, shown in listing 1, is simple and efficient.

float D_GGX(float NoH, float linearRoughness) {
    float a2 = linearRoughness * linearRoughness;
    float f = (NoH * a2 - NoH) * NoH + 1.0;
    return a2 / (PI * f * f);
}
Listing 1: Implementation of the specular D term in GLSL

Geometric shadowing (specular G)

Eric Heitz showed in [Heitz14] that the Smith geometric shadowing function is the correct and exact G term to use. The Smith formulation is the following:

G(v,l,\alpha) = G_1(l,\alpha)\, G_1(v,\alpha)    (5)

G_1 can in turn follow several models, and is commonly set to the GGX formulation:

G_1(v,\alpha) = G_{GGX}(v,\alpha) = \frac{2 (n \cdot v)}{n \cdot v + \sqrt{\alpha^2 + (1 - \alpha^2)(n \cdot v)^2}}    (6)

The full Smith-GGX formulation thus becomes:

G(v,l,\alpha) = \frac{2 (n \cdot l)}{n \cdot l + \sqrt{\alpha^2 + (1 - \alpha^2)(n \cdot l)^2}} \cdot \frac{2 (n \cdot v)}{n \cdot v + \sqrt{\alpha^2 + (1 - \alpha^2)(n \cdot v)^2}}    (7)

We can observe that the dividends 2(n⋅l) and 2(n⋅v) allow us to simplify the original function f_r by introducing a visibility function V:

f_r(v,l) = D(h,\alpha)\, V(v,l,\alpha)\, F(v,h,f_0)    (8)

Where:

V(v,l,\alpha) = \frac{G(v,l,\alpha)}{4 (n \cdot v)(n \cdot l)} = V_1(l,\alpha)\, V_1(v,\alpha)    (9)

And:

V_1(v,\alpha) = \frac{1}{n \cdot v + \sqrt{\alpha^2 + (1 - \alpha^2)(n \cdot v)^2}}    (10)

Heitz notes however that taking the height of the microfacets into account to correlate masking and shadowing leads to more accurate results. He defines the height-correlated Smith function thusly:

G(v,l,h,\alpha) = \frac{\chi^+(v \cdot h)\, \chi^+(l \cdot h)}{1 + \Lambda(v) + \Lambda(l)}    (11)

\Lambda(m) = \frac{-1 + \sqrt{1 + \alpha^2 \tan^2(\theta_m)}}{2} = \frac{-1 + \sqrt{1 + \frac{\alpha^2 (1 - \cos^2(\theta_m))}{\cos^2(\theta_m)}}}{2}    (12)

Replacing cos(θ_m) with n⋅v, we obtain:

\Lambda(v) = \frac{1}{2} \left( \frac{\sqrt{\alpha^2 + (1 - \alpha^2)(n \cdot v)^2}}{n \cdot v} - 1 \right)    (13)

From which we can derive the visibility function:

V(v,l,\alpha) = \frac{0.5}{n \cdot l \sqrt{(n \cdot v)^2 (1 - \alpha^2) + \alpha^2} + n \cdot v \sqrt{(n \cdot l)^2 (1 - \alpha^2) + \alpha^2}}    (14)

The GLSL implementation of the visibility term, shown in listing 2, is a bit more expensive than we would like since it requires two sqrt operations.

float V_SmithGGXCorrelated(float NoV, float NoL, float linearRoughness) {
    float a2 = linearRoughness * linearRoughness;
    float GGXV = NoL * sqrt(NoV * NoV * (1.0 - a2) + a2);
    float GGXL = NoV * sqrt(NoL * NoL * (1.0 - a2) + a2);
    return 0.5 / (GGXV + GGXL);
}
Listing 2: Implementation of the specular V term in GLSL

We can optimize this visibility function by using an approximation after noticing that all the terms under the square roots are squares and that all the terms are in the [0..1] range:

V(v,l,\alpha) = \frac{0.5}{n \cdot l \left(n \cdot v (1 - \alpha) + \alpha\right) + n \cdot v \left(n \cdot l (1 - \alpha) + \alpha\right)}    (15)

This approximation is mathematically wrong but saves two square root operations and is good enough for real-time mobile applications, as shown in listing 3.

float V_SmithGGXCorrelatedFast(float NoV, float NoL, float linearRoughness) {
    float a = linearRoughness;
    float GGXV = NoL * (NoV * (1.0 - a) + a);
    float GGXL = NoV * (NoL * (1.0 - a) + a);
    return 0.5 / (GGXV + GGXL);
}
Listing 3: Implementation of the approximated specular V term in GLSL

[Hammon17] proposes the same approximation based on the same observation that the square root can be removed. It does so by rewriting the expressions as lerps:

V(v,l,\alpha) = \frac{0.5}{\mathrm{lerp}\left(2 (n \cdot l)(n \cdot v),\ n \cdot l + n \cdot v,\ \alpha\right)}    (16)
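
As a sketch, this lerp-based form maps directly to GLSL using mix as lerp (the function name below is hypothetical and not part of Filament's shader library):

float V_SmithGGXCorrelatedFastLerp(float NoV, float NoL, float a) {
    // mix(x, y, a) implements lerp(x, y, a)
    return 0.5 / mix(2.0 * NoL * NoV, NoL + NoV, a);
}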

Fresnel (specular F)

The Fresnel term defines how light reflects and refracts at the interface between two different media. [Schlick94] describes an inexpensive approximation of the Fresnel term for the Cook-Torrance specular BRDF:

F_{Schlick}(v,h,f_0,f_{90}) = f_0 + (f_{90} - f_0)(1 - v \cdot h)^5    (17)

The constant f_0 represents the specular reflectance at normal incidence and is achromatic for dielectrics, and chromatic for metals. The actual value depends on the index of refraction of the interface. The GLSL implementation of this term requires the use of a pow, as shown in listing 4, which can be replaced by a few multiplications.

vec3 F_Schlick(float VoH, vec3 f0, float f90) {
    return f0 + (vec3(f90) - f0) * pow(1.0 - VoH, 5.0);
}
Listing 4: Implementation of the specular F term in GLSL

This Fresnel function can be seen as interpolating between the incident specular reflectance and the reflectance at grazing angles, represented here by f_90. Observation of real world materials shows that both dielectrics and conductors exhibit achromatic specular reflectance at grazing angles and that the Fresnel reflectance is 1.0 at 90°. A more correct f_90 is discussed in section 4.6.2.

Using f_90 set to 1, the Schlick approximation for the Fresnel term can be optimized for scalar operations by refactoring the code slightly. The result is shown in listing 5.

vec3 F_Schlick(float VoH, vec3 f0) {
    float f = pow(1.0 - VoH, 5.0);
    return f + f0 * (1.0 - f);
}
Listing 5: Scalar optimization of the specular F term in GLSL

Diffuse BRDF

In the diffuse term, f_m is a Lambertian function and the diffuse term of the BRDF becomes:

f_d(v,l) = \frac{\sigma}{\pi} \frac{1}{|n \cdot v| |n \cdot l|} \int_\Omega D(m,\alpha)\, G(v,l,m)\, (v \cdot m)(l \cdot m)\, dm    (18)

Our implementation will instead use a simple Lambertian BRDF that assumes a uniform diffuse response over the microfacets hemisphere:

f_d(v,l) = \frac{\sigma}{\pi}    (19)

In practice, the diffuse reflectance σ is multiplied later, as shown in listing 6.

float Fd_Lambert() {
    return 1.0 / PI;
}

vec3 Fd = diffuseColor * Fd_Lambert();
Listing 6: Implementation of the diffuse Lambertian BRDF in GLSL

The Lambertian BRDF is obviously extremely efficient and delivers results close enough to more complex models.

However, the diffuse part would ideally be coherent with the specular term and take the surface roughness into account. Both the Disney diffuse BRDF [Burley12] and the Oren-Nayar model [Oren94] take roughness into account and create some retro-reflection at grazing angles. Given our constraints we decided that the extra runtime cost does not justify the slight increase in quality. Such a sophisticated diffuse model also renders image-based lighting and spherical harmonics lighting more difficult to express and implement.

For completeness, the Disney diffuse BRDF expressed in [Burley12] is the following:

f_d(v,l) = \frac{\sigma}{\pi} F_{Schlick}(n,l,1,f_{90})\, F_{Schlick}(n,v,1,f_{90})    (20)

Where:

f_{90} = 0.5 + 2\, \alpha \cos^2(\theta_d)    (21)

It is important to note that the roughness used in this formula is the perceptually linear roughness (more on this in section 3.7).

float F_Schlick(float VoH, float f0, float f90) {
    return f0 + (f90 - f0) * pow(1.0 - VoH, 5.0);
}

float Fd_Burley(float NoV, float NoL, float LoH, float linearRoughness) {
    float f90 = 0.5 + 2.0 * linearRoughness * LoH * LoH;
    float lightScatter = F_Schlick(NoL, 1.0, f90);
    float viewScatter = F_Schlick(NoV, 1.0, f90);
    return lightScatter * viewScatter * (1.0 / PI);
}
Listing 7: Implementation of the diffuse Disney BRDF in GLSL

Figure 10 shows a comparison between a simple Lambertian diffuse BRDF and the higher quality Disney diffuse BRDF, using a fully rough dielectric material. For comparison purposes, the right sphere was mirrored. The surface response is very similar with both BRDFs but the Disney one exhibits some nice retro-reflections at grazing angles (look closely at the left edge of the spheres).

Figure 10: Comparison between the Lambertian diffuse BRDF (left) and the Disney diffuse BRDF (right)

We could allow artists/developers to choose the Disney diffuse BRDF depending on the quality they desire and the performance of the target device. It is important to note however that the Disney diffuse BRDF is not energy conserving as expressed here.

Standard model summary

Specular term: a Cook-Torrance specular microfacet model, with a GGX normal distribution function, a Smith-GGX height-correlated visibility function, and a Schlick Fresnel function.

Diffuse term: a Lambertian diffuse model.

The full GLSL implementation of the standard model is shown in listing 8.

float D_GGX(float NoH, float a) {
    float a2 = a * a;
    float f = (NoH * a2 - NoH) * NoH + 1.0;
    return a2 / (PI * f * f);
}

vec3 F_Schlick(float VoH, vec3 f0) {
    return f0 + (vec3(1.0) - f0) * pow(1.0 - VoH, 5.0);
}

float V_SmithGGXCorrelated(float NoV, float NoL, float a) {
    float a2 = a * a;
    float GGXL = NoV * sqrt((-NoL * a2 + NoL) * NoL + a2);
    float GGXV = NoL * sqrt((-NoV * a2 + NoV) * NoV + a2);
    return 0.5 / (GGXV + GGXL);
}

float Fd_Lambert() {
    return 1.0 / PI;
}

void BRDF(...) {
    vec3 h = normalize(v + l);

    float NoV = abs(dot(n, v)) + 1e-5;
    float NoL = clamp(dot(n, l), 0.0, 1.0);
    float NoH = clamp(dot(n, h), 0.0, 1.0);
    float LoH = clamp(dot(l, h), 0.0, 1.0);

    // perceptually linear roughness (see parameterization)
    float a = roughness * roughness;

    float D = D_GGX(NoH, a);
    vec3  F = F_Schlick(LoH, f0);
    float V = V_SmithGGXCorrelated(NoV, NoL, a);

    // specular BRDF
    vec3 Fr = (D * V) * F;

    // diffuse BRDF
    vec3 Fd = diffuseColor * Fd_Lambert();

    // apply lighting...
}
Listing 8: Evaluation of the BRDF in GLSL

Parameterization

Disney’s material model described in [Burley12] is a good starting point but its numerous parameters make it impractical for real-time implementations. In addition, we would like our standard material model to be easy to understand and easy to use for both artists and developers.

Standard parameters

Table 2 describes the list of parameters that satisfy our constraints.

Parameter Definition
BaseColor Diffuse albedo for non-metallic surfaces, and specular color for metallic surfaces
Metallic Whether a surface appears to be dielectric (0.0) or conductor (1.0). Often used as a binary value (0 or 1)
Roughness Perceived smoothness (0.0) or roughness (1.0) of a surface. Smooth surfaces exhibit sharp reflections
Reflectance Fresnel reflectance at normal incidence for dielectric surfaces. This replaces an explicit index of refraction
Emissive Additional diffuse albedo to simulate emissive surfaces (such as neons, etc.) This parameter is mostly useful in an HDR pipeline with a bloom pass
Ambient occlusion Defines how much of the ambient light is accessible to a surface point. It is a per-pixel shadowing factor between 0.0 and 1.0. This parameter will be discussed in more detail in the lighting section
Table 2: Parameters of the standard model

Figure 11 shows how the metallic, roughness and reflectance parameters affect the appearance of a surface.

Figure 11: From top to bottom: varying metallic, varying dielectric roughness, varying metallic roughness, varying reflectance

Types and ranges

It is important to understand the type and range of the different parameters of our material model, described in table 3.

Parameter Type and range
BaseColor Linear RGB [0..1]
Metallic Scalar [0..1]
Roughness Scalar [0..1]
Reflectance Scalar [0..1]
Emissive Linear RGB [0..1] + exposure compensation
Ambient occlusion Scalar [0..1]
Table 3: Range and type of the standard model’s parameters

Note that the types and ranges described here are what the shader will expect. The API and/or tools UI could and should allow specifying the parameters using other types and ranges when they are more intuitive for artists.

For instance, the base color could be expressed in sRGB space and converted to linear space before being sent off to the shader. It can also be useful for artists to express the metallic, roughness and reflectance parameters as gray values between 0 and 255 (black to white).
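
As an illustration, a tool could apply the standard piecewise sRGB-to-linear conversion before uploading the base color; this is a minimal sketch, not necessarily the exact conversion used by Filament's tools:

vec3 sRGBToLinear(vec3 srgb) {
    // Standard piecewise sRGB EOTF; pow(srgb, vec3(2.2)) is a cheaper approximation
    vec3 low    = srgb / 12.92;
    vec3 high   = pow((srgb + 0.055) / 1.055, vec3(2.4));
    vec3 cutoff = step(vec3(0.04045), srgb); // 1.0 where srgb >= 0.04045
    return mix(low, high, cutoff);
}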

Another example: the emissive parameter could be expressed as a color temperature and an intensity, to simulate the light emitted by a black body.

Remapping

To make the standard material model easier and more intuitive to use for artists, we must remap the parameters baseColor, roughness and reflectance.

Base color remapping

The base color of a material is affected by the “metallicness” of said material. Dielectrics have achromatic specular reflectance but retain their base color as the diffuse color. Conductors on the other hand use their base color as the specular color and do not have a diffuse component.

The lighting equations must therefore use the diffuse color and f_0 instead of the base color. The diffuse color can easily be computed from the base color, as shown in listing 9.

vec3 diffuseColor = (1.0 - metallic) * baseColor.rgb;
Listing 9: Conversion of base color to diffuse in GLSL

Reflectance remapping

Dielectrics

The Fresnel term relies on f_0, the specular reflectance at normal incidence angle, and is achromatic for dielectrics. We will use the remapping for dielectric surfaces described in [Lagarde14]:

f_0 = 0.16 \cdot reflectance^2    (22)

The goal is to map f_0 onto a range that can represent the Fresnel values of both common dielectric surfaces (4% reflectance) and gemstones (8% to 16%). The mapping function is chosen to yield a 4% Fresnel reflectance value for an input reflectance of 0.5 (or 128 on a linear RGB gray scale). Figure 12 shows those common values and how they relate to the mapping function.

Figure 12: Common reflectance values
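
As a quick check of the mapping, an input reflectance of 0.5 indeed yields the 4% value mentioned above:

f_0 = 0.16 \times 0.5^2 = 0.04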

If the index of refraction is known (for instance, an air-water interface has an IOR of 1.33), the Fresnel reflectance can be calculated as follows:

f_0(n_{ior}) = \frac{(n_{ior} - 1)^2}{(n_{ior} + 1)^2}    (23)

And if the reflectance value is known, we can compute the corresponding IOR:

n_{ior} = \frac{2}{1 - \sqrt{f_0}} - 1    (24)
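
As an illustration, equations 23 and 24 map directly to two small helpers (hypothetical names, not part of Filament's shading library):

// f0 from the IOR of an air-material interface (equation 23)
float iorToF0(float ior) {
    float r = (ior - 1.0) / (ior + 1.0);
    return r * r;
}

// IOR recovered from a known f0 (equation 24)
float f0ToIor(float f0) {
    return 2.0 / (1.0 - sqrt(f0)) - 1.0;
}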

Table 4 describes acceptable Fresnel reflectance values for various types of materials (no real world material has a value under 2%).

Material Reflectance
Glass 3.5%
Water 2%
Common liquids 2% to 4%
Common gemstones 5% to 16%
Other dielectric materials 2% to 5%
Default value 4%
Table 4: Reflectance of common materials

Table 5 lists the f_0 values for a few metals. The values are given in sRGB and must be used as the base color in our material model:

Metal f_0 in sRGB Hexadecimal Color
Silver 0.97, 0.96, 0.91 #f7f4e8
Aluminum 0.91, 0.92, 0.92 #e8eaea
Titanium 0.76, 0.73, 0.69 #c1baaf
Iron 0.77, 0.78, 0.78 #c4c6c6
Platinum 0.83, 0.81, 0.78 #d3cec6
Gold 1.00, 0.85, 0.57 #ffd891
Brass 0.98, 0.90, 0.59 #f9e596
Copper 0.97, 0.74, 0.62 #f7bc9e
Table 5: f_0 for common metals

All materials have a Fresnel reflectance of 100% at grazing angles so we will set f_90 in the following way when evaluating the specular BRDF f_r:

f_{90} = 1.0    (25)

Figure 13 shows a red plastic ball. If you look closely at the edges of the sphere, you will be able to notice the achromatic specular reflectance at grazing angles.

Figure 13: The specular reflectance becomes achromatic at grazing angles

Conductors

The specular reflectance of metallic surfaces is chromatic:

f_0 = baseColor \cdot metallic    (26)

Listing 10 shows how f_0 is computed for both dielectric and metallic materials. It shows that the color of the specular reflectance is derived from the base color in the metallic case.

vec3 f0 = 0.16 * reflectance * reflectance * (1.0 - metallic) + baseColor * metallic;
Listing 10: Computing f_0 for dielectric and metallic materials in GLSL

Roughness remapping and clamping

The roughness is remapped to a perceptually linear range using the following formulation:

\alpha = roughness^2    (27)

Figure 14 shows a silver metallic surface with increasing roughness (from 0.0 to 1.0), using the unmodified roughness value (bottom) and the perceptually linear roughness value (top).

Figure 14: Roughness remapping comparison: perceptually linear roughness (top) and roughness (bottom)

Using this visual comparison, it is obvious that the remapped roughness is easier to understand by artists and developers. Without this remapping, shiny metallic surfaces would have to be confined to a very small range between 0.0 and 0.05.

Brent Burley made similar observations in his presentation [Burley12]. After experimenting with other remappings (cubic and quadratic mappings for instance), we have reached the conclusion that this simple square remapping delivers visually pleasing and intuitive results while being cheap for real-time applications.

Last but not least, it is important to note that the roughness parameter is used in various computations at runtime where limited floating point precision can become an issue. For instance, mediump precision floats are often implemented as half-floats (fp16) on mobile GPUs.

This causes problems when computing small values such as roughness^4 in our lighting equations (the perceptually linear roughness squared in the GGX computation). The smallest normalized value that can be represented as a half-float is 2^{-14}, or 6.1×10^{-5}. To avoid divisions by 0 on devices that do not support denormals, the result of roughness^4 must therefore not be lower than 6.1×10^{-5}. To do so, we must clamp the roughness to 0.089, which gives us 0.089^4 ≈ 6.274×10^{-5}.

Denormals should also be avoided to prevent performance drops. The roughness can also not be set to 0 to avoid obvious divisions by 0.

Since we also want specular highlights to have a minimum size (a roughness close to 0 creates almost invisible highlights), we should clamp the roughness to a safe range in the shader. This clamping has the added benefit of correcting specular aliasing1 that can appear for low roughness values.
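
A minimal sketch of such a clamp combined with the perceptual remapping of equation 27; the 0.089 bound assumes fp16 arithmetic as discussed above (fp32 targets could use a smaller bound such as the 0.045 used by Frostbite):

// Clamp perceptual roughness to avoid fp16 underflow and overly sharp highlights,
// then remap to perceptually linear roughness (equation 27)
roughness = clamp(roughness, 0.089, 1.0);
float linearRoughness = roughness * roughness;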

1 The Frostbite engine clamps the roughness of analytical lights to 0.045 to reduce specular aliasing. This is possible when using single precision floats (fp32).

Blending and layering

As noted in [Burley12] and [Neubelt13], this model allows for robust blending between different materials by simply interpolating the different parameters. In particular, this allows layering different materials using simple masks.

For instance, figure 15 shows how the studio Ready at Dawn used material blending and layering in The Order: 1886 to create complex appearances from a library of simple materials (gold, copper, wood, rust, etc.).

Figure 15: Material blending and layering. Source: Ready at Dawn Studios

The blending and layering of materials is effectively an interpolation of the various parameters of the material model. Figure 16 shows an interpolation between shiny metallic chrome and rough red plastic. While the intermediate blended materials make little physical sense, they look plausible.

Figure 16: Interpolation from shiny chrome (left) to rough red plastic (right)
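
A sketch of such parameter blending driven by a mask texture (all names below are hypothetical):

// Blend two materials: mask = 0.0 selects material A, 1.0 selects material B
float mask      = texture(blendMask, uv).r;
vec3  baseColor = mix(baseColorA, baseColorB, mask);
float metallic  = mix(metallicA,  metallicB,  mask);
float roughness = mix(roughnessA, roughnessB, mask);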

Crafting physically-based materials

Designing physically-based materials is fairly easy once you understand the nature of the four main parameters: base color, metallic, roughness and reflectance.

We provide a useful chart/reference guide to help artists and developers craft their own physically-based materials.


In addition, here is a quick summary of how to use our material model:

All materials
Base color should be devoid of lighting information, except for micro-occlusion.

Metallic is almost a binary value. Pure conductors have a metallic value of 1 and pure dielectrics have a metallic value of 0. You should try to use values at or close to 0 and 1. Intermediate values are meant for transitions between surface types (metal to rust, for instance).

Non-metallic materials
Base color represents the reflected color and should be an sRGB value in the range 50-240 (strict range) or 30-240 (tolerant range).

Metallic should be 0 or close to 0.

Reflectance should be set to 127 sRGB (0.5 linear, 4% reflectance) if you cannot find a proper value. Do not use values under 90 sRGB (0.35 linear, 2% reflectance).

Metallic materials
Base color represents both the specular color and reflectance. Use values with a luminosity of 67% to 100% (170-255 sRGB). Oxidized or dirty metals should use a lower luminosity than clean metals to take into account the non-metallic components.

Metallic should be 1 or close to 1.

Reflectance is ignored (calculated from the base color).

Clear coat model

The standard material model described previously is a good fit for isotropic surfaces made of a single layer. Multi-layer materials are unfortunately fairly common, particularly materials with a thin translucent layer over a standard layer. Real world examples of such materials include car paints, soda cans, lacquered wood, acrylic, etc.

Figure 17: Comparison of a blue metallic surface under the standard material model (left) and the clear coat model (right)

A clear coat layer can be simulated as an extension of the standard material model by adding a second specular lobe, which implies evaluating a second specular BRDF. To simplify the implementation and parameterization, the clear coat layer will always be isotropic and dielectric. The base layer can be anything allowed by the standard model (dielectric or conductor).

Since incoming light will traverse the clear coat layer, we must also take the loss of energy into account as shown in figure 18. Our model will however not simulate inter-reflection and refraction behaviors.

Figure 18: Clear coat surface model

Clear coat specular BRDF

The clear coat layer will be modeled using the same Cook-Torrance microfacet BRDF used in the standard model. Since the clear coat layer is always isotropic and dielectric, with low roughness values (see section 3.8.3), we can choose cheaper DFG terms without notably sacrificing visual quality.

A survey of the terms listed in [Karis13] and [Burley12] shows that the Fresnel and NDF terms we already use in the standard model are not computationally more expensive than other terms. [Kelemen01] describes a much simpler term that can replace our Smith-GGX visibility term:

V(l,h) = \frac{1}{4 (l \cdot h)^2}    (28)

This masking-shadowing function is not physically based, as shown in [Heitz14], but its simplicity makes it desirable for real-time rendering.

In summary, our clear coat BRDF is a Cook-Torrance specular microfacet model, with a GGX normal distribution function, a Kelemen visibility function, and a Schlick Fresnel function. Listing 11 shows how trivial the GLSL implementation is.

float V_Kelemen(float LoH) {
    return 0.25 / (LoH * LoH);
}
Listing 11: Implementation of the Kelemen visibility term in GLSL

Note on the Fresnel term

The Fresnel term of the specular BRDF requires f_0, the specular reflectance at normal incidence angle. This parameter can be computed from the index of refraction of an interface. We will assume that our clear coat layer is made of polyurethane, a common compound used in coatings and varnishes, or similar. An air-polyurethane interface has an IOR of 1.5, from which we can deduce f_0:

f_0(1.5) = \frac{(1.5 - 1)^2}{(1.5 + 1)^2} = 0.04    (29)

This corresponds to a Fresnel reflectance of 4% that we know is associated with common dielectric materials.

Integration in the surface response

Because we must take into account the loss of energy caused by the addition of the clear coat layer, we can reformulate the BRDF from equation 1 thusly:

f(v,l) = f_d(v,l)(1 - F_c) + f_r(v,l)(1 - F_c)^2 + f_c(v,l)    (30)

Where F_c is the Fresnel term of the clear coat BRDF and f_c the clear coat BRDF. The multiplication of the specular component by (1 - F_c)^2 is to remain energy conservative as the light enters and exits the clear coat layer. The multiplication of the diffuse component by 1 - F_c is an attempt at energy conservation.

Clear coat parameterization

The clear coat material model encompasses all the parameters previously defined for the standard material model, plus the two parameters described in table 6.

Parameter Definition
ClearCoat Strength of the clear coat layer. Scalar between 0 and 1
ClearCoatRoughness Perceived smoothness or roughness of the clear coat layer. Scalar between 0 and 1
Table 6: Clear coat model parameters

The clear coat roughness parameter is remapped and clamped in a similar way to the roughness parameter of the standard material. The main difference is that we want to lower the clear coat roughness range from [0..1] to the smaller [0..0.6] range. This remapping is arbitrary but matches the fact that clear coat layers are almost always glossy. The remapped value is squared to produce a perceptually linear roughness value.

Figure 19 and figure 20 show how the clear coat parameters affect the appearance of a surface.

Figure 19: Clear coat varying from 0.0 (left) to 1.0 (right) with metallic set to 1.0 and roughness to 0.8

Figure 20: Clear coat roughness varying from 0.0 (left) to 1.0 (right) with metallic set to 1.0, roughness to 0.8 and clear coat to 1.0

Listing 12 shows the GLSL implementation of the clear coat material model after remapping, parameterization and integration in the standard surface response.

void BRDF(...) {
    // compute Fd and Fr from standard model

    // remapping and linearization of clear coat roughness
    clearCoatRoughness = mix(0.089, 0.6, clearCoatRoughness);
    clearCoatLinearRoughness = clearCoatRoughness * clearCoatRoughness;

    // clear coat BRDF
    float  Dc = D_GGX(NoH, clearCoatLinearRoughness);
    float  Vc = V_Kelemen(LoH);
    float  Fc = F_Schlick(LoH, 0.04, 1.0) * clearCoat; // clear coat strength
    float Frc = (Dc * Vc) * Fc;

    // account for energy loss in the base layer
    vec3 color = (Fd + Fr * (1.0 - Fc)) * (1.0 - Fc) + Frc;

    // apply lighting...
}
Listing 12: Implementation of the clear coat BRDF in GLSL

Base layer modification

The presence of a clear coat layer means that we should recompute f_0, since it is normally based on an air-material interface. The base layer thus requires f_0 to be computed based on a clear coat-material interface instead.

This can be achieved by computing the material’s index of refraction (IOR) from f_0, then computing a new f_0 based on the newly computed IOR and the IOR of the clear coat layer (1.5).

First, we compute the base layer’s IOR:

IOR_{base} = \frac{1 + \sqrt{f_0}}{1 - \sqrt{f_0}}

Then we compute the new f0f0 from this new index of refraction:

f_{0base} = \left( \frac{IOR_{base} - 1.5}{IOR_{base} + 1.5} \right)^2

Since the clear coat layer’s IOR is fixed, we can combine both steps to simplify:

f_{0base} = \frac{\left(1 - 5\sqrt{f_0}\right)^2}{\left(5 - \sqrt{f_0}\right)^2}
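
A minimal GLSL sketch of this simplified conversion (the helper name is hypothetical):

// Recomputes the base layer's f0 for a clear coat-material interface (IOR 1.5)
vec3 f0ClearCoatToSurface(vec3 f0) {
    vec3 sqrtF0 = sqrt(f0);
    vec3 num = 1.0 - 5.0 * sqrtF0;
    vec3 den = 5.0 - sqrtF0;
    return (num * num) / (den * den);
}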

We should also modify the base layer’s apparent roughness based on the IOR of the clear coat layer but this is something we have opted to leave out for now.

Anisotropic model

The standard material model described previously can only describe isotropic surfaces, that is, surfaces whose properties are identical in all directions. Many real-world materials, such as brushed metal, can, however, only be replicated using an anisotropic model.

Figure 21: Comparison of isotropic material (left) and anisotropic material (right)

Anisotropic specular BRDF

The isotropic specular BRDF described previously can be modified to handle anisotropic materials. Burley achieves this by using an anisotropic GGX NDF:

D_{aniso}(h,\alpha) = \frac{1}{\pi \alpha_t \alpha_b} \frac{1}{\left( \left(\frac{t \cdot h}{\alpha_t}\right)^2 + \left(\frac{b \cdot h}{\alpha_b}\right)^2 + (n \cdot h)^2 \right)^2}    (31)

This NDF unfortunately relies on two supplemental roughness terms noted α_b, the roughness along the bitangent direction, and α_t, the roughness along the tangent direction. Neubelt and Pettineo [Neubelt13] propose a way to derive α_b from α_t by using an anisotropy parameter that describes the relationship between the two roughness values for a material:

\alpha_t = \alpha
\alpha_b = \mathrm{lerp}(0, \alpha, 1 - anisotropy)

The relationship defined in [Burley12] is different, offers more pleasant and intuitive results, but is slightly more expensive:

\alpha_t = \frac{\alpha}{\sqrt{1 - 0.9 \times anisotropy}}
\alpha_b = \alpha \sqrt{1 - 0.9 \times anisotropy}

We instead opted to follow the relationship described in [Kulla17] as it allows creation of sharp highlights:

\alpha_t = \alpha \times (1 + anisotropy)
\alpha_b = \alpha \times (1 - anisotropy)

Note that this NDF requires the tangent and bitangent directions in addition to the normal direction. Since these directions are already needed for normal mapping, providing them may not be an issue.

The resulting implementation is described in listing 13.

float at = max(linearRoughness * (1.0 + anisotropy), 0.001);
float ab = max(linearRoughness * (1.0 - anisotropy), 0.001);

float D_GGX_Anisotropic(float NoH, const vec3 h,
        const vec3 t, const vec3 b, float at, float ab) {
    float ToH = dot(t, h);
    float BoH = dot(b, h);
    float a2 = at * ab;
    vec3 v = vec3(ab * ToH, at * BoH, a2 * NoH);
    return a2 * sqr(a2 / dot(v, v)) * (1.0 / PI);
}
Listing 13: Implementation of Burley’s anisotropic NDF in GLSL

In addition, [Heitz14] presents an anisotropic masking-shadowing function to match the height-correlated GGX distribution. The masking-shadowing term can be greatly simplified by using the visibility function instead:

G(v,l,h,\alpha) = \frac{\chi^+(v \cdot h)\, \chi^+(l \cdot h)}{1 + \Lambda(v) + \Lambda(l)}    (32)

\Lambda(m) = \frac{-1 + \sqrt{1 + \alpha_0^2 \tan^2(\theta_m)}}{2} = \frac{-1 + \sqrt{1 + \frac{\alpha_0^2 (1 - \cos^2(\theta_m))}{\cos^2(\theta_m)}}}{2}    (33)

Where:

\alpha_0 = \sqrt{\cos^2(\phi_0)\, \alpha_x^2 + \sin^2(\phi_0)\, \alpha_y^2}    (34)

After derivation we obtain:

V_{aniso}(n \cdot l, n \cdot v, \alpha) = \frac{1}{2 \left( (n \cdot l)\, \hat\Lambda_v + (n \cdot v)\, \hat\Lambda_l \right)}
\hat\Lambda_v = \sqrt{\alpha_t^2 (t \cdot v)^2 + \alpha_b^2 (b \cdot v)^2 + (n \cdot v)^2}
\hat\Lambda_l = \sqrt{\alpha_t^2 (t \cdot l)^2 + \alpha_b^2 (b \cdot l)^2 + (n \cdot l)^2}    (35)

The term \hat\Lambda_v is the same for every light and can be computed only once if needed. The resulting implementation is described in listing 14.

float at = max(linearRoughness * (1.0 + anisotropy), 0.001);
float ab = max(linearRoughness * (1.0 - anisotropy), 0.001);

float V_SmithGGXCorrelated_Anisotropic(float at, float ab, float ToV, float BoV,
        float ToL, float BoL, float NoV, float NoL) {
    float lambdaV = NoL * length(vec3(at * ToV, ab * BoV, NoV));
    float lambdaL = NoV * length(vec3(at * ToL, ab * BoL, NoL));
    float v = 0.5 / (lambdaV + lambdaL);
    return saturateMediump(v);
}
Listing 14: Implementation of the anisotropic visibility function in GLSL

Anisotropic parameterization

The anisotropic material model encompasses all the parameters previously defined for the standard material model, plus an extra parameter described in table 7.

Parameter Definition
Anisotropy Amount of anisotropy. Scalar between −1 and 1
Table 7: Anisotropic model parameters

No further remapping is required. Note that negative values will align the anisotropy with the bitangent direction instead of the tangent direction. Figure 22 shows how the anisotropy parameter affects the appearance of a rough metallic surface.

Figure 22: Anisotropy varying from 0.0 (left) to 1.0 (right)

Subsurface model

[TODO]

Subsurface specular BRDF

[TODO]

Subsurface parameterization

[TODO]

Cloth model

All the material models described previously are designed to simulate dense surfaces, both at a macro and at a micro level. Clothes and fabrics are however often made of loosely connected threads that absorb and scatter incident light. The microfacet BRDFs presented earlier do a poor job of recreating the nature of cloth due to their underlying assumption that a surface is made of random grooves that behave as perfect mirrors. When compared to hard surfaces, cloth is characterized by a softer specular lobe with a large falloff and the presence of fuzz lighting, caused by forward/backward scattering. Some fabrics also exhibit two-tone specular colors (velvets for instance).

Figure 23 shows how a traditional microfacet BRDF fails to capture the appearance of a sample of denim fabric. The surface appears rigid (almost plastic-like), more similar to a tarp than a piece of clothing. This figure also shows how important the softer specular lobe caused by absorption and scattering is to the faithful recreation of the fabric.

Figure 23: Comparison of denim fabric rendered using a traditional microfacet BRDF (left) and our cloth BRDF (right)

Velvet is an interesting use case for a cloth material model. As shown in figure 24 this type of fabric exhibits strong rim lighting due to forward and backward scattering. These scattering events are caused by fibers standing straight at the surface of the fabric. When the incident light comes from the direction opposite to the view direction, the fibers will forward-scatter the light. Similarly, when the incident light comes from the same direction as the view direction, the fibers will scatter the light backward.

Figure 24: Velvet fabric showcasing forward and backward scattering

Since fibers are flexible, we should in theory model the ability to groom the surface. While our model does not replicate this characteristic, it does model a visible front facing specular contribution that can be attributed to the random variance in the direction of the fibers.

It is important to note that there are types of fabrics that are still best modeled by hard surface material models. For instance, leather, silk and satin can be recreated using the standard or anisotropic material models.

Cloth specular BRDF

The cloth specular BRDF we use is a modified microfacet BRDF as described by Ashikhmin and Premoze in [Ashikhmin07]. In their work, Ashikhmin and Premoze note that the distribution term is what contributes most to a BRDF and that the shadowing/masking term is not necessary for their velvet distribution. The distribution term itself is an inverted Gaussian distribution. This helps achieve fuzz lighting (forward and backward scattering) while an offset is added to simulate the front facing specular contribution. The so-called velvet NDF is defined as follows:

D_{velvet}(v,h,\alpha) = c_{norm} \left( 1 + 4 \exp\left( \frac{-\cot^2 \theta_h}{\alpha^2} \right) \right)    (36)

This NDF is a variant of the NDF the same authors describe in [Ashikhmin00], notably modified to include an offset (set to 1 here) and an amplitude (4). In [Neubelt13], Neubelt and Pettineo propose a normalized version of this NDF:

D_{velvet}(v,h,\alpha) = \frac{1}{\pi (1 + 4 \alpha^2)} \left( 1 + 4\, \frac{\exp\left( \frac{-\cot^2 \theta_h}{\alpha^2} \right)}{\sin^4 \theta_h} \right)    (37)

For the full specular BRDF, we also follow [Neubelt13] and replace the traditional denominator with a smoother variant:

f_r(v,h,\alpha) = \frac{F(v,h)\, D_{velvet}(v,h,\alpha)}{4 \left( n \cdot l + n \cdot v - (n \cdot l)(n \cdot v) \right)}    (38)

The implementation of the velvet NDF is presented in listing 15, optimized to properly fit in half float formats and to avoid computing a costly cotangent, relying instead on trigonometric identities.

float D_Ashikhmin(float linearRoughness, float NoH) {
    // Ashikhmin 2007, "Distribution-based BRDFs"
    float a2 = linearRoughness * linearRoughness;
    float cos2h = NoH * NoH;
    float sin2h = max(1.0 - cos2h, 0.0078125); // 2^(-14/2), so sin2h^2 > 0 in fp16
    float sin4h = sin2h * sin2h;
    float cot2 = -cos2h / (a2 * sin2h);
    return 1.0 / (PI * (4.0 * a2 + 1.0) * sin4h) * (4.0 * exp(cot2) + sin4h);
}
Listing 15: Implementation of Ashikhmin’s velvet NDF in GLSL

Sheen color

To offer better control over the appearance of cloth and to give users the ability to recreate two-tone specular materials, we introduce the ability to directly modify the specular reflectance. Figure 25 shows an example of using the parameter we call “sheen color”.

Figure 25: Blue fabric without (left) and with (right) sheen

Cloth diffuse BRDF

Our cloth material model still relies on a Lambertian diffuse BRDF. It is however slightly modified to be energy conservative (akin to the energy conservation of our clear coat material model) and offers an optional subsurface scattering term. This extra term is not physically-based and can be used to simulate the scattering, partial absorption and re-emission of light in certain types of fabrics.

First, here is the diffuse term without the optional subsurface scattering:

f_d(v,h) = \frac{c_{diff}}{\pi} \left( 1 - F(v,h) \right)    (39)

Where F(v,h) is the Fresnel term of the cloth specular BRDF in equation 38.

Subsurface scattering is implemented using the wrapped diffuse lighting technique, in its energy conservative form:

f_d(v,h) = \frac{c_{diff}}{\pi} \left( 1 - F(v,h) \right) \left\langle \frac{n \cdot l + w}{(1 + w)^2} \right\rangle \left\langle c_{subsurface} + n \cdot l \right\rangle    (40)

Where w is a value between 0 and 1 defining how much the diffuse light should wrap around the terminator. To avoid introducing another parameter, we fix w = 0.5. Note that with wrap diffuse lighting, the diffuse term must not be multiplied by n⋅l. The effect of this cheap subsurface scattering approximation can be seen in figure 26.

Figure 26: White cloth (left column) vs white cloth with brown subsurface scattering (right)

The complete implementation of our cloth BRDF, including sheen color and optional subsurface scattering, can be found in listing 16.

// specular BRDF
float D = distributionCloth(linearRoughness, NoH);
float V = visibilityCloth(NoV, NoL);
vec3  F = fresnel(sheenColor, LoH);
vec3 Fr = (D * V) * F;

// diffuse BRDF
float diffuse = diffuse(linearRoughness, NoV, NoL, LoH);
#if defined(MATERIAL_HAS_SUBSURFACE_COLOR)
// energy conservative wrap diffuse
diffuse *= saturate((dot(n, light.l) + 0.5) / 2.25);
#endif
vec3 Fd = (diffuse * (1.0 - F)) * pixel.diffuseColor;

#if defined(MATERIAL_HAS_SUBSURFACE_COLOR)
// cheap subsurface scatter
Fd *= saturate(subsurfaceColor + NoL);
vec3 color = Fd + Fr * NoL;
color *= (lightIntensity * lightAttenuation) * lightColor;
#else
vec3 color = Fd + Fr;
color *= (lightIntensity * lightAttenuation * NoL) * lightColor;
#endif
Listing 16: Implementation of our cloth BRDF in GLSL

Cloth parameterization

The cloth material model encompasses all the parameters previously defined for the standard material model except for metallic and reflectance. The two extra parameters described in table 8 are also available.

Parameter Definition
SheenColor Specular tint to create two-tone specular fabrics (defaults to 0.04 to match the standard reflectance)
SubsurfaceColor Tint for the diffuse color after scattering and absorption through the material
Table 8: Cloth model parameters

To create a velvet-like material, the base color can be set to black (or a dark color). Chromaticity information should instead be set on the sheen color. To create more common fabrics such as denim, cotton, etc. use the base color for chromaticity and use the default sheen color or set the sheen color to the luminance of the base color.
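
For instance, the luminance of the base color can be computed with Rec. 709 weights; this is a common sketch, not necessarily the exact weights used by Filament:

// Relative luminance of a linear RGB color (Rec. 709 / BT.709 weights)
float luminance(vec3 linearColor) {
    return dot(linearColor, vec3(0.2126, 0.7152, 0.0722));
}

// A possible default sheen color for common fabrics
vec3 sheenColor = vec3(luminance(baseColor));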

Lighting

The correctness and coherence of the lighting environment is paramount to achieving plausible visuals. After surveying existing rendering engines (such as Unity or Unreal Engine 4) as well as the traditional real-time rendering literature, it is obvious that coherency is rarely achieved.

The Unreal Engine, for instance, lets artists specify the “brightness” of a point light in lumens, a unit of luminous power. The brightness of directional lights is however expressed using an arbitrary unnamed unit. To match the brightness of a point light with a luminous power of 5,000 lumens, the artist must use a directional light of brightness 10. This kind of mismatch makes it difficult for artists to maintain the visual integrity of a scene when adding, removing or modifying lights. Using solely arbitrary units is a coherent solution but it makes reusing lighting rigs a difficult task. For instance, an outdoor scene will use a directional light of brightness 10 as the sun and all other lights will be defined relative to that value. Moving these lights to an indoor environment would make them too bright.

Our goal is therefore to make all lighting correct by default, while giving artists enough freedom to achieve the desired look. We will support a number of lights, split in two categories, direct and indirect lighting:

Direct lighting: punctual lights, photometric lights, area lights.

Indirect lighting: image based lights (IBLs), for both local2 and distant light probes.

2 Local light probes might be too expensive to support on mobile; we will first focus our efforts on distant light probes set at infinity

Units

The following sections will discuss how to implement various types of lights and the proposed equations make use of different symbols and units summarized in table 9.

Photometric term Notation Unit
Luminous power Φ Lumen (lm)
Luminous intensity I Candela (cd), or lm/sr
Illuminance E Lux (lx), or lm/m²
Luminance L Nit (nt), or cd/m²
Radiant power Φe Watt (W)
Luminous efficacy η Lumens per watt (lm/W)
Luminous efficiency V Percentage (%)
Table 9: Photometric units

To get properly coherent lighting, we must use light units that respect the ratio between various light intensities found in real-world scenes. These intensities can vary greatly, from around 800 lm for a household light bulb to 120,000 lx for daylight sky and sun illumination.

The easiest way to achieve lighting coherency is to adopt physical light units. This will in turn enable full reusability of lighting rigs. Using physical light units also allows us to use a physically based camera.

Table 10 shows the light unit associated with each type of light we intend to support.

Light type Unit
Directional light Illuminance (lx or lm/m²)
Point light Luminous power (lm)
Spot light Luminous power (lm)
Photometric light Luminous intensity (cd)
Masked photometric light Luminous power (lm)
Area light Luminous power (lm)
Image based light Luminance (cd/m²)
Table 10: Intensity unit for each light type

Notes about the radiant power unit

Even though commercially available light bulbs often display their brightness in lumens on the packaging, it is common to refer to the brightness of a light bulb by using its required energy in watts. The number of watts only indicates how much energy a bulb uses, not how bright it is. It is even more important to understand this difference now that more energy efficient bulbs are readily available (halogens, LEDs, etc.).

However, since artists might be accustomed to gauging a light’s brightness by its power, we should allow users to use the power unit to define the brightness of a light. The conversion is presented in equation 41.

Φ = Φe η   (41)

In equation 41, η is the luminous efficacy of the light, expressed in lumens per watt. Knowing that the maximum possible luminous efficacy is 683 lm/W, we can also use the luminous efficiency V (also called luminous coefficient), as shown in equation 42.

Φ = Φe × 683 × V   (42)

Table 11 can be used as a reference to convert watts to lumens using either the luminous efficacy or the luminous efficiency of various types of lights. More specific values are available on Wikipedia’s luminous efficacy page.

Light type Efficacy η Efficiency V
Incandescent 14-35 2-5%
LED 28-100 4-15%
Fluorescent 60-100 9-15%
Table 11: Efficacy and efficiency of various light types
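
As a simple illustration of equation 42, the conversion could be implemented as a one-line helper (GLSL syntax is used here for consistency with the other listings; the function name is illustrative):

// Converts radiant power (W) to luminous power (lm) using the luminous
// efficiency V of the bulb (see equation 42 and table 11),
// e.g. V = 0.02 to 0.05 for an incandescent bulb
float radiantPowerToLuminousPower(float watts, float efficiency) {
    // 683 lm/W is the maximum possible luminous efficacy
    return watts * 683.0 * efficiency;
}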

Light units validation

One of the big advantages of using physical light units is the ability to physically validate our equations. We can use specialized devices to measure three light units.

Illuminance

The illuminance reaching a surface can be measured using an incident light meter. For our tests, we use a Sekonic L-478D, shown in figure 27.

The incident light meter uses a white diffuse dome to capture the illuminance reaching a surface. It is important to orient the dome properly depending on the desired measurement. For instance, orienting the dome perpendicular to the sun on a bright clear day will give very different results than orienting the dome horizontally.

Figure 27: Sekonic L-478D incident light meter

Luminance

The luminance at a surface, or the product of the incident light and the surface, can be measured using a luminance meter, also often called a spot meter. While incident light meters use a diffuse hemisphere to capture light from all directions, a spot meter uses a shield to measure incident light from a single direction. For our tests, we use a Sekonic 5° Viewfinder that can replace the diffuser on the L-478D to measure luminance in a 5° cone.

Sekonic L-478D working as a luminance meter using a special viewfinder

Luminous intensity

The luminous intensity of a light source cannot be measured directly but can be derived from the measured illuminance if we know the distance between the measuring device and the light source. Equation 43 is a simple application of the inverse square law discussed in section 4.2.2.

I = E ⋅ d²   (43)

Direct lighting

We have defined the light units for all the light types supported by the renderer in the section above but we have not defined the light unit for the result of the lighting equations. Choosing physical light units means that we will compute luminance values in our shaders, and therefore that all our light evaluation functions will compute the luminance Lout (or outgoing radiance) at any given point. The luminance depends on the illuminance E and the BSDF f(v, l):

Lout = f(v, l) E   (44)

Directional lights

The main purpose of directional lights is to recreate important light sources for outdoor environments, i.e. the sun and/or the moon. While directional lights do not truly exist in the physical world, any light source sufficiently far from the light receptor can be assumed to be directional (i.e. all the incident light rays are parallel, as shown in figure 28).

Figure 28: Interaction between a directional light and a surface. The light source is a virtual construct that can only be represented by a direction

This approximation proves to work incredibly well for the diffuse response of a surface but the specular response is incorrect. The Frostbite engine solves this problem by treating the “sun” directional light as a disc area light. However, our tests have shown that the quality increase does not justify the added computational costs.

We earlier stated that we chose an illuminance light unit (lx) for directional lights. This is in part due to the fact that we can easily find illuminance values for the sky and the sun (online or with a light meter) but also to simplify the luminance equation described in equation 44.

Lout = f(v, l) E⊥ ⟨n⋅l⟩   (45)

In the simplified luminance equation 45, E⊥ is the illuminance of the light source for a surface perpendicular to said light source. If the directional light source simulates the sun, E⊥ is the illuminance of the sun for a surface perpendicular to the sun direction.

Table 12 provides useful reference values for the sun and sky illumination, measured3 on a clear day in March, in California.

Light 10am 12pm 5:30pm
Sky⊥ + Sun⊥ 120,000 130,000 90,000
Sky⊥ 20,000 25,000 9,000
Sun⊥ 100,000 105,000 81,000
Table 12: Illuminance values in lx (a full moon has an illuminance of 1 lx)

Dynamic directional lights are particularly cheap to evaluate at runtime, as shown in listing 17.

vec3 l = normalize(-lightDirection);
float NoL = clamp(dot(n, l), 0.0, 1.0);

// lightIntensity is the illuminance
// at perpendicular incidence in lux
float illuminance = lightIntensity * NoL;
vec3 luminance = BSDF(v, l) * illuminance;
Listing 17: Implementation of directional lights in GLSL

Figure 29 shows the effect of lighting a simple scene with a directional light setup to approximate a midday Sun (illuminance set to 110,000 lx). For illustration purposes, only direct lighting is shown.

Figure 29: Series of dielectric materials of varying roughness under a directional light
3 Measurements taken with an incident light meter (Sekonic L-478D)

Punctual lights

Our engine will support two types of punctual lights, commonly found in most if not all rendering engines: point lights and spot lights. These types of lights are traditionally physically inaccurate for two reasons:

  1. They are truly punctual and infinitesimally small.
  2. They do not follow the inverse square law.

The first issue can be addressed with area lights but, given the cheaper nature of punctual lights, it is deemed practical to use infinitesimally small punctual lights whenever possible.

The second issue is easy to fix. For a given punctual light, the perceived intensity decreases proportionally to the square of the distance from the viewer (more precisely, the light receptor).

For punctual lights following the inverse square law, the term E of equation 44 is expressed in equation 46, where d is the distance from a point at the surface to the light.

E = Lin ⟨n⋅l⟩ = (I / d²) ⟨n⋅l⟩   (46)

The difference between point and spot lights lies in how E is computed, and in particular how the luminous intensity I is computed from the luminous power Φ.

Point lights

A point light is defined only by a position in space, as shown in figure 30.

Figure 30: Interaction between a point light and a surface. The attenuation only depends on the distance to the light

The luminous power of a point light is calculated by integrating the luminous intensity over the light’s solid angle, as shown in equation 47. The luminous intensity can then be easily derived from the luminous power.

Φ = ∫Ω I dl = ∫₀²π ∫₀π I dθ dφ = 4π I   ⟹   I = Φ / 4π   (47)

By simple substitution of I in equation 46 and E in equation 44 we can formulate the luminance equation of a point light as a function of the luminous power (see equation 48).

Lout = f(v, l) (Φ / (4π d²)) ⟨n⋅l⟩   (48)

Figure 31 shows the effect of lighting a simple scene with a point light subject to distance attenuation. Light falloff is exaggerated for illustration purposes.

Figure 31: Inverse square law applied to point lights evaluation

Spot lights

A spot light is defined by a position in space, a direction vector and two cone angles, θinner and θouter (see figure 32). These two angles are used to define the angular falloff attenuation of the spot light. The light evaluation function of a spot light must therefore take into account both the inverse square law and these two angles to properly evaluate the luminance attenuation.

Figure 32: Interaction between a spot light and a surface. The attenuation depends on the distance to the light and the angle between the surface and the spot light’s direction vector

Equation 49 describes how the luminous power of a spot light can be calculated in a similar fashion to point lights, using θouter, the outer angle of the spot light’s cone, in the range [0..π].

Φ = ∫Ω I dl = ∫₀²π ∫₀^θouter I dθ dφ = 2π (1 − cos(θouter / 2)) I   ⟹   I = Φ / (2π (1 − cos(θouter / 2)))   (49)

While this formulation is physically correct, it makes spot lights a little difficult to use: changing the outer angle of the cone changes the illumination levels. Figure 33 shows the same scene lit by a spot light, with an outer angle of 55° and an outer angle of 15°. Observe how the illumination level increases as the cone aperture decreases.

Figure 33: Comparison of spot light outer angles, 55° (left) and 15° (right)

The coupling of illumination and the outer cone means that an artist cannot tweak the influence cone of a spot light without also changing the perceived illumination. It therefore makes sense to provide artists with a parameter to disable this coupling. Equation 50 shows how to formulate the luminous power for that purpose.

Φ = π I   ⟹   I = Φ / π   (50)

With this new formulation to compute the luminous intensity, the test scene in figure 34 exhibits similar illumination levels with both cone apertures.

Figure 34: Comparison of spot light outer angles, 55° (left) and 15° (right)

This new formulation can also be considered physically based if the spot’s reflector is replaced with a matte, diffuse mask that absorbs light perfectly.

The spot light evaluation function can be expressed in two ways:

  • With a light absorber:
    Lout = f(v, l) (Φ / (π d²)) ⟨n⋅l⟩ λ(l)   (51)
  • With a light reflector:
    Lout = f(v, l) (Φ / (2π (1 − cos(θouter / 2)) d²)) ⟨n⋅l⟩ λ(l)   (52)

The term λ(l) in equations 51 and 52 is the spot’s angle attenuation factor described in equation 53 below.

λ(l) = (l ⋅ spotDirection − cos θouter) / (cos θinner − cos θouter)   (53)

Attenuation function

A proper evaluation of the inverse square law attenuation factor is mandatory for physically based punctual lights. The simple mathematical formulation is unfortunately impractical for implementation purposes:

  1. The division by the squared distance can lead to divides by 0 when objects intersect or “touch” light sources.
  2. The influence sphere of each light is infinite (I/d² is asymptotic, it never reaches 0) which means that to correctly shade a pixel we need to evaluate every light in the world.

The first issue can be solved easily by assuming that punctual lights are not truly punctual but instead small area lights. To do this we can simply treat punctual lights as spheres of 1 cm radius, as shown in equation 54.

E = I / max(d², 0.01²)   (54)

We can solve the second issue by introducing an influence radius for each light. There are several advantages to this solution. Tools can quickly show artists what parts of the world will be influenced by every light (the tool just needs to draw a sphere centered on each light). The rendering engine can cull lights more aggressively using this extra piece of information and artists/developers can assist the engine by manually tweaking the influence radius of a light.

Mathematically, the illuminance of a light should smoothly reach zero at the limit defined by the influence radius. [Karis13] proposes to window the inverse square function in such a way that the majority of the light’s influence remains unaffected. The proposed windowing is described in equation 55, where r is the light’s radius of influence.

E = (I / max(d², 0.01²)) ⟨1 − d⁴/r⁴⟩²   (55)

Listing 18 demonstrates how to implement physically based punctual lights in GLSL. Note that the light intensity used in this piece of code is the luminous intensity I in cd, converted from the luminous power CPU-side. This snippet is not optimized and some of the computations can be offloaded to the CPU (for instance the square of the light’s inverse falloff radius, or the spot scale and angle).

float getSquareFalloffAttenuation(vec3 posToLight, float lightInvRadius) {
    float distanceSquare = dot(posToLight, posToLight);
    float factor = distanceSquare * lightInvRadius * lightInvRadius;
    float smoothFactor = max(1.0 - factor * factor, 0.0);
    return (smoothFactor * smoothFactor) / max(distanceSquare, 1e-4);
}

float getSpotAngleAttenuation(vec3 l, vec3 lightDir,
        float innerAngle, float outerAngle) {
    // the scale and offset computations can be done CPU-side
    float cosOuter = cos(outerAngle);
    float spotScale = 1.0 / max(cos(innerAngle) - cosOuter, 1e-4);
    float spotOffset = -cosOuter * spotScale;

    float cd = dot(normalize(-lightDir), l);
    float attenuation = clamp(cd * spotScale + spotOffset, 0.0, 1.0);
    return attenuation * attenuation;
}

vec3 evaluatePunctualLight() {
    vec3 posToLight = lightPosition - worldPosition;
    vec3 l = normalize(posToLight);
    float NoL = clamp(dot(n, l), 0.0, 1.0);

    float attenuation;
    attenuation  = getSquareFalloffAttenuation(posToLight, lightInvRadius);
    attenuation *= getSpotAngleAttenuation(l, lightDir, innerAngle, outerAngle);

    vec3 luminance = (BSDF(v, l) * lightIntensity * attenuation * NoL) * lightColor;
    return luminance;
}
Listing 18: Implementation of punctual lights in GLSL

Photometric lights

Punctual lights are an extremely practical and efficient way to light a scene but do not give artists enough control over the light distribution. The field of architectural lighting design concerns itself with designing lighting systems to serve human needs by taking into account:

  • The amount of light provided
  • The color of the light
  • The distribution of light within the space

The lighting system we have described so far can easily address the first two points but we need a way to define the distribution of light within the space. Light distribution is especially important for indoor scenes, for some types of outdoor scenes, and for road lighting. Figure 35 shows scenes where the light distribution is controlled by the artist. This type of distribution control is widely used when putting objects on display (museums, stores or galleries for instance).

Figure 35: Controlling the distribution of a point light

Photometric lights use a photometric profile to describe their intensity distribution. There are two commonly used formats, IES (Illuminating Engineering Society) and EULUMDAT (European Lumen Data format) but we will focus on the former. IES profiles are supported by many tools and engines, such as Unreal Engine 4, Frostbite, Renderman, Maya and Killzone. In addition, IES light profiles are commonly made available by bulbs and luminaires manufacturers (Philips offers an extensive array of IES files for download for instance). Photometric profiles are particularly useful when they measure a luminaire or light fixture, in which the light source is partially covered. The luminaire will block the light emitted in certain directions, thus shaping the light distribution.

Examples of real world luminaires that can be described by photometric profiles

An IES profile stores luminous intensity for various angles on a sphere around the measured light source. This spherical coordinate system is usually referred to as the photometric web, which can be visualized using specialized tools such as IESviewer. Figure 36 below shows the photometric web of the XArrow IES profile provided by Pixar for use with Renderman. This picture also shows a rendering in 3D space of the XArrow IES profile by our tool lightgen.

Figure 36: The XArrow IES profile rendered as a photometric web and as a point light in 3D space

The IES format is poorly documented and it is not uncommon to find syntax variations between files found on the Internet. The best resource to understand IES profiles is Ian Ashdown’s “Parsing the IESNA LM-63 photometric data file” document [Ashdown98]. Succinctly, an IES profile stores luminous intensities in candela at various angles around the light source. For each measured horizontal angle, a series of luminous intensities at different vertical angles is provided. It is however fairly common for measured light sources to be horizontally symmetrical. The XArrow profile shown above is a good example: intensities vary with vertical angles (vertical axis) but are symmetrical on the horizontal axis. The range of vertical angles in an IES profile is 0 to 180° and the range of horizontal angles is 0 to 360°.

Figure 37 shows the series of IES profiles provided by Pixar for Renderman, rendered using our lightgen tool.

Figure 37: Series of IES light profiles rendered with lightgen

IES profiles can be applied directly to any punctual light, point or spot. To do so, we must first process the IES profile and generate a photometric profile as a texture. For performance considerations, the photometric profile we generate is a 1D texture that represents the average luminous intensity for all horizontal angles at a specific vertical angle (i.e., each pixel represents a vertical angle). To truly represent a photometric light, we should use a 2D texture but since most lights are fully, or mostly, symmetrical on the horizontal plane, we can accept this approximation. The values stored in the texture are normalized by the inverse maximum intensity defined in the IES profile. This allows us to easily store the texture in any float format or, at the cost of a bit of precision, in a luminance 8-bit texture (grayscale PNG for instance). Storing normalized values also allows us to treat photometric profiles as a mask:

Photometric profile as a mask
The luminous intensity is defined by the artist by setting the luminous power of the light, as with any other punctual light. The artist defined intensity is divided by the intensity of the light computed from the IES profile. IES profiles contain a luminous intensity but it is only valid for a bare light bulb whereas the measured intensity values take into account the light fixture. To measure the intensity of the luminaire, instead of the bulb, we perform a Monte-Carlo integration of the unit sphere using the intensities from the profile4.

Photometric profile
The luminous intensity comes from the profile itself. All the values sampled from the 1D texture are simply multiplied by the maximum intensity. We also provide a multiplier for convenience.

The photometric profile can be applied at rendering time as a simple attenuation. The luminance equation 56 describes the photometric point light evaluation function.

Lout = f(v, l) (I / d²) ⟨n⋅l⟩ Ψ(l)   (56)

The term Ψ(l) is the photometric attenuation function. It depends on the light vector, but also on the direction of the light. Spot lights already possess a direction vector but we need to introduce one for photometric point lights as well.

The photometric attenuation function can be easily implemented in GLSL by adding a new attenuation factor to the implementation of punctual lights (listing 18). The modified implementation is shown in listing 19.

float getPhotometricAttenuation(vec3 posToLight, vec3 lightDir) {
    float cosTheta = dot(-posToLight, lightDir);
    float angle = acos(cosTheta) * (1.0 / PI);
    return texture2DLodEXT(lightProfileMap, vec2(angle, 0.0), 0.0).r;
}

vec3 evaluatePunctualLight() {
    vec3 posToLight = lightPosition - worldPosition;
    vec3 l = normalize(posToLight);
    float NoL = clamp(dot(n, l), 0.0, 1.0);

    float attenuation;
    attenuation  = getSquareFalloffAttenuation(posToLight, lightInvRadius);
    attenuation *= getSpotAngleAttenuation(l, lightDirection, innerAngle, outerAngle);
    attenuation *= getPhotometricAttenuation(l, lightDirection);

    vec3 luminance = (BSDF(v, l) * lightIntensity * attenuation * NoL) * lightColor;
    return luminance;
}
Listing 19: Implementation of attenuation from photometric profiles in GLSL

The light intensity is computed CPU-side (listing 20) and depends on whether the photometric profile is used as a mask.

float multiplier;
// Photometric profile used as a mask
if (photometricLight.isMasked()) {
    // The desired intensity is set by the artist
    // The integrated intensity comes from a Monte-Carlo
    // integration over the unit sphere around the luminaire
    multiplier = photometricLight.getDesiredIntensity() /
            photometricLight.getIntegratedIntensity();
} else {
    // Multiplier provided for convenience, set to 1.0 by default
    multiplier = photometricLight.getMultiplier();
}

// The max intensity in cd comes from the IES profile
float lightIntensity = photometricLight.getMaxIntensity() * multiplier;
Listing 20: Computing the intensity of a photometric light on the CPU
4 The XArrow profile declares a luminous intensity of 1,750 lm but a Monte-Carlo integration shows an intensity of only 350 lm.

Area lights

[TODO]

Lights parameterization

Similarly to the parameterization of the standard material model, our goal is to make lights parameterization intuitive and easy to use for artists and developers alike. In that spirit, we decided to separate the light color (or hue) from the light intensity. A light color will therefore be defined as a linear RGB color (or sRGB in the tools UI for convenience).

The full list of light parameters is presented in table 13.

Parameter Definition
Type Directional, point, spot or area
Direction Used for directional lights, spot lights, photometric point lights, and linear and tubular area lights (orientation)
Color The color of emitted light, as a linear RGB color. Can be specified as an sRGB color or a color temperature in the tools
Intensity The light’s brightness. The unit depends on the type of light
Falloff radius Maximum distance of influence
Inner angle Angle of the inner cone for spot lights, in degrees
Outer angle Angle of the outer cone for spot lights, in degrees
Length Length of the area light, used to create linear or tubular lights
Radius Radius of the area light, used to create spherical or tubular lights
Photometric profile Texture representing a photometric light profile, works only for punctual lights
Masked profile Boolean indicating whether the IES profile is used as a mask or not. When used as a mask, the light’s brightness will be multiplied by the ratio between the user specified intensity and the integrated IES profile intensity. When not used as a mask, the user specified intensity is ignored but the IES multiplier is used instead
Photometric multiplier Brightness multiplier for photometric lights (if IES as mask is turned off)
Table 13: Light types parameters

Note: to simplify the implementation, all luminous powers will be converted to luminous intensities (cd) before being sent to the shader. The conversion is light dependent and is explained in the previous sections.

Note: the light type can be inferred from other parameters (e.g. a point light has a length, radius, inner angle and outer angle of 0).
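
As mentioned in the first note above, the conversion from luminous power to luminous intensity happens on the CPU. A minimal sketch of that conversion for punctual lights, based on equations 47, 49 and 50, could look like the following (GLSL syntax for consistency with the other listings; the names and the focused flag are illustrative, not an actual API):

// Point light: equation 47
float pointLightIntensity(float luminousPower) {
    return luminousPower / (4.0 * PI);
}

// Spot light: equation 49 when illumination is coupled to the cone
// aperture (physically correct reflector), equation 50 when decoupled
float spotLightIntensity(float luminousPower, float outerAngle, bool focused) {
    return focused
        ? luminousPower / (2.0 * PI * (1.0 - cos(outerAngle * 0.5)))
        : luminousPower / PI;
}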

Color temperature

However, real-world artificial lights are often defined by their color temperature, measured in Kelvin (K). The color temperature of a light source is the temperature of an ideal black-body radiator that radiates light of comparable hue to that of the light source. For convenience, the tools should allow the artist to specify the hue of a light source as a color temperature (a meaningful range is 1,000 K to 12,500 K).

To compute RGB values from a temperature, we can use the Planckian locus, shown in figure 38. This locus is the path that the color of an incandescent black body takes in a chromaticity space as the body’s temperature changes.

Figure 38: The Planckian locus visualized on a CIE 1931 chromaticity diagram (source: Wikipedia)

The easiest way to compute RGB values from this locus is to use the formula described in [Krystek85]. Krystek’s algorithm (equation 57) works in the CIE 1960 (UCS) space, using the following formula where T is the desired temperature, and u and v the coordinates in UCS.

u(T) = (0.860117757 + 1.54118254×10⁻⁴ T + 1.28641212×10⁻⁷ T²) / (1 + 8.42420235×10⁻⁴ T + 7.08145163×10⁻⁷ T²)
v(T) = (0.317398726 + 4.22806245×10⁻⁵ T + 4.20481691×10⁻⁸ T²) / (1 − 2.89741816×10⁻⁵ T + 1.61456053×10⁻⁷ T²)   (57)

This approximation is accurate to roughly 9×10⁻⁵ in the range 1,000 K to 15,000 K. From the CIE 1960 space we can compute the coordinates in xyY space (CIE 1931), using the formula from equation 58.

x = 3u / (2u − 8v + 4)
y = 2v / (2u − 8v + 4)   (58)

The formulas above are valid for black body color temperatures, and therefore correlated color temperatures of standard illuminants. If we wish to compute the precise chromaticity coordinates of standard CIE illuminants in the D series we can use equation 59.

x = 0.244063 + 0.09911×10³/T + 2.9678×10⁶/T² − 4.6070×10⁹/T³   for 4,000 K ≤ T ≤ 7,000 K
x = 0.237040 + 0.24748×10³/T + 1.9018×10⁶/T² − 2.0064×10⁹/T³   for 7,000 K ≤ T ≤ 25,000 K
y = −3x² + 2.87x − 0.275   (59)

From the xyY space, we can then convert to the CIE XYZ space (equation 60).

X = x Y / y
Z = (1 − x − y) Y / y   (60)

For our needs, we will fix Y = 1. This allows us to convert from the XYZ space to linear RGB with a simple 3×3 matrix, as shown in equation 61.

[R G B]ᵀ = M⁻¹ [X Y Z]ᵀ   (61)

The transformation matrix M is calculated from the target RGB color space primaries. Equation 62 shows the conversion using the inverse matrix for the sRGB color space.

[R]   [ 3.2404542  −1.5371385  −0.4985314] [X]
[G] = [−0.9692660   1.8760108   0.0415560] [Y]   (62)
[B]   [ 0.0556434  −0.2040259   1.0572252] [Z]

The result of these operations is a linear RGB triplet in the sRGB color space. Since we care about the chromaticity of the results, we must apply a normalization step to avoid clamping values greater than 1.0, which would distort the resulting colors:

Ĉlinear = Clinear / max(Clinear)   (63)

We must finally apply the sRGB opto-electronic conversion function (OECF, shown in equation 64) to obtain a displayable value (the value should remain linear if passed to the renderer for shading).

CsRGB = 12.92 × Ĉlinear                       if Ĉlinear ≤ 0.0031308
CsRGB = 1.055 × Ĉlinear^(1/2.4) − 0.055       if Ĉlinear > 0.0031308   (64)
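
To illustrate the full conversion chain, equations 57 to 63 can be combined into a single function. The following sketch assumes Krystek’s approximation and the sRGB matrix from equation 62; the function name is hypothetical, and the OECF of equation 64 would only be applied for display purposes:

// Converts a correlated color temperature in Kelvin to a normalized
// linear sRGB color, valid for roughly 1,000 K to 15,000 K
vec3 colorTemperatureToLinearSRGB(float T) {
    // Krystek's approximation in CIE 1960 UCS space (equation 57)
    float T2 = T * T;
    float u = (0.860117757 + 1.54118254e-4 * T + 1.28641212e-7 * T2) /
              (1.0 + 8.42420235e-4 * T + 7.08145163e-7 * T2);
    float v = (0.317398726 + 4.22806245e-5 * T + 4.20481691e-8 * T2) /
              (1.0 - 2.89741816e-5 * T + 1.61456053e-7 * T2);

    // CIE 1960 UCS to CIE xyY (equation 58)
    float d = 2.0 * u - 8.0 * v + 4.0;
    float x = 3.0 * u / d;
    float y = 2.0 * v / d;

    // xyY to XYZ with Y = 1 (equation 60)
    vec3 XYZ = vec3(x / y, 1.0, (1.0 - x - y) / y);

    // XYZ to linear sRGB (equation 62), column-major constructor
    const mat3 XYZ_to_sRGB = mat3(
         3.2404542, -0.9692660,  0.0556434,
        -1.5371385,  1.8760108, -0.2040259,
        -0.4985314,  0.0415560,  1.0572252);
    vec3 rgb = XYZ_to_sRGB * XYZ;

    // Normalization to preserve chromaticity (equation 63)
    return rgb / max(rgb.r, max(rgb.g, rgb.b));
}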

For convenience, figure 39 shows the range of correlated color temperatures from 1,000 K to 12,500 K. All the colors used below assume CIE D65 as the white point (as is the case in the sRGB color space).

Figure 39: Scale of correlated color temperatures

Similarly, figure 40 shows the range of CIE standard illuminants series D from 1,000K to 12,500K.

Figure 40: Scale of CIE standard illuminants series D

For reference, figure 41 shows the range of correlated color temperatures without the normalization step presented in equation 63.

Figure 41: Unnormalized scale of correlated color temperatures

Table 14 presents the correlated color temperature of various common light sources as sRGB color swatches. These colors are relative to the D65 white point, so their perceived hue might vary based on your display’s white point. See What colour is the Sun? for more information.

Temperature (K) Light source Color
1,700-1,800 Match flame
1,850-1,930 Candle flame
2,000-3,000 Sun at sunrise/sunset
2,500-2,900 Household tungsten lightbulb
3,000 Tungsten lamp 1K
3,200-3,500 Quartz lights
3,200-3,700 Fluorescent lights
3,275 Tungsten lamp 2K
3,380 Tungsten lamp 5K, 10K
5,000-5,400 Sun at noon
5,500-6,500 Daylight (sun + sky)
5,500-6,500 Sun through clouds/haze
6,000-7,500 Overcast sky
6,500 RGB monitor white point
7,000-8,000 Shaded areas outdoors
8,000-10,000 Partly cloudy sky
Table 14: Normalized correlated color temperatures for common light sources

Image based lights

In real life, light comes from every direction, either directly from light sources or indirectly after bouncing off of objects in the environment, being partially absorbed in the process. In a way the whole environment around an object can be seen as a light source. Images, in particular cubemaps, are a great way to encode such an “environment light”. This is called Image Based Lighting (IBL) or sometimes Indirect Lighting.

There are limitations with image-based lighting. Obviously the environment image must be acquired somehow and as we’ll see below it needs to be pre-processed before it can be used for lighting. Typically, the environment image is acquired offline in the real world, or generated by the engine either offline or at run time; either way, local or distant probes are used.

These probes can be used to acquire the distant or local environment. In this document, we’re focusing on distant environment probes, where the light is assumed to come from infinitely far away (which means every point on the object’s surface uses the same environment map).

The whole environment contributes light to a given point on the object’s surface; this is called irradiance (EE). The resulting light bouncing off of the object is called radiance (LoutLout). Incident lighting must be applied consistently to the diffuse and specular parts of the BRDF.

The radiance Lout resulting from the interaction between an image based light’s (IBL) irradiance and a material model (BRDF) f(Θ)5 is computed as follows:

Lout(n, v, Θ) = ∫Ω f(l, v, Θ) L⊥(l) ⟨n⋅l⟩ dl   (65)

Note that here we’re looking at the behavior of the surface at macro level (not to be confused with the micro level equation), which is why it only depends on n and v. Essentially, we’re applying the BRDF to “point-lights” coming from all directions and encoded in the IBL.

IBL Types

There are four common types of IBLs used in modern rendering engines:

  • Distant light probes, used to capture lighting information at “infinity”, where parallax can be ignored. Distant probes typically contain the sky, distant landscape features or buildings, etc. They are either captured by the engine or acquired from a camera as high dynamic range images (HDRI).
  • Local light probes, used to capture a certain area of the world from a specific point of view. The capture is projected on a cube or sphere depending on the surrounding geometry. Local probes are more accurate than distant probes and are particularly useful to add local reflections to materials.
  • Planar reflections, used to capture reflections by rendering the scene mirrored by a plane. This technique works only for flat surfaces such as building floors, roads and water.
  • Screen space reflection, used to capture reflections based on the rendered scene (using the previous frame for instance) by ray-marching in the depth buffer. SSR gives great results but can be very expensive.

In addition we must distinguish between static and dynamic IBLs. Implementing a fully dynamic day/night cycle requires, for instance, recomputing the distant light probes dynamically6. Both planar and screen space reflections are inherently dynamic.

IBL Unit

As discussed previously in the direct lighting section, all our lights must use physical units. As such our IBLs will use the luminance unit cd/m², which is also the output unit of all our direct lighting equations. Using the luminance unit is straightforward for light probes captured by the engine (dynamically or statically offline).

High dynamic range images are a bit more delicate to handle however. Cameras do not record measured luminance but a device-dependent value that is only related to the original scene luminance. As such, we must provide artists with a multiplier that allows them to recover, or at the very least closely approximate, the original absolute luminance (a minimal sketch of applying such a multiplier follows the list below).

To properly reconstruct the luminance of an HDRI for IBL, artists cannot simply take photos of the environment; they must also record extra information:

  • Color calibration: using a gray card or a MacBeth ColorChecker
  • Camera settings: aperture, shutter and ISO
  • Luminance samples: using a spot/luminance meter
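
A minimal sketch of applying such a recovery multiplier when sampling the HDRI; iblLuminanceScale is a hypothetical, artist-provided value derived from the calibration data listed above:

// Recovers an absolute luminance value from an HDRI used as an IBL.
// iblLuminanceScale is a hypothetical multiplier, for instance a spot
// meter reading divided by the corresponding pixel value in the HDRI.
vec3 sampleDistantProbe(samplerCube environmentMap, vec3 direction, float iblLuminanceScale) {
    return textureCube(environmentMap, direction).rgb * iblLuminanceScale;
}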

[TODO] Measure and list common luminance values (clear sky, interior, etc.)

Processing light probes

We saw previously that the radiance of an IBL is computed by integrating over the surface’s hemisphere. Since this would obviously be too expensive to do in real-time, we must first pre-process our light probes to convert them into a format better suited for real-time interactions.

The sections below will discuss the techniques used to accelerate the evaluation of light probes:

  • Specular reflectance: pre-filtered importance sampling and split-sum approximation
  • Diffuse reflectance: irradiance map and spherical harmonics

Distant light probes

Diffuse BRDF integration

Using the lambertian BRDF7, we get the radiance:

fd(σ) = σ / π

Ld(n, σ) = ∫Ω fd(σ) L⊥(l) ⟨n⋅l⟩ dl = (σ/π) ∫Ω L⊥(l) ⟨n⋅l⟩ dl = (σ/π) Ed(n)

with the irradiance Ed(n) = ∫Ω L⊥(l) ⟨n⋅l⟩ dl

Or in the discrete domain:

Ed(n) ≡ Σ∀i∈image L⊥(si) ⟨n⋅si⟩ Ωs

Ωs is the solid angle8 associated to sample i.

The irradiance integral Ed can be trivially, albeit slowly9, precomputed and stored into a cubemap for efficient access at runtime. Typically, image is a cubemap or an equirectangular image. The term σ/π is independent of the IBL and is added at runtime to obtain the radiance.
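
As a sketch, applying the σ/π term at runtime to a pre-computed irradiance cubemap could look like this (irradianceEnvMap is assumed to store Ed(n), and PI is assumed to be defined as in the other listings):

vec3 diffuseIBL(vec3 n, vec3 diffuseColor) {
    // uniform samplerCube irradianceEnvMap, containing the pre-computed Ed(n)
    vec3 Ed = textureCube(irradianceEnvMap, n).rgb;
    // sigma/pi is the Lambertian BRDF, applied at runtime
    return diffuseColor * (1.0 / PI) * Ed;
}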

Figure 42: Image-based environment

Figure 43: Image-based irradiance map using the lambertian BRDF

However, the irradiance can also be approximated very closely by a decomposition into Spherical Harmonics (SH, described in more details in the Spherical Harmonics section) and calculated at runtime cheaply. It is usually best to avoid texture fetches on mobile and free-up a texture unit. Even if it is stored into a cubemap, it is orders of magnitude faster to pre-compute the integral using SH decomposition followed by a rendering.

SH decomposition is similar in concept to a Fourier transform: it expresses the signal over an orthonormal base in the frequency domain. The properties that interest us most are:

  • Very few coefficients are needed to encode ⟨cos θ⟩
  • Convolutions by a kernel that has a circular symmetry are very inexpensive and become products in SH space

In practice only 4 or 9 coefficients (i.e.: 2 or 3 bands) are enough for ⟨cos θ⟩, meaning we don’t need more for L⊥ either.

Figure 44: 3 bands (9 coefficients)

Figure 45: 2 bands (4 coefficients)

In practice we pre-convolve L⊥ with ⟨cos θ⟩ and pre-scale these coefficients by the basis scaling factors Klm so that the reconstruction code is as simple as possible in the shader:

vec3 irradianceSH(vec3 n) {
    // uniform vec3 sphericalHarmonics[9]
    // We can use only the first 2 bands for better performance
    return
          sphericalHarmonics[0]
        + sphericalHarmonics[1] * (n.y)
        + sphericalHarmonics[2] * (n.z)
        + sphericalHarmonics[3] * (n.x)
        + sphericalHarmonics[4] * (n.y * n.x)
        + sphericalHarmonics[5] * (n.y * n.z)
        + sphericalHarmonics[6] * (3.0 * n.z * n.z - 1.0)
        + sphericalHarmonics[7] * (n.z * n.x)
        + sphericalHarmonics[8] * (n.x * n.x - n.y * n.y);
}
Listing 21: GLSL code to reconstruct the irradiance from the pre-scaled SH

Note that with 2 bands, the computation above becomes a single 4×4 matrix-by-vector multiply.
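
A minimal sketch of that 2-band (4 coefficients) formulation; the irradianceMatrix uniform and its layout are illustrative, built on the CPU from the pre-scaled SH coefficients:

vec3 irradianceSH2Bands(vec3 n) {
    // uniform mat4 irradianceMatrix, with:
    //   column 0: vec4(sphericalHarmonics[3], 0.0)   // n.x term
    //   column 1: vec4(sphericalHarmonics[1], 0.0)   // n.y term
    //   column 2: vec4(sphericalHarmonics[2], 0.0)   // n.z term
    //   column 3: vec4(sphericalHarmonics[0], 1.0)   // constant term
    return (irradianceMatrix * vec4(n, 1.0)).rgb;
}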

Additionally, because of the pre-scaling by Klm, the SH coefficients can be thought of as colors; in particular sphericalHarmonics[0] is directly the average irradiance.

Specular BRDF integration

As we’ve seen above, the radiance LoutLout resulting from the interaction between an IBL’s irradiance and a BRDF is:

Lout(n, v, Θ) = ∫Ω f(l, v, Θ) L⊥(l) ⟨n⋅l⟩ dl   (66)

We recognize the convolution of L⊥ by f(l, v, Θ) ⟨n⋅l⟩, i.e.: the IBL is filtered by the BRDF. Plugging the expression of f in equation 66, we obtain:

Lout(n, v, Θ) = ∫Ω D(l, v, α) F(l, v, f0, f90) V(l, v, α) ⟨n⋅l⟩ L⊥(l) dl   (67)

This expression depends on v, α, f0 and f90 inside the integral, which makes its evaluation extremely costly and unsuitable for real-time use on mobile (even using pre-filtered importance sampling).

Simplifying the BRDF integration

In order to find a suitable approximation, let’s first look at the special case where L⊥(l) = L⊥constant:

Lout(n, v, Θ) = L⊥constant ∫Ω D(l, v, α) F(l, v, f0, f90) V(l, v, α) ⟨n⋅l⟩ dl   (68)

F(l, v, f0) = f0 + (f90 − f0) Fc(h)   with   Fc(h) = (1 − l⋅h)⁵
            = f0 (1 − Fc(h)) + f90 Fc(h)
DV(h, α) = D(l, v, α) V(l, v, α)

Plugging F into equation 68:

Lout(n, v, Θ) = L⊥constant [ f0 ∫Ω (1 − Fc(h)) DV(h, α) ⟨n⋅l⟩ dl + f90 ∫Ω Fc(h) DV(h, α) ⟨n⋅l⟩ dl ]

This expression can easily be precomputed in two 2D tables, as it depends only on n⋅v and α:

DFV1(n⋅v, α) = ∫Ω (1 − Fc(h)) DV(h, α) ⟨n⋅l⟩ dl
DFV2(n⋅v, α) = ∫Ω Fc(h) DV(h, α) ⟨n⋅l⟩ dl

Loutconstant(n, v, Θ) = L⊥constant [ f0 DFV1(n⋅v, α) + f90 DFV2(n⋅v, α) ]   (69)

This result is exact only when L⊥ is constant and known, or more precisely, it gives the radiance contributed by the average of the irradiance (i.e.: the D.C. term).

Now, let’s look at the general case, where LL⊥ isn’t constant:

Lout(n, v, Θ) = ∫Ω D(h, α) F(l, v, f0, f90) V(h, α) ⟨n⋅l⟩ L⊥(l) dl   (70)

Since we can’t compute this integral in real-time, we’re simply going to assumes:

  • v = n: this assumes we are looking at the surface in the direction of its normal
  • f90 = 0

Equation 70 simplifies greatly to:

LD(n, α) = ∫Ω F(l, n, f0) V(h, α) D(h, α) ⟨n⋅l⟩ L⊥(l) dl = f0 ∫Ω (1 − Fc(h)) V(h, α) D(h, α) ⟨n⋅l⟩ L⊥(l) dl

Now, let’s look at the behavior of this expression when L(l)=LconstantL⊥(l)=L⊥constant

LDconstant(n, α) = L⊥constant f0 ∫Ω (1 − Fc(h)) V(h, α) D(h, α) ⟨n⋅l⟩ dl   (71)

This scales L⊥constant (i.e.: the D.C. term of the irradiance) by a factor:

K(α) = f0 ∫Ω (1 − Fc(h)) V(h, α) D(h, α) ⟨n⋅l⟩ dl

By multiplying together equation 69 with L⊥constant = 1 and equation 71 normalized by K(α), we obtain:

Lout(n, v, α, f0, f90) = [ f0 DFV1(n⋅v, α) + f90 DFV2(n⋅v, α) ] × (1 / K(α)) LD(n, α)

This expression is exact when the irradiance is constant. In fact, it is exact for the D.C. component of the irradiance. It is also exact when v = n.

(1 / K(α)) LD(n, α) can easily be precomputed into a mip-mapped cubemap where each mipmap level contains the radiance for a different value of α. Also note that f0 being a constant, it disappears entirely from LD() and K(α).

Loutsimplified(n, α) = (1 / K(α)) LD(n, α)

Note that because we assumed that v = n, we’re losing the “stretchy reflections” at grazing angles.

In essence, we’re filtering (convolving) the IBL by a simplified BRDF that doesn’t affect the average irradiance (D.C. term of IBL) thanks to the normalization factor K(α)K(α), then we scale the result by the magnitude of the radiance corresponding to a constant irradiance of value 1.0:

radianceout = (BRDF ∗ L̄⊥) × (BRDFsimplified ∗ L⊥)

An interesting point to note is that if we simplified the BRDF a bit more by assuming no Fresnel and no shadowing/masking, i.e. F() = V() = 1, we would find the expression of Brian Karis’s “split-sum” approximation, and K(α) would match Karis’s empirical normalization factor exactly.

Discrete Domain

Recall that we have:

Lout(n, v, α, f0, f90) = [ f0 DFV1(n⋅v, α) + f90 DFV2(n⋅v, α) ] × (1 / K(α)) LD(n, α)
DFV1(n⋅v, α) = ∫Ω (1 − Fc(h)) D(l, v, α) V(l, v, α) ⟨n⋅l⟩ dl
DFV2(n⋅v, α) = ∫Ω Fc(h) D(l, v, α) V(l, v, α) ⟨n⋅l⟩ dl
LDv=n(n, α) = ∫Ω (1 − Fc(h)) V(h, α) D(h, α) ⟨n⋅l⟩ L⊥(l) dl
Kv=n(α) = ∫Ω (1 − Fc(h)) V(h, α) D(h, α) ⟨n⋅l⟩ dl

Converting the DFV and LD terms defined above into the discrete domain, using importance sampling (see [Importance Sampling for the IBL]):

DFV1(n⋅v, α) = (4/N) Σᵢᴺ (1 − Fc(hᵢ)) V(lᵢ, v, α) (⟨v⋅hᵢ⟩ / ⟨n⋅hᵢ⟩) ⟨n⋅lᵢ⟩
DFV2(n⋅v, α) = (4/N) Σᵢᴺ Fc(hᵢ) V(lᵢ, v, α) (⟨v⋅hᵢ⟩ / ⟨n⋅hᵢ⟩) ⟨n⋅lᵢ⟩
K(α) = (1/N) Σᵢᴺ (1 − Fc(hᵢ)) V(hᵢ, α) (D(hᵢ, α) / (D(hᵢ, α) J(hᵢ) ⟨n⋅hᵢ⟩)) ⟨n⋅lᵢ⟩ = (4/N) Σᵢᴺ (1 − Fc(hᵢ)) V(hᵢ, α) ⟨n⋅lᵢ⟩
LD(n, α) = (1 / K(α)) (4/N) Σᵢᴺ (1 − Fc(hᵢ)) V(hᵢ, α) L⊥(lᵢ) ⟨n⋅lᵢ⟩ = Σᵢᴺ (1 − Fc(hᵢ)) V(hᵢ, α) ⟨n⋅lᵢ⟩ L⊥(lᵢ) / Σᵢᴺ (1 − Fc(hᵢ)) V(hᵢ, α) ⟨n⋅lᵢ⟩

Both DFV1 and DFV2 can either be pre-calculated in a regular 2D texture indexed by (n⋅v, α) and sampled bilinearly, or computed at runtime using an analytic approximation. See sample code in the annex. The pre-calculated textures are shown in table 15.

Table 15: Visualization of the pre-calculated DFG1 and DFG2 terms, and of both packed into a single texture (Y axis: α, X axis: cos θ)

DFV1 and DFV2 are conveniently within the [0, 1] range, however 8-bit textures can cause precision problems. Unfortunately, on mobile, 16-bit or float textures are not ubiquitous and there are a limited number of samplers. Despite the attractive simplicity of the shader code using a texture, it might be better to use an analytic approximation. Note that since we only need to store two terms, OpenGL ES 3.0’s RG16F texture format is a good candidate.

Such an analytic approximation is described in [Karis14], itself based on [Lazarov13]. [Narkowicz14] is another interesting approximation. Table 16 presents a visual representation of these approximations.

Table 16: Visualization of the analytic approximations of DFG1 and DFG2 (Y axis: α, X axis: cos θ)

The LD term visualized

(Renderings of the LD term for α = 0.0, 0.2, 0.4, 0.6 and 0.8)

Indirect specular and indirect diffuse components visualized

Figure 46 shows how indirect lighting interacts with dielectrics and conductors. Direct lighting was removed for illustration purposes.

Figure 46: Dielectric and conductor materials of varying roughness under image based lighting

IBL evaluation implementation

Listing 22 presents a GLSL implementation to evaluate the IBL, using the various textures described in the previous sections.

vec3 ibl(vec3 n, vec3 v, vec3 diffuseColor, vec3 f0, float roughness) {
    // diffuse indirect, sampled along the surface normal
    vec3 Ld = textureCube(irradianceEnvMap, n).rgb * diffuseColor;
    // specular indirect, sampled along the reflection vector
    vec3 r = reflect(-v, n);
    vec3 Lld = textureCube(prefilteredEnvMap, r, computeLODFromRoughness(roughness)).rgb;
    vec2 Ldfg = texture2D(dfgLut, vec2(dot(n, v), roughness * roughness)).xy;
    vec3 Lr = (f0 * Ldfg.x + Ldfg.y) * Lld;
    return Ld + Lr;
}
Listing 22: GLSL implementation of image based lighting evaluation
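
Listing 22 relies on a computeLODFromRoughness() helper that maps the roughness to a mip level of the pre-filtered cubemap. A minimal sketch, assuming the linear mapping used in listing 23 (MAX_MIP_LEVEL is an illustrative constant, e.g. 8.0 for a 256×256 cubemap with 9 mip levels):

float computeLODFromRoughness(float roughness) {
    // linear mapping from roughness to the mip chain of the
    // pre-filtered environment map
    return roughness * MAX_MIP_LEVEL;
}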

We can however save a couple of texture lookups by using [Spherical Harmonics] instead of an irradiance cubemap and the analytical approximation of the DFG LUT, as shown in listing 23.

vec3 irradianceSH(vec3 n) {
    // uniform vec3 sphericalHarmonics[9]
    // We can use only the first 2 bands for better performance
    return
          sphericalHarmonics[0]
        + sphericalHarmonics[1] * (n.y)
        + sphericalHarmonics[2] * (n.z)
        + sphericalHarmonics[3] * (n.x)
        + sphericalHarmonics[4] * (n.y * n.x)
        + sphericalHarmonics[5] * (n.y * n.z)
        + sphericalHarmonics[6] * (3.0 * n.z * n.z - 1.0)
        + sphericalHarmonics[7] * (n.z * n.x)
        + sphericalHarmonics[8] * (n.x * n.x - n.y * n.y);
}

vec2 prefilteredDFG(float NoV, float roughness) {
    // Karis' approximation based on Lazarov's
    const vec4 c0 = vec4(-1.0, -0.0275, -0.572,  0.022);
    const vec4 c1 = vec4( 1.0,  0.0425,  1.040, -0.040);
    vec4 r = roughness * c0 + c1;
    float a004 = min(r.x * r.x, exp2(-9.28 * NoV)) * r.x + r.y;
    return vec2(-1.04, 1.04) * a004 + r.zw;
    // Zioma's approximation based on Karis
    // return vec2(1.0, pow(1.0 - max(roughness, NoV), 3.0));
}

vec3 evaluateSpecularIBL(vec3 r, float roughness) {
    // This assumes a 256x256 cubemap, with 9 mip levels
    float lod = 8.0 * roughness;
    // decodeEnvironmentMap() either decodes RGBM or is a no-op if the
    // cubemap is stored in a float texture
    return decodeEnvironmentMap(textureCubeLodEXT(environmentMap, r, lod));
}

vec3 evaluateIBL(vec3 n, vec3 v, vec3 diffuseColor, vec3 f0, float roughness) {
    float NoV = max(dot(n, v), 0.0);
    vec3 r = reflect(-v, n);

    // Specular indirect
    vec3 indirectSpecular = evaluateSpecularIBL(r, roughness);
    vec2 env = prefilteredDFG(NoV, roughness);
    vec3 specularColor = f0 * env.x + env.y;

    // Diffuse indirect
    // We multiply by the Lambertian BRDF to compute radiance from irradiance
    // With the Disney BRDF we would have to remove the Fresnel term that
    // depends on NoL (it would be rolled into the SH)
    vec3 indirectDiffuse = max(irradianceSH(n), 0.0) * Fd_Lambert();

    // Indirect contribution
    return diffuseColor * indirectDiffuse + indirectSpecular * specularColor;
}
Listing 23: GLSL implementation of image based lighting evaluation
5 Θ represents the parameters of the material model f, i.e.: roughness, albedo and so on…
6 This can be done through blending of static probes or by spreading the workload over time
7 The Lambertian BRDF doesn’t depend on l, v or θ, so Ld(n, v, θ) ≡ Ld(n, σ)
8 Ωs can be approximated by 2π/(6⋅width⋅height) for a cubemap
9 O(12 n² m²), with n and m respectively the dimensions of the environment and the precomputed cubemap

Clear coat

When sampling the IBL, the clear coat layer is calculated as a second specular lobe. This specular lobe is oriented along the view direction since we cannot reasonably integrate over the hemisphere. Listing 24 demonstrates this approximation in practice. It also shows the energy conservation step. It is important to note that this second specular lobe is computed exactly the same way as the main specular lobe, using the same DFG approximation.

float Fc = F_Schlick(0.04, 1.0, shading_NoV) * clearCoat;
// base layer attenuation for energy compensation
iblDiffuse  *= 1.0 - Fc;
iblSpecular *= sq(1.0 - Fc);
iblSpecular += specularIBL(r, clearCoatRoughness) * Fc;
Listing 24: GLSL implementation of the clear coat specular lobe for image-based lighting

Anisotropy

[McAuley15] describes a technique called “bent reflection vector”, based on [Revie12]. The bent reflection vector is a rough approximation of anisotropic lighting but the alternative is to use importance sampling. This approximation is sufficiently cheap to compute and provides good results, as shown in figure 47 and figure 48.

Figure 47: Anisotropic indirect specular reflections using bent normals (left: roughness 0.3, right: roughness: 0.0; both: anisotropy 1.0)

Figure 48: Anisotropic reflections with varying roughness, metallicness, etc.

The implementation of this technique is straightforward, as demonstrated in listing 25.

vec3 anisotropicTangent = cross(bitangent, v);
vec3 anisotropicNormal = cross(anisotropicTangent, bitangent);
vec3 bentNormal = normalize(mix(n, anisotropicNormal, anisotropy));
vec3 r = reflect(-v, bentNormal);
Listing 25: GLSL implementation of the bent reflection vector

This technique can be made more useful by accepting negative anisotropy values, as shown in listing 26. When the anisotropy is negative, the highlights are not in the direction of the tangent, but in the direction of the bitangent instead.

vec3 anisotropicDirection = anisotropy >= 0.0 ? bitangent : tangent;
vec3 anisotropicTangent = cross(anisotropicDirection, v);
vec3 anisotropicNormal = cross(anisotropicTangent, anisotropicDirection);
vec3 bentNormal = normalize(mix(n, anisotropicNormal, anisotropy));
vec3 r = reflect(-v, bentNormal);
Listing 26: GLSL implementation of the bent reflection vector

Figure 49 demonstrates this modified implementation in practice.

Figure 49: Control of the anisotropy direction using positive (left) and negative (right) values

Subsurface

[TODO] Explain subsurface and IBL

Cloth

The IBL implementation for the cloth material model is more complicated than for the other material models. The main difference stems from the use of a different NDF (Ashikhmin vs height-correlated Smith GGX). As described in this section, we use the split-sum approximation to compute the DFG term of the BRDF when computing an IBL. Since this DFG term is based on the wrong NDF, we must find a new approximation.

The approximation we use is purely analytical and was manually fitted against a Monte-Carlo reference shown in figure 50 (using 2²² samples per data point instead of importance sampling). This visual comparison shows the significant impact the cloth NDF has on the BRDF. Using the standard DFG term would result in widely incorrect results.

Figure 50: DFG LUT (left) vs cloth DFG LUT (right)

Manual fitting was performed in Mathematica (as shown in figure 51) and while not perfect, the analytical approximation strikes a decent balance between correctness and runtime cost.

Figure 51: Manual fitting of the DFG term for the cloth NDF

Listing 27 shows the implementation of the DFG approximation. We also provide the Mathematica notebook containing the formulas of our approximation as well as comparisons to the reference LUT.

vec2 PrefilteredDFG_Cloth(float roughness, float NoV) {
    const vec4 c0 = vec4(0.24,  0.93, 0.01, 0.20);
    const vec4 c1 = vec4(2.00, -1.30, 0.40, 0.03);

    float s = 1.0 - NoV;
    float e = s - c0.y;
    float g = c0.x * exp2(-(e * e) / (2.0 * c0.z)) + s * c0.w;
    float n = roughness * c1.x + c1.y;
    float r = max(1.0 - n * n, c1.z) * g;

    return vec2(r, r * c1.w);
}
Listing 27: GLSL implementation of the DFG approximation for the cloth NDF

The remainder of the image-based lighting implementation follows the same steps as the implementation of regular lights, including the optional subsurface scattering term and its wrap diffuse component. The main difference lies in yet another approximation using the largest component of f0 to compute the Fresnel component as a scalar instead of a vector. Just as with the clear coat IBL implementation, we cannot integrate over the hemisphere and use the view direction as the dominant light direction to compute the Fresnel term and the wrap diffuse component.

float diffuse = Fd_Lambert() * ambientOcclusion;
#if defined(SHADING_MODEL_CLOTH)
diffuse *= (1.0 - F_Schlick(max3(f0), 1.0, NoV));
#if defined(MATERIAL_HAS_SUBSURFACE_COLOR)
diffuse *= saturate((NoV + 0.5) / 2.25);
#endif
#endif

vec3 indirectDiffuse = irradianceIBL(n) * diffuse;
#if defined(SHADING_MODEL_CLOTH) && defined(MATERIAL_HAS_SUBSURFACE_COLOR)
indirectDiffuse *= saturate(subsurfaceColor + NoV);
#endif

vec3 ibl = diffuseColor * indirectDiffuse + indirectSpecular * specularColor;
Listing 28: GLSL implementation of indirect lighting for the cloth material model

Static lighting

[TODO] Spherical-harmonics or spherical-gaussian lightmaps, irradiance volumes, PRT?…

Transparency and translucency lighting

Transparent and translucent materials are important to add realism and correctness to scenes. Filament must therefore provide lighting models for both types of materials to allow artists to properly recreate realistic scenes. Translucency can also be used effectively in a number of non-realistic settings.

Transparency

To properly light a transparent surface, we must first understand how the material’s opacity is applied. Observe a window and you will see that the diffuse reflectance is transparent. On the other hand, the brighter the specular reflectance, the less opaque the window appears. This effect can be seen in figure 52: the scene is properly reflected onto the glass surfaces but the specular highlight of the sun is bright enough to appear opaque.

Figure 52: Example of a complex object where lit surface transparency plays an important role

Figure 53: Example of a complex object where lit surface transparency plays an important role

To properly implement opacity, we will use the premultiplied alpha format. Given a desired opacity noted αopacity and a diffuse color σ (linear, unpremultiplied), we can compute the effective opacity of a fragment.

color = σ ⋅ αopacity
opacity = αopacity

The physical interpretation is that the RGB components of the source color define how much light is emitted by the pixel, whereas the alpha component defines how much of the light behind the pixel is blocked by said pixel. We must therefore use the following blending functions:

Blendsrc = 1
Blenddst = 1 − srcα

The GLSL implementation of these equations is presented in listing 29.

// baseColor has already been premultiplied
vec4 shadeSurface(vec4 baseColor) {
    float alpha = baseColor.a;

    vec3 diffuseColor = evaluateDiffuseLighting();
    vec3 specularColor = evaluateSpecularLighting();    

    return vec4(diffuseColor + specularColor, alpha);
}
Listing 29: Implementation of lit surface transparency in GLSL
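
Listing 29 assumes the base color has already been premultiplied. A minimal sketch of that step, applied before shading (the function name is illustrative):

vec4 premultiplyBaseColor(vec4 baseColor) {
    // scale the emitted light by the opacity, leave the coverage untouched
    return vec4(baseColor.rgb * baseColor.a, baseColor.a);
}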

Translucency

Translucent materials can be divided into two categories:

  • Surface translucency
  • Volume translucency

Volume translucency is useful to light particle systems, for instance clouds or smoke. Surface translucency can be used to imitate materials with transmitted scattering such as wax, marble, skin, etc.

[TODO] Surface translucency (BRDF+BTDF, BSSRDF)

Figure 54: Front-lit translucent object (left) and back-lit translucent object (right), using approximated BTDF and BSSRDF. Model: Lucy from the Stanford University Computer Graphics Laboratory

Occlusion

Occlusion is an important darkening factor used to recreate shadowing at various scales:

Small scale: micro-occlusion used to handle creases, cracks and cavities.
Medium scale: macro-occlusion used to handle occlusion by an object’s own geometry or by geometry baked in normal maps (bricks, etc.).
Large scale: occlusion coming from contact between objects, or from an object’s own geometry.

We currently ignore micro-occlusion, which is often exposed in tools and engines in the form of a “cavity map”. Sébastien Lagarde offers an interesting discussion in [Lagarde14] on how micro-occlusion is handled in Frostbite: diffuse micro-occlusion is pre-baked in diffuse maps and specular micro-occlusion is pre-baked in reflectance textures. In our system, micro-occlusion can simply be baked in the base color map. This must be done knowing that the specular light will not be affected by micro-occlusion.

Medium scale ambient occlusion is pre-baked in ambient occlusion maps, exposed as a material parameter, as seen in the material parameterization section earlier.

Large scale ambient occlusion is often computed using screen-space techniques such as SSAO (screen-space ambient occlusion), HBAO (horizon based ambient occlusion), etc. Note that these techniques can also contribute to medium scale ambient occlusion when the camera is close enough to surfaces.

Note: to prevent over darkening when using both medium and large scale occlusion, Lagarde recommends using min(AOmedium, AOlarge).
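
A minimal sketch of that combination in GLSL; ssaoTexture and screenUV are illustrative names for the large scale occlusion term:

float materialAO = texture2D(aoMap, outUV).r;
float largeScaleAO = texture2D(ssaoTexture, screenUV).r;
// min() prevents over darkening when both terms overlap
float occlusion = min(materialAO, largeScaleAO);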

Diffuse occlusion

Morgan McGuire formalizes ambient occlusion in the context of physically-based rendering in [McGuire10]. In his formulation, McGuire defines an ambient illumination function L_a, which in our case is encoded with spherical harmonics. He also defines a visibility function V, with V(l) = 1 if there is an unoccluded line of sight from the surface in direction l, and 0 otherwise.

With these two functions, the ambient term of the rendering equation can be expressed as shown in equation 72.

$$L(l, v) = \int_\Omega f(l, v) L_a(l) V(l) \langle n \cdot l \rangle dl \tag{72}$$

This expression can be approximated by separating the visibility term from the illumination function, as shown in equation 73.

$$L(l, v) \approx \left(\pi \int_\Omega f(l, v) L_a(l) dl\right)\left(\frac{1}{\pi} \int_\Omega V(l) \langle n \cdot l \rangle dl\right) \tag{73}$$

This approximation is only exact when the distant light L_a is constant and f is a Lambertian term. McGuire states however that this approximation is reasonable if both functions are relatively smooth over most of the sphere. This happens to be the case with a distant light probe (IBL).

The left term of this approximation is the pre-computed diffuse component of our IBL. The right term is a scalar factor between 0 and 1 that indicates the fractional accessibility of a point. Its opposite is the diffuse ambient occlusion term, shown in equation 74.

$$AO = 1 - \frac{1}{\pi} \int_\Omega V(l) \langle n \cdot l \rangle dl \tag{74}$$

Since we use a pre-computed diffuse term, we cannot compute the exact accessibility of shaded points at runtime. To compensate for this lack of information in our precomputed term, we partially reconstruct incident lighting by applying an ambient occlusion factor specific to the surface’s material at the shaded point.

In practice, baked ambient occlusion is stored as a grayscale texture which can often be lower resolution than other textures (base color or normals for instance). It is important to note that the ambient occlusion property of our material model intends to recreate macro-level diffuse ambient occlusion. While this approximation is not physically correct, it constitutes an acceptable tradeoff of quality vs performance.

Figure 55 shows two different materials without and with diffuse ambient occlusion. Notice how the material ambient occlusion is used to recreate the natural shadowing that occurs between the different tiles. Without ambient occlusion, both materials appear too flat.

Figure 55: Comparison of materials without diffuse ambient occlusion (left) and with (right)

Applying baked diffuse ambient occlusion in a GLSL shader is straightforward, as shown in listing 30.

// diffuse indirect
vec3 indirectDiffuse = max(irradianceSH(n), 0.0) * Fd_Lambert();
// ambient occlusion
indirectDiffuse *= texture2D(aoMap, outUV).r;
Listing 30: Implementation of baked diffuse ambient occlusion in GLSL

Note how the ambient occlusion term is only applied to indirect lighting.

Specular occlusion

Specular micro-occlusion can be derived from f0, itself derived from the diffuse color. The derivation is based on the knowledge that no real-world material has a reflectance lower than 2%. Values in the 0-2% range can therefore be treated as pre-baked specular occlusion used to smoothly extinguish the Fresnel term.

// f0 is a vec3; values below 2% reflectance extinguish the Fresnel term
float f90 = clamp(dot(f0, vec3(50.0 * 0.33)), 0.0, 1.0);
// Alternative, cheap luminance approximation:
// float f90 = clamp(50.0 * f0.g, 0.0, 1.0);
Listing 31: Pre-baked specular occlusion in GLSL

The derivations mentioned earlier for ambient occlusion assume Lambertian surfaces and are only valid for indirect diffuse lighting. The lack of information about surface accessibility is particularly harmful to the reconstruction of indirect specular lighting. It usually manifests itself as light leaks.

Sébastien Lagarde proposes an empirical approach to derive the specular occlusion term from the diffuse occlusion term in [Lagarde14]. The result does not have any physical basis but produces visually pleasant results. The goal of his formulation is to return the diffuse occlusion term unmodified for rough surfaces. For smooth surfaces, the formulation, implemented in listing 32, reduces the influence of occlusion at normal incidence and increases it at grazing angles.

float computeSpecularAO(float NoV, float ao, float roughness) {
    return clamp(pow(NoV + ao, exp2(-16.0 * roughness - 1.0)) - 1.0 + ao, 0.0, 1.0);
}

// specular indirect
vec3 indirectSpecular = evaluateSpecularIBL(r, roughness);
// ambient occlusion
float ao = texture2D(aoMap, outUV).r;
indirectSpecular *= computeSpecularAO(NoV, ao, roughness);
Listing 32: Implementation of Lagarde’s specular occlusion factor in GLSL

Note how the specular occlusion factor is only applied to indirect lighting.

Horizon specular occlusion

When computing the specular IBL contribution for a surface that uses a normal map, it is possible to end up with a reflection vector pointing towards the surface. If this reflection vector is used for shading directly, the surface will be lit in places where it should not be lit (assuming opaque surfaces). This is another occurrence of light leaking that can easily be minimized using a simple technique described by Jeff Russell [Russell15].

The key idea is to occlude light coming from behind the surface. This can easily be achieved since a negative dot product between the reflected vector and the surface’s normal indicates a reflection vector pointing towards the surface. Our implementation shown in listing 33 is similar to Russell’s, albeit without the artist controlled horizon fading factor.

// specular indirect
vec3 indirectSpecular = evaluateSpecularIBL(r, roughness);

// horizon occlusion with falloff, should be computed for direct specular too
float horizon = min(1.0 + dot(r, n), 1.0);
indirectSpecular *= horizon * horizon;
Listing 33: Implementation of horizon specular occlusion in GLSL

Horizon specular occlusion fading is cheap but can easily be omitted to improve performance as needed.

Normal mapping

There are two common use cases of normal maps: replacing high-poly meshes with low-poly meshes (using a base map) and adding surface details (using a detail map).

Let’s imagine that we want to render a piece of furniture covered in tufted leather. Modeling the geometry to accurately represent the tufted pattern would require too many triangles so we instead bake a high-poly mesh into a normal map. Once the base map is applied to a simplified mesh (in this case, a quad), we get the result in figure 56. The base map used to create this effect is shown in figure 57.

Figure 56: Low-poly mesh without normal mapping (left) and with (right)

Figure 57: Normal map used as a base map

A simple problem arises if we now want to combine this base map with a second normal map. For instance, let’s use the detail map shown in figure 58 to add cracks in the leather.

Figure 58: Normal map used as a detail map

Given the nature of normal maps (XYZ components stored in tangent space), it is fairly obvious that naive approaches such as linear or overlay blending cannot work. We will use two more advanced techniques: a mathematically correct one and an approximation suitable for real-time shading.

Reoriented normal mapping

Colin Barré-Brisebois and Stephen Hill propose in [Hill12] a mathematically sound solution called Reoriented Normal Mapping, which consists in rotating the basis of the detail map onto the normal from the base map. This technique relies on the shortest arc quaternion to apply the rotation, which greatly simplifies thanks to the properties of the tangent space.

Following the simplifications described in [Hill12], we can produce the GLSL implementation shown in listing 34.

vec3 t = texture(baseMap,   uv).xyz * vec3( 2.0,  2.0, 2.0) + vec3(-1.0, -1.0,  0.0);
vec3 u = texture(detailMap, uv).xyz * vec3(-2.0, -2.0, 2.0) + vec3( 1.0,  1.0, -1.0);
vec3 r = normalize(t * dot(t, u) - u * t.z);
return r;
Listing 34: Implementation of reoriented normal mapping in GLSL

Note that this implementation assumes that the normals are stored uncompressed and in the [0..1] range in the source textures.

The normalization step is not strictly necessary and can be skipped if the technique is used at runtime. If so, the computation of r becomes t * dot(t, u) / t.z - u.
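
As a sketch, the runtime variant described above can be written as follows, assuming t and u are the unpacked base and detail normals from listing 34 and that t.z is non-zero:

// Reoriented normal mapping without the final normalization
vec3 blendNormalsUnnormalized(vec3 t, vec3 u) {
    return t * dot(t, u) / t.z - u;
}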

Since this technique is slightly more expensive than the one described below, we will mostly use it offline. We therefore provide a simple offline tool to combine two normal maps. Figure 59 presents the output of the tool with the base map and the detail map shown previously.

Figure 59: Blended normal and detail map (left) and resulting render when combined with a diffuse map (right)

UDN blending

The technique called UDN blending, described in [Hill12], is a variant of the partial derivative blending technique. Its main advantage is the low number of shader instructions it requires (see listing 35). While it leads to a reduction in details over flat areas, UDN blending is interesting if blending must be performed at runtime.

vec3 t = texture(baseMap,   uv).xyz * 2.0 - 1.0;
vec3 u = texture(detailMap, uv).xyz * 2.0 - 1.0;
vec3 r = normalize(vec3(t.xy + u.xy, t.z));
return r;
Listing 35: Implementation of UDN blending in GLSL

The results are visually close to Reoriented Normal Mapping but a careful comparison of the data shows that UDN is indeed less correct. Figure 60 presents the result of the UDN blending approach using the same source data as in the previous examples.

Figure 60: Blended normal and detail map using the UDN blending technique

Volumetric effects

Exponential height fog

Figure 61: Example of directional in-scattering with exponential height fog

Figure 62: Example of directional in-scattering with exponential height fog

Anti-aliasing

[TODO] MSAA, geometric AA (normals and roughness), shader anti-aliasing (object-space shading?)

Imaging pipeline

The lighting section of this document describes how light interacts with surfaces in the scene in a physically-based manner. To achieve plausible results, we must go a step further and consider the transformations necessary to convert the scene luminance, as computed by our lighting equations, into displayable pixel values.

The series of transformations we are going to use form the following imaging pipeline:

Scene luminance (HDR) → Normalized luminance → White balance → Color grading → Tone mapping → OETF → Pixel value (LDR)

Note: the OETF step is the application of the opto-electronic transfer function of the target color space. For clarity this diagram does not include post-processing steps such as vignette, bloom, etc. These effects will be discussed separately.

[TODO] Color spaces (ACES, sRGB, Rec. 709, Rec. 2020, etc.), gamma/linear, etc.

Physically-based camera

The first step in the image transformation process is to use a physically-based camera to properly expose the scene’s outgoing luminance.

Exposure settings

Because we use photometric units throughout the lighting pipeline, the light reaching the camera is an energy expressed in luminance L, in cd·m⁻². Light incident to the camera sensor can cover a large range of values, from 10⁻⁵ cd·m⁻² for starlight to 10⁹ cd·m⁻² for the sun. Since we obviously cannot manipulate, let alone record, such a large range of values, we need to remap them.

This range remapping is done in a camera by exposing the sensor for a certain time. To maximize the use of the limited range of the sensor, the scene’s light range is centered around the “middle grey”, a value halfway between black and white. Exposure is therefore achieved by manipulating, either manually or automatically, 3 settings:

  • Aperture
  • Shutter speed
  • Sensitivity (also called gain)

Aperture
Noted N and expressed in f-stops (ƒ), this setting controls how open or closed the camera system’s aperture is. Since an f-stop indicates the ratio of the lens’ focal length to the diameter of the entrance pupil, high values (ƒ/16) indicate a small aperture and small values (ƒ/1.4) indicate a wide aperture. In addition to exposure, the aperture setting controls the depth of field.

Shutter speed
Noted t and expressed in seconds (s), this setting controls how long the aperture remains open (it also controls the timing of the sensor shutter(s), whether electronic or mechanical). In addition to exposure, the shutter speed controls motion blur.

Sensitivity
Noted S and expressed in ISO, this setting controls how the light reaching the sensor is quantized. Because of its unit, this setting is often referred to as simply the “ISO” or “ISO setting”. In addition to exposure, the sensitivity setting controls the amount of noise.

Exposure value

Since referring to these 3 settings in our equations would be unwieldy, we instead summarize the “exposure triangle” by an exposure value, noted EV¹⁰.

The EV is expressed in a base-2 logarithmic scale, with a difference of 1 EV called a stop. One positive stop (+1 EV) corresponds to a factor of two in luminance and one negative stop (−1 EV) corresponds to a factor of half in luminance.

Equation 7575 shows the formal definition of EV.

$$EV = \log_2\left(\frac{N^2}{t}\right) \tag{75}$$

Note that this definition is only a function of the aperture and shutter speed, not the sensitivity. An exposure value is by convention defined for ISO 100, or EV100, and because we wish to work with this convention, we need to be able to express EV100 as a function of the sensitivity.

Since we know that EV is a base-2 logarithmic scale in which each stop increases or decreases the brightness by a factor of 2, we can formally define EVS, the exposure value at a given sensitivity (equation 76).

$$EV_S = EV_{100} + \log_2\left(\frac{S}{100}\right) \tag{76}$$

Calculating EV100 as a function of the 3 camera settings is trivial, as shown in equation 77.

$$EV_{100} = EV_S - \log_2\left(\frac{S}{100}\right) = \log_2\left(\frac{N^2}{t}\right) - \log_2\left(\frac{S}{100}\right) \tag{77}$$

Note that the operator (photographer, etc.) can achieve the same exposure (and therefore EV) with several combinations of aperture, shutter speed and sensitivity. This allows some artistic control in the process (depth of field vs motion blur vs grain).

10 We assume a digital sensor, which means we don’t need to take reciprocity failure into account

Exposure value and luminance

A camera, similar to a spot meter, is able to measure the average luminance of a scene and convert it into EV to achieve automatic exposure, or at the very least offer the user exposure guidance.

It is possible to define EV as a function of the scene luminance L, given a per-device calibration constant K (equation 78).

$$EV = \log_2\left(\frac{L \times S}{K}\right) \tag{78}$$

That constant K is the reflected-light meter constant, which varies between manufacturers. We could find two common values for this constant: 12.5, used by Canon, Nikon and Sekonic, and 14, used by Pentax and Minolta. Given the wide availability of Canon and Nikon cameras, as well as our own usage of Sekonic light meters, we will choose to use K = 12.5.

Since we want to work with EV100, we can substitute K and S in equation 78 to obtain equation 79.

$$EV_{100} = \log_2\left(L\frac{100}{12.5}\right) \tag{79}$$

Given this relationship, it would be possible to implement automatic exposure in our engine by first measuring the average luminance of a frame. An easy way to achieve this is to simply downsample a luminance buffer down to 1 pixel and read the remaining value. This technique is unfortunately rarely stable and can easily be affected by extreme values. Many games use a different approach which consists in using a luminance histogram to remove extreme values.

For validation and testing purposes, the luminance can be computed from a given EV:

$$L = 2^{EV_{100}} \times \frac{12.5}{100} = 2^{EV_{100} - 3} \tag{80}$$
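
Equations 79 and 80 translate directly into GLSL. The following sketch uses assumed function names, with K = 12.5 and S = 100:

// EV100 from the average scene luminance (equation 79)
float luminanceToEV100(float luminance) {
    return log2(luminance * (100.0 / 12.5));
}

// Luminance from a given EV100 (equation 80)
float ev100ToLuminance(float ev100) {
    return exp2(ev100 - 3.0);
}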

Exposure value and illuminance

It is possible to define EV as a function of the illuminance E, given a per-device calibration constant C:

$$EV = \log_2\left(\frac{E \times S}{C}\right) \tag{81}$$

The constant C is the incident-light meter constant, which varies between manufacturers and/or types of sensors. There are two common types of sensors: flat and hemispherical. For flat sensors, a common value is 250. With hemispherical sensors, we could find two common values: 320, used by Minolta, and 340, used by Sekonic.

Since we want to work with EV100, we can substitute S in equation 81 to obtain equation 82.

$$EV_{100} = \log_2\left(E\frac{100}{C}\right) \tag{82}$$

The illuminance can then be computed from a given EV. For a flat sensor with C = 250 we obtain equation 83.

$$E = 2^{EV_{100}} \times 2.5 \tag{83}$$

For a hemispherical sensor with C = 340 we obtain equation 84.

$$E = 2^{EV_{100}} \times 3.4 \tag{84}$$
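
Equations 83 and 84 can be sketched in GLSL as follows (function names are assumptions):

// Illuminance in lux from EV100, flat sensor (C = 250, equation 83)
float ev100ToIlluminanceFlat(float ev100) {
    return exp2(ev100) * 2.5;
}

// Illuminance in lux from EV100, hemispherical sensor (C = 340, equation 84)
float ev100ToIlluminanceHemispherical(float ev100) {
    return exp2(ev100) * 3.4;
}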

Exposure compensation

Even though an exposure value actually indicates combinations of camera settings, it is often used by photographers to describe light intensity. This is why cameras let photographers apply an exposure compensation to over or under-expose an image. This setting can be used for artistic control but also to achieve proper exposure (snow, for instance, would otherwise be exposed as 18% middle-grey).

Applying an exposure compensation EC is as simple as adding an offset to the exposure value, as shown in equation 85.

$$EV_{100}' = EV_{100} - EC \tag{85}$$

This equation uses a negative sign because we are using EC in f-stops to adjust the final exposure. Increasing the EV is akin to closing down the aperture of the lens (or reducing shutter speed or reducing sensitivity). A higher EV will produce darker images.
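
In code, exposure compensation is a single subtraction applied before computing the exposure normalization factor. A sketch, using the exposureSettings() helper from listing 36 below (exposureCompensation, in stops, is an assumed parameter):

float ev100 = exposureSettings(aperture, shutterSpeed, sensitivity);
// Apply exposure compensation (equation 85)
ev100 -= exposureCompensation;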

Exposure

To convert the scene luminance into normalized luminance, we must use the photometric exposure (or luminous exposure), the amount of scene luminance that reaches the camera sensor. The photometric exposure, expressed in lux seconds and noted H, is given by equation 86.

$$H = \frac{q \cdot t}{N^2} L \tag{86}$$

Where L is the luminance of the scene, t the shutter speed, N the aperture and q the lens and vignetting attenuation (typically q = 0.65¹¹). This definition does not take the sensor sensitivity into account. To do so, we must use one of the three ways to relate photometric exposure and sensitivity: saturation-based speed, noise-based speed and standard output sensitivity.

We choose the saturation-based speed relation, which gives us H_sat, the maximum possible exposure that does not lead to clipped or bloomed camera output (equation 87).

$$H_{sat} = \frac{78}{S_{sat}} \tag{87}$$

We combine equations 87 and 86 in equation 88 to compute the maximum luminance L_max that will saturate the sensor given the exposure settings S, N and t.

$$L_{max} = \frac{N^2}{q \cdot t} \frac{78}{S} \tag{88}$$

This maximum luminance can then be used to normalize incident luminance LL as shown in equation 8989.

$$L' = L \frac{1}{L_{max}} \tag{89}$$

L_max can be simplified using equation 75, S = 100 and q = 0.65:

$$L_{max} = \frac{N^2}{t} \frac{78}{q \cdot S}$$
$$L_{max} = 2^{EV_{100}} \frac{78}{q \cdot S}$$
$$L_{max} = 2^{EV_{100}} \times 1.2$$

Listing 36 shows how the exposure term can be applied directly to the pixel color computed in a fragment shader.

// Computes the camera's EV100 from exposure settings
// aperture in f-stops
// shutterSpeed in seconds
// sensitivity in ISO
float exposureSettings(float aperture, float shutterSpeed, float sensitivity) {
    return log2((aperture * aperture) / shutterSpeed * 100.0 / sensitivity);
}

// Computes the exposure normalization factor from
// the camera's EV100
float exposure(float ev100) {
    // The normalization factor is 1 / Lmax, with Lmax = 2^EV100 * 1.2
    return 1.0 / (pow(2.0, ev100) * 1.2);
}

float ev100 = exposureSettings(aperture, shutterSpeed, sensitivity);
float exposure = exposure(ev100);

vec4 color = evaluateLighting();
color.rgb *= exposure;
Listing 36: Implementation of exposure in GLSL

In practice the exposure factor can be pre-computed on the CPU to save shader instructions.

11 See Film Speed, Measurements and calculations on Wikipedia (https://en.wikipedia.org/wiki/Film_speed)

Automatic exposure

The process described above relies on artists setting the camera exposure settings manually. This can prove cumbersome in practice since camera movements and/or dynamic effects can greatly affect the scene’s luminance. Since we know how to compute the exposure value from a given luminance (see section 7.1.2.1), we can transform our camera into a spot meter. To do so, we need to measure the scene’s luminance.

There are two common techniques used to measure the scene’s luminance:

  • Luminance downsampling, by downsampling the previous frame successively until obtaining a 1×1 log luminance buffer that can be read on the CPU (this could also be achieved using a compute shader). The result is the average log luminance of the scene. The first downsampling pass must extract the luminance of each pixel. This technique can be unstable and its output should be smoothed over time.
  • Using a luminance histogram, to find the average log luminance. This technique has an advantage over the previous one as it makes it possible to ignore extreme values and offers more stable results.

Note that both methods will find the average luminance after multiplication by the albedo. This is not entirely correct but the alternative is to keep a luminance buffer that contains the luminance of each pixel before multiplication by the surface albedo. This is expensive both computationally and memory-wise.
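
As an illustration of the first technique, the initial downsampling pass could store the log2 luminance of each pixel so that successive passes average in log space. This is a sketch assuming Rec. 709 luminance weights:

// Extract the log2 luminance of a pixel for the first downsampling pass
float pixelLogLuminance(vec3 color) {
    float luminance = dot(color, vec3(0.2126, 0.7152, 0.0722)); // Rec. 709 weights
    return log2(max(luminance, 1e-5)); // clamp to avoid log2(0)
}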

These two techniques also limit the metering system to average metering, where each pixel has the same influence (or weight) over the final exposure. Cameras typically offer 3 modes of metering:

Spot metering
In which only a small circle in the center of the image contributes to the final exposure. That circle is usually 1 to 5% of the total image size.

Center-weighted metering
Gives more influence to scene luminance values located in the center of the screen.

Multi-zone or matrix metering
A metering mode that differs for each manufacturer. The goal of this mode is to prioritize exposure for the most important parts of the scene. This is often achieved by splitting the image into a grid and by classifying each cell (using focus information, min/max luminance, etc.). Advanced implementations attempt to compare the scene to a known dataset to achieve proper exposure (backlit sunset, overcast snowy day, etc.).

Spot metering

The weight w of each luminance value to use when computing the scene luminance is given by equation 90.

$$w(x, y) = \begin{cases} 1 & |p_{x,y} - s_{x,y}| \le s_r \\ 0 & |p_{x,y} - s_{x,y}| > s_r \end{cases} \tag{90}$$

Where p is the position of the pixel, s the center of the spot and s_r the radius of the spot.
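
A sketch of equation 90 in GLSL (names are assumptions):

// Spot metering weight: only pixels within the spot radius contribute
float spotMeteringWeight(vec2 p, vec2 spotCenter, float spotRadius) {
    return length(p - spotCenter) <= spotRadius ? 1.0 : 0.0;
}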

Center-weighted metering

$$w(x, y) = smooth\left(|p_{x,y} - c| \times \frac{2}{width}\right) \tag{91}$$

Where c is the center of the image and smooth() is a smoothing function such as GLSL’s smoothstep().
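
One possible GLSL sketch of equation 91, assuming the smoothing function is chosen so that pixels near the center weigh more (hence the 1.0 - smoothstep form) and that width is the viewport width in pixels:

// Center-weighted metering: pixels near the image center c weigh more
float centerWeightedMeteringWeight(vec2 p, vec2 c, float width) {
    return 1.0 - smoothstep(0.0, 1.0, length(p - c) * 2.0 / width);
}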

Adaptation

To smooth the result of the metering, we can use equation 92, an exponential feedback loop as described by Pattanaik et al. in [Pattanaik00].

$$L_{avg} = L_{avg} + (L - L_{avg}) \times (1 - e^{-\Delta t \cdot \tau}) \tag{92}$$

Where Δt is the delta time from the previous frame and τ a constant that controls the adaptation rate.
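
A sketch of the adaptation loop, evaluated once per frame (names are assumptions):

// Exponential adaptation of the average luminance (equation 92)
float adaptLuminance(float adaptedLum, float currentLum, float deltaTime, float tau) {
    return adaptedLum + (currentLum - adaptedLum) * (1.0 - exp(-deltaTime * tau));
}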

Bloom

Because the EV scale is almost perceptually linear, the exposure value is also often used as a light unit. This means we could let artists specify the intensity of lights or emissive surfaces using exposure compensation as a unit. The intensity of emitted light would therefore be relative to the exposure settings. Using exposure compensation as a light unit should be avoided whenever possible but can be useful to force (or cancel) a bloom effect around emissive surfaces independently of the camera settings (for instance, a light saber in a game should always bloom).

Figure 63: Saturated photosites on a sensor create a blooming effect in the bright parts of the scene

With c the bloom color and EV100 the current exposure value, we can easily compute the luminance of the bloom as shown in equation 93.

$$EV_{bloom} = EV_{100} + EC$$
$$L_{bloom} = c \times 2^{EV_{bloom} - 3} \tag{93}$$

Equation 93 can be used in a fragment shader to implement emissive blooms, as shown in listing 37.

vec4 surfaceShading() {
    vec4 color = evaluateLights();
    // rgb = color, w = exposure compensation
    vec4 emissive = getEmissive();
    color.rgb += emissive.rgb * pow(2.0, ev100 + emissive.w - 3.0);
    color.rgb *= exposure;
    return color;
}
Listing 37: Implementation of emissive bloom in GLSL

Optics post-processing

Color fringing

[TODO]

Figure 64: Example of color fringing: look at the ear on the left or the chin at the bottom.

Lens flares

[TODO] Notes: there is a physically-based approach to generating lens flares, by tracing rays through the optical assembly of the lens, but we are going to use an image-based approach. This approach is cheaper and has a few welcome benefits such as free emitters occlusion and unlimited light sources support.

Filmic post-processing

[TODO] Perform post-processing on the scene referred data (linear space, before tone-mapping) as much as possible

It is important to provide color correction tools to give artists greater artistic control over the final image. These tools are found in every photo or video processing application, such as Adobe Photoshop or Adobe After Effects.

Contrast

Curves

Levels

Color grading

Light path

The light path, or rendering method, used by the engine can have serious performance implications and may impose strong limitations on how many lights can be used in a scene. There are traditionally two different rendering methods used by 3D engines: forward and deferred rendering.

Our goal is to use a rendering method that obeys the following constraints:

  • Low bandwidth requirements
  • Multiple dynamic lights per pixel

Additionally, we would like to easily support:

  • MSAA
  • Transparency
  • Multiple material models

Deferred rendering is used by many modern 3D rendering engines to easily support dozens, hundreds or even thousands of light sources (amongst other benefits). This method is unfortunately very expensive in terms of bandwidth. With our default PBR material model, our G-buffer would use between 160 and 192 bits per pixel, which would translate directly to rather high bandwidth requirements.

Forward rendering methods on the other hand have historically been bad at handling multiple lights. A common implementation is to render the scene multiple times, once per visible light, and to blend (add) the results. Another technique consists in assigning a fixed maximum of lights to each object in the scene. This is however impractical when objects occupy a vast amount of space in the world (building, road, etc.).

Tiled shading can be applied to both forward and deferred rendering methods. The idea is to split the screen in a grid of tiles and for each tile, find the list of lights that affect the pixels within that tile. This has the advantage of reducing overdraw (in deferred rendering) and shading computations of large objects (in forward rendering). This technique suffers however from depth discontinuities issues that can lead to large amounts of extraneous work.

The scene displayed in figure 65 was rendered using clustered forward rendering.

Figure 65: Clustered forward rendering with dozens of dynamic lights and MSAA

Figure 66 shows the same scene split in tiles (in this case, a 1280×720 render target with 80×80px tiles).

Figure 66: Tiled shading (16×9 tiles)

Clustered Forward Rendering

We decided to explore another method called Clustered Shading, in its forward variant. Clustered shading expands on the idea of tiled rendering but adds a segmentation on the 3rd axis. The “clustering” is done in view space, by splitting the frustum into a 3D grid.

The frustum is first sliced on the depth axis as shown in figure 67.

Figure 67: Depth slicing (16 slices)

The depth slices are then combined with the screen tiles to “voxelize” the frustum. We call each cluster a froxel as it makes it clear what it represents (a voxel in frustum space). The result of the “froxelization” pass is shown in figure 68 and figure 69.

Figure 68: Frustum voxelization (5×3 tiles, 8 depth slices)

Figure 69: Frustum voxelization (5×3 tiles, 8 depth slices)

Before rendering a frame, each light in the scene is assigned to any froxel it intersects with. The result of the lights assignment pass is a list of lights for each froxel. During the rendering pass, we can compute the ID of the froxel a fragment belongs to and therefore the list of lights that can affect that fragment.

The depth slicing is not linear, but exponential. In a typical scene, there will be more pixels close to the near plane than to the far plane. An exponential grid of froxels will therefore improve the assignment of lights where it matters the most.

Figure 70 shows how many world space units each depth slice covers with exponential slicing.

Figure 70: Near: 0.1m, Far: 100m, 16 slices

A simple exponential voxelization is unfortunately not enough. The graphic above clearly illustrates how world space is distributed across slices but it fails to show what happens close to the near plane. If we examine the same distribution in a smaller range (0.1m to 7m) we can see an interesting problem appear as shown in figure 71.

Figure 71: Depth distribution in the 0.1-7m range

This graphic shows that a simple exponential distribution uses up half of the slices very close to the camera. In this particular case, we use 8 slices out of 16 in the first 5 meters. Since dynamic world lights are either point lights (spheres) or spot lights (cones), such a fine resolution is completely unnecessary so close to the near plane.

Our solution is to manually tweak the size of the first froxel depending on the scene and the near and far planes. By doing so, we can better distribute the remaining froxels across the frustum. Figure 72 shows for instance what happens when we use a special froxel between 0.1m and 5m.

Figure 72: Near: 0.1, Far: 100m, 16 slices, Special froxel: 0.1-5m

This new distribution is much more efficient and allows a better assignment of the lights throughout the entire frustum.

Implementation notes

Lights assignment can be done in two different ways, on the GPU or on the CPU.

GPU lights assignment

This implementation requires OpenGL ES 3.1 and support for compute shaders. The lights are stored in Shader Storage Buffer Objects (SSBO) and passed to a compute shader that assigns each light to the corresponding froxels.

The frustum voxelization can be executed only once by a first compute shader (as long as the projection matrix does not change), and the lights assignment can be performed each frame by another compute shader.

The threading model of compute shaders is particularly well suited for this task. We simply invoke as many workgroups as we have froxels (we can directly map the X, Y and Z workgroup counts to our froxel grid resolution). Each workgroup will in turn be threaded and traverse all the lights to assign.

Intersection tests amount to simple sphere/frustum or cone/frustum tests.

See the annex for the source code of a GPU implementation (point lights only).

CPU lights assignment

On non-OpenGL ES 3.1 devices, lights assignment can be performed efficiently on the CPU. The algorithm is different from the GPU implementation. Instead of iterating over every light for each froxel, the engine will “rasterize” each light as froxels. For instance, given a point light’s center and radius, it is trivial to compute the list of froxels it intersects with.

This technique has the added benefit of providing tighter culling than in the GPU variant. The CPU implementation can also more easily generate a packed list of lights.

Shading

The list of lights per froxel can be passed to the fragment shader either as an SSBO (OpenGL ES 3.1) or a texture.

From depth to froxel

Given a near plane n, a far plane f, a maximum number of depth slices m and a linear depth value z in the range [0..1], equation 94 can be used to compute the index of the cluster for a given position.

$$zToCluster(z, n, f, m) = floor\left(\max\left(\log_2(z)\frac{m}{-\log_2\left(\frac{n}{f}\right)} + m, 0\right)\right) \tag{94}$$

This formula suffers however from the resolution issue mentioned previously. We can fix it by introducing sn, a special near value that defines the extent of the first froxel (the first froxel occupies the range [n..sn], the remaining froxels [sn..f]).

$$zToCluster(z, n, sn, f, m) = floor\left(\max\left(\log_2(z)\frac{m - 1}{-\log_2\left(\frac{sn}{f}\right)} + m, 0\right)\right) \tag{95}$$

Equation 96 can be used to compute a linear depth value from gl_FragCoord.z (assuming a standard OpenGL projection matrix).

$$linearZ(z) = \frac{n}{f + z(n - f)} \tag{96}$$

This equation can be simplified by pre-computing two terms c0 and c1, as shown in equation 97.

$$c_1 = \frac{f}{n} \qquad c_0 = 1 - c_1 \qquad linearZ(z) = \frac{1}{z \cdot c_0 + c_1} \tag{97}$$

This simplification is important because we pass the linear z value to a log2 in equation 95. Since the division becomes a negation under the logarithm, we can avoid a division by using −log2(z·c0 + c1) instead.

All put together, computing the froxel index of a given fragment can be implemented fairly easily as shown in listing 38.

#define MAX_FROXEL_LIGHT_COUNT 16 // max number of lights per froxel

uniform uvec4 froxels; // res x, res y, count x, count y
uniform vec4 zParams;  // c0, c1, index scale, index bias

uint getDepthSlice() {
    return uint(max(0.0, log2(zParams.x * gl_FragCoord.z + zParams.y) *
            zParams.z + zParams.w));
}

uint getFroxelOffset(uint depthSlice) {
    uvec2 froxelCoord = uvec2(gl_FragCoord.xy) / froxels.xy;
    froxelCoord.y = (froxels.w - 1u) - froxelCoord.y;

    uint index = froxelCoord.x + froxelCoord.y * froxels.z +
            depthSlice * froxels.z * froxels.w;
    return index * MAX_FROXEL_LIGHT_COUNT;
}

uint slice = getDepthSlice();
uint offset = getFroxelOffset(slice);

// Compute lighting...
Listing 38: GLSL implementation to compute a froxel index from a fragment’s screen coordinates

Several uniforms must be pre-computed to perform the index evaluation efficiently. The code used to pre-compute these uniforms can be found in the listing below.

froxels[0] = TILE_RESOLUTION_IN_PX;
froxels[1] = TILE_RESOLUTION_IN_PX;
froxels[2] = numberOfTilesInX;
froxels[3] = numberOfTilesInY;

zParams[0] = 1.0f - Z_FAR / Z_NEAR;
zParams[1] = Z_FAR / Z_NEAR;
zParams[2] = (MAX_DEPTH_SLICES - 1) / log2(Z_SPECIAL_NEAR / Z_FAR);
zParams[3] = MAX_DEPTH_SLICES;
Listing ?

From froxel to depth

Given a froxel index i, a special near plane sn, a far plane f and a maximum number of depth slices m, equation 98 computes the minimum depth of a given froxel.

$$clusterToZ(i \ge 1, sn, f, m) = 2^{(i - m)\frac{-\log_2\left(\frac{sn}{f}\right)}{m - 1}} \tag{98}$$

For i = 0, the z value is 0. The result of this equation is in the [0..1] range and should be multiplied by f to get a distance in world units.

The compute shader implementation should use exp2 instead of a pow. The division can be precomputed and passed as a uniform.
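
A GLSL sketch of equation 98, assuming the scale factor −log2(sn/f) / (m − 1) is precomputed on the CPU and passed as a uniform:

// Minimum normalized depth of froxel i (equation 98); multiply by f for world units
// scale = -log2(sn / f) / (m - 1), precomputed
float clusterToZ(uint i, float m, float scale) {
    return i == 0u ? 0.0 : exp2((float(i) - m) * scale);
}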

Validation

Given the complexity of our lighting system, it is important to validate our implementation. We will do so in several ways: using reference renderings, light measurements and data visualization.

[TODO] Explain light measurement validation (reading EV from the render target and comparing against values measured with light meters/cameras, etc.)

Scene referred visualization

A quick and easy way to validate a scene’s lighting is to modify the shader to output colors that provide an intuitive mapping to relevant data. This can easily be done by using a custom debug tone-mapping operator that outputs fake colors.

Luminance stops

With emissive materials and IBLs, it is fairly easy to obtain a scene in which specular highlights are brighter than their apparent caster. This type of issue can be difficult to observe after tone-mapping and quantization but is fairly obvious in the scene-referred space. Figure 73 shows how the custom operator described in listing 39 is used to show the exposed luminance of a scene.

Figure 73: Visualizing luminance by color coding the stops: cyan is middle gray, blue is 1 stop darker, green 1 stop brighter, etc.

const vec3 debugColors[16] = vec3[](
     vec3(0.0, 0.0, 0.0),         // black
     vec3(0.0, 0.0, 0.1647),      // darkest blue
     vec3(0.0, 0.0, 0.3647),      // darker blue
     vec3(0.0, 0.0, 0.6647),      // dark blue
     vec3(0.0, 0.0, 0.9647),      // blue
     vec3(0.0, 0.9255, 0.9255),   // cyan
     vec3(0.0, 0.5647, 0.0),      // dark green
     vec3(0.0, 0.7843, 0.0),      // green
     vec3(1.0, 1.0, 0.0),         // yellow
     vec3(0.90588, 0.75294, 0.0), // yellow-orange
     vec3(1.0, 0.5647, 0.0),      // orange
     vec3(1.0, 0.0, 0.0),         // bright red
     vec3(0.8392, 0.0, 0.0),      // red
     vec3(1.0, 0.0, 1.0),         // magenta
     vec3(0.6, 0.3333, 0.7882),   // purple
     vec3(1.0, 1.0, 1.0)          // white
);

vec3 Tonemap_DisplayRange(const vec3 x) {
    // The 5th color in the array (cyan) represents middle gray (18%)
    // Every stop above or below middle gray causes a color shift
    float v = log2(luminance(x) / 0.18);
    v = clamp(v + 5.0, 0.0, 15.0);
    int index = int(floor(v));
    return mix(debugColors[index], debugColors[min(15, index + 1)], fract(v));
}
Listing 39: GLSL implementation of a custom debug tone-mapping operator for luminance visualization

Reference renderings

To validate our implementation against reference renderings, we will use a commercial-grade Open Source physically-based offline path tracer called Mitsuba. Mitsuba offers many different integrators, samplers and material models, which should allow us to provide fair comparisons with our real-time renderer. This path tracer also relies on a simple XML scene description format that should be easy to automatically generate from our own scene descriptions.

Figure 74 and figure 75 show a simple scene, a perfectly smooth dielectric sphere, rendered respectively with Mitsuba and Filament.

Figure 74: Rendered in 2048×1440 in 1 minute and 42 seconds on a 12 core 2013 MacPro

Figure 75: Rendered in 2048×1440 with MSAA 4x at 60 fps on a Nexus 9 device (Tegra K1 GPU)

The parameters used to render both scenes are the following:

Filament

  • Material
    • Base color: sRGB 0.81, 0, 0
    • Metallic: 0
    • Roughness: 0
    • Reflectance: 0.5
  • Indirect light: IBL
    • 256×256 cubemap generated by cmgen from office.exr
    • Multiplier: 35,000
  • Direct light: directional light
    • Linear color: 1.0, 0.96, 0.95
    • Intensity: 120,000 lux
  • Exposure
    • Aperture: f/16
    • Shutter speed: 1/125s
    • ISO: 100

Mitsuba

  • BSDF: roughplastic
    • Distribution: GGX
    • Alpha: 0
    • Diffuse reflectance: sRGB 0.81, 0, 0
  • Emitter: environment map
    • Source: office.exr
    • Scale: 35,000
  • Emitter: directional
    • Irradiance: linear RGB 120,000 115,200 114,000
  • Film: LDR
    • Exposure: −15.23, computed from log2(filamentExposure)
  • Integrator: path
  • Sampler: ldsampler
    • Sample count: 256

The full Mitsuba scene can be found as an annex. Both scenes were rendered at the same resolution (2048×1440).

Comparison

The slight differences between the two renderings come from the various approximations used by Filament: RGBM 256×256 reflection probe, RGBM 1024×1024 background map, Lambert diffuse, split-sum approximation, analytical approximation of the DFG term, etc.

Figure 76 shows the luminance gradient of the images produced by both engines. The comparison was performed on LDR images.

Figure 76: Luminance gradients from Mitsuba (left) and Filament (right)

The biggest difference is visible at grazing angles, which is most likely explained by Filament’s use of a Lambertian diffuse term. The Disney diffuse term and its grazing retro-reflections would move Filament closer to Mitsuba.

Coordinates systems

Main coordinates system

Filament uses a Y-up, right-handed coordinate system.

Figure 77: Red +X, green +Y, blue +Z (rendered in Marmoset Toolbag).

Cubemaps coordinates system

All the cubemaps used in Filament (environment background, reflection probes, etc.) will follow the OpenGL convention for faces alignment shown in figure 78.

Figure 78: Horizontal cross representation of a cubemap following the OpenGL faces alignment convention.

Equirectangular environment maps

To convert equirectangular environment maps to horizontal/vertical cross cubemaps we position the +Z face in the center of the source equirectangular environment map.

Mirroring

To simplify the rendering of reflections, cubemaps will be stored mirrored on the X axis. This means that cubemaps used as environment backgrounds need to be mirrored again at runtime. An easy way to achieve this for skyboxes is to use textured back faces.

Annex

Importance sampling for the IBL

In the discrete domain, the integral can be approximated with sampling as defined in equation 99.

$$L_{out}(n, v, \Theta) \equiv \frac{1}{N} \sum_i^N f(l_i^{uniform}, v, \Theta) L_\perp(l_i) \langle n \cdot l_i^{uniform} \rangle \tag{99}$$

Unfortunately, we would need too many samples to evaluate this integral. A technique commonly used is to choose samples that are more “important” more often; this is called importance sampling. In our case we’ll use the probability density function (PDF) of the BRDF as the distribution of samples.

The evaluation of L_out(n, v, Θ) with importance sampling is presented in equation 100.

$$L_{out}(n, v, \Theta) \equiv \frac{1}{N} \sum_i^N \frac{f(l_i, v, \Theta)}{p(l_i, v, \Theta)} L_\perp(l_i) \langle n \cdot l_i \rangle \tag{100}$$

In equation 100, p is the probability density function (PDF) of the BRDF f, and l_i represents the important direction samples for that BRDF. These samples depend on v and α. The definition of the PDF and its Jacobian (the transform from h to l) is shown in equation 101.

$$p(l, v, \Theta) = D(h, \alpha) \langle n \cdot h \rangle J(h) \qquad J(h) = \frac{1}{4 \langle v \cdot h \rangle} \tag{101}$$

Choosing important directions

Refer to section 8.2 for more details. Given a uniform distribution (ζ_φ, ζ_θ), the important direction l is defined by equation 102.

$$\phi = 2\pi\zeta_\phi$$
$$\theta = \cos^{-1}\sqrt{\frac{1 - \zeta_\theta}{(\alpha^2 - 1)\zeta_\theta + 1}}$$
$$l = \{\cos\phi\sin\theta, \sin\phi\sin\theta, \cos\theta\} \tag{102}$$

Typically, (ζ_φ, ζ_θ) are chosen using the Hammersley uniform distribution algorithm described in section 8.3.
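
A GLSL sketch of equation 102, generating a sampling direction in tangent space from a low-discrepancy pair (function and parameter names are assumptions):

// Importance sample a direction for the GGX distribution (equation 102)
vec3 importanceSampleGGX(vec2 u, float alpha) {
    float phi = 2.0 * 3.14159265359 * u.x;
    float cosTheta = sqrt((1.0 - u.y) / ((alpha * alpha - 1.0) * u.y + 1.0));
    float sinTheta = sqrt(1.0 - cosTheta * cosTheta);
    return vec3(cos(phi) * sinTheta, sin(phi) * sinTheta, cosTheta);
}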

Pre-filtered importance sampling

Importance sampling considers only the PDF to generate important directions; in particular, it is oblivious to the actual content of the IBL. If the latter contains high frequencies in areas without many samples, the integration won’t be accurate. This can be somewhat mitigated by using a technique called pre-filtered importance sampling; in addition, it allows the integral to converge with far fewer samples.

Pre-filtered importance sampling uses several increasingly low-pass filtered images of the environment. This is typically implemented very efficiently with mipmaps and a box filter. The LOD is selected based on the sample importance, that is, low probability samples use a higher LOD index (more filtered).

This technique is described in detail in [Krivanek08].

The cubemap LOD is determined in the following way:

$$lod = \log_4\left(K\frac{\Omega_s}{\Omega_p}\right)$$
$$K = 4.0$$
$$\Omega_s = \frac{1}{N \cdot p(l_i)}$$
$$\Omega_p \approx \frac{4\pi}{6 \cdot width \cdot height}$$

Where K is a constant determined empirically, p the PDF of the BRDF, Ω_s the solid angle associated with the sample and Ω_p the solid angle associated with a texel of the cubemap.
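
A GLSL sketch of the LOD selection above (parameter names are assumptions; numSamples is N and pdf is p(l_i)):

// Select the cubemap LOD for pre-filtered importance sampling
float prefilteredImportanceSamplingLod(float pdf, float numSamples,
        float width, float height) {
    const float K = 4.0;
    float omegaS = 1.0 / (numSamples * pdf);                      // sample solid angle
    float omegaP = 4.0 * 3.14159265359 / (6.0 * width * height);  // texel solid angle
    return 0.5 * log2(K * omegaS / omegaP);                       // log4(x) = 0.5 * log2(x)
}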

Cubemap sampling is done using seamless trilinear filtering. It is extremely important to sample the cubemap correctly across faces using OpenGL’s seamless sampling feature or any other technique that avoids/reduces seams.

Table 17 shows a comparison between importance sampling and pre-filtered importance sampling when applied to figure 79.

Figure 79: Importance sampling image reference
Table 17: Importance sampling vs pre-filtered importance sampling with α = 0.4, comparing renderings obtained with 4096, 1024 and 32 samples.

The reference renderer used in the comparison below performs no approximation. In particular, it does not assume v = n and does not perform the split sum approximation. The pre-filtered renderer uses all the techniques discussed in this section: pre-filtered cubemaps, the analytic formulation of the DFG term, and of course the split sum approximation.

Left: reference renderer, right: pre-filtered importance sampling.