Raytracing in Unity's built-in render pipeline


Raytraced Reflections and Shadows

Unity 2019 added an API for DX12's RTX stuff. Nice, finally!

Oh wait.. guess I'll come back to this later.

It's later! Unity 2020.1 now has skinned mesh renderer support! Time to play around a bit.


What's New?

There are a few juicy new additions to Unity's scripting API.

Getting Started

If you are starting from scratch, the very first thing you need to do is to generate a RayTracingAccelerationStructure. Here's a simple example to get started.

var settings = new RayTracingAccelerationStructure.RASSettings();
settings.layerMask = UpdateLayers;
settings.managementMode = RayTracingAccelerationStructure.ManagementMode.Automatic;
settings.rayTracingModeMask = RayTracingAccelerationStructure.RayTracingModeMask.Everything;

// maybe somewhere in a manager's OnEnable(), for example
_AccelerationStructure = new RayTracingAccelerationStructure(settings);

// in some manager's Update(), for example
_AccelerationStructure.Update();
_AccelerationStructure.Build();
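Put together, a minimal manager component might look something like this sketch (RaytracingManager is my own placeholder name, and note the namespace has moved between Unity versions; in 2020.1 it lives in UnityEngine.Experimental.Rendering):

using UnityEngine;
using UnityEngine.Experimental.Rendering;

public class RaytracingManager : MonoBehaviour
{
    // which layers get pulled into the acceleration structure
    public LayerMask UpdateLayers = ~0;

    private RayTracingAccelerationStructure _AccelerationStructure;

    void OnEnable()
    {
        var settings = new RayTracingAccelerationStructure.RASSettings();
        settings.layerMask = UpdateLayers;
        settings.managementMode = RayTracingAccelerationStructure.ManagementMode.Automatic;
        settings.rayTracingModeMask = RayTracingAccelerationStructure.RayTracingModeMask.Everything;

        _AccelerationStructure = new RayTracingAccelerationStructure(settings);
    }

    void Update()
    {
        // keep the structure in sync with moving/skinned geometry
        _AccelerationStructure.Update();
        _AccelerationStructure.Build();
    }

    void OnDisable()
    {
        _AccelerationStructure?.Release();
    }
}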

Once this has been set up, you're ready to generate rays!

Generating Rays

RTX rays are generated from a ray generation shader. If you've used Compute Shaders in Unity, things will look pretty familiar to you.

#include "HLSLSupport.cginc"
#include "UnityRaytracingMeshUtils.cginc"
#include "UnityShaderVariables.cginc"

RaytracingAccelerationStructure _RaytracingAccelerationStructure : register(t0);

float4x4 _CameraToWorld;
float4x4 _InverseProjection;

[shader("raygeneration")]
void MyRayGenerationShader()
{
    uint3 dispatchId = DispatchRaysIndex();
    uint3 dispatchDim = DispatchRaysDimensions();

    // somehow get the world space position of the pixel we care about
    // somehow generate a ray from that pixel in a meaningful Direction
    // color the pixel we care about, with the information that ray gathered
}

For the sake of example, let's build something meaningful: we'll dispatch one ray for every pixel on the screen, and eventually have those rays intersect the geometry in the acceleration structure we defined earlier.

Also, note that the RaytracingAccelerationStructure needs to be defined somewhere, too. t0 is the register that Unity uses for this structure.

// assuming we dispatch our generation shader so that this kernel runs
// once per screen pixel, convert the dispatch id to a 0 to 1 range
float2 texcoord = (dispatchId.xy + float2(0.5, 0.5)) / float2(dispatchDim.x, dispatchDim.y);

// after that, we can convert it to clip space
float3 viewPosition = float3(texcoord * 2.0 - float2(1.0, 1.0), 0.0);
float4 clip = float4(viewPosition.xyz, 1.0);

// next, convert from clip space to camera space
float4 viewPos = mul(_InverseProjection, clip);

// don't forget that perspective divide 🔪
viewPos.xyz /= viewPos.w;

// and finally, we can get our world position (and world direction)!
// note the w component is reset to 1 here, since we already divided by it
float3 worldPos = mul(_CameraToWorld, float4(viewPos.xyz, 1.0)).xyz;
float3 worldDirection = worldPos - _WorldSpaceCameraPos;

Now that we know from where and to where we want to cast rays, let's start building a ray and its payload. In the raytracing API provided to us, we need to build both a RayDesc and a custom payload struct, which are passed into a TraceRay() call.

// RayDesc is an intrinsic structure of the DXR API
RayDesc ray;
ray.Origin = _WorldSpaceCameraPos;
ray.Direction = worldDirection;
ray.TMin = 0;
ray.TMax = 10000;

You can use any payload that you wish. Keep in mind that you may run into issues with high memory footprint payloads, so try to keep them as small as possible. In this example, we'll just use color.

// define this somewhere
struct MyPayload
{
	float4 color;
};

MyPayload payload;
payload.color = float4(0, 0, 0, 0);

TraceRay(_RaytracingAccelerationStructure, 0, 0xFF, 0, 1, 0, ray, payload);

TraceRay() is another magic call we can do with the raytracing API. I highly recommend reading Microsoft's DXR documentation on it, for some of the finer details. At the least, you'll need an acceleration structure to pass in, some flags for intersection rules, an instance inclusion mask (only the low 8 bits are used, so 0xFF includes everything), the RayDesc, and the custom payload.
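For reference, here's the same call with each argument annotated, following the DXR TraceRay signature:

TraceRay(
    _RaytracingAccelerationStructure, // acceleration structure to trace against
    0,                                // RayFlags, e.g. RAY_FLAG_NONE
    0xFF,                             // InstanceInclusionMask, only the low 8 bits are used
    0,                                // RayContributionToHitGroupIndex
    1,                                // MultiplierForGeometryContributionToHitGroupIndex
    0,                                // MissShaderIndex, which miss shader runs on a miss
    ray,                              // the RayDesc we built above
    payload);                         // our custom payload struct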

After you've traced a ray, you need to do something with the payload result.

// define a RenderTexture to write to, outside of this kernel
RWTexture2D<float4> RenderTarget;

// write the color into the render texture. assuming you have dispatched
// an array of rays equal to the width and height of your render texture,
// you can just use the dispatchId as the position to write into.
RenderTarget[dispatchId.xy] = payload.color;

This traced ray may or may not intersect with geometry. If it does not, you'll probably want to provide a "miss" kernel.

[shader("miss")]
void MyMissShader(inout MyPayload payload : SV_RayPayload)
{
    payload.color = 0;
}
a miss shader could look something like this

This miss kernel would be called if the ray does not hit anything, as the name implies.

Keep note of that inout keyword. For our purposes, you can assume it works similarly to the ref keyword in C# (although it's a bit different: inout actually does a copy in and a copy out, it's not a reference being passed around).
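To illustrate those copy-in/copy-out semantics (with a made-up helper function, just for demonstration):

void Darken(inout MyPayload p)
{
    // on entry, p is a copy of the caller's payload (copy in)
    p.color *= 0.5;
    // on return, p is copied back to the caller's payload (copy out)
}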

On the c# side, you'll need some component which can reference the ray generation shader, and then execute it.

// define a RenderTexture to write into, to see what's going on.
// note: if it's public, you can double click it in the inspector
// to preview it while the game view is focused
public RenderTexture _RenderTexture;

// reference the raytracing shader somehow
public RayTracingShader MyRayGenerationShader;

I recommend using a CommandBuffer to dispatch the ray generation shader, but you can also dispatch directly from the RayTracingShader itself.

// don't forget to assign the render texture you want to write to
command.SetRayTracingTextureParam(MyRayGenerationShader, "RenderTarget", _RenderTexture);

// take note of "MyRaytracingPass", we'll talk about it soon
command.SetRayTracingShaderPass(MyRayGenerationShader, "MyRaytracingPass");
command.SetRayTracingAccelerationStructure(MyRayGenerationShader, "_RaytracingAccelerationStructure", _AccelerationStructure);

// you'll need to set up these variables for the ray generation shader
// to actually calculate the stuff it needs
var projection = GL.GetGPUProjectionMatrix(camera.projectionMatrix, false);
var inverseProjection = projection.inverse;

command.SetRayTracingMatrixParam(MyRayGenerationShader, "_InverseProjection", inverseProjection);
command.SetRayTracingMatrixParam(MyRayGenerationShader, "_CameraToWorld", camera.cameraToWorldMatrix);
command.SetRayTracingVectorParam(MyRayGenerationShader, "_WorldSpaceCameraPos", camera.transform.position);

// the string passed in matches the raygeneration kernel name in the RayTracingShader
command.DispatchRays(MyRayGenerationShader, "MyRayGenerationShader", (uint)_RenderTexture.width, (uint)_RenderTexture.height, 1u, camera);
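If you haven't used CommandBuffers before, the surrounding setup might look something like this sketch (executing immediately; you could also attach the buffer to a camera event):

var command = new CommandBuffer() { name = "Raytracing" };

// ... the SetRayTracing* and DispatchRays calls from above go here ...

Graphics.ExecuteCommandBuffer(command);
command.Release();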

Once this dispatch is called, you'll be generating rays! However, they won't do anything when they hit geometry, yet.

Hit!

To do anything interesting, you'll need to provide a hit kernel. In Unity, you can do this in any shader's .shader file by adding a Pass{} block. When you dispatch rays, you use SetRayTracingShaderPass to specify a name, and when a ray intersects an object, Unity will check that object's shader for any Passes matching that name.

As an example, let's create a new shader.

Open it up, and add in something like this inside the SubShader block.

SubShader
{
    Pass
    {
        Name "MyRaytraceShaderPass"

        HLSLPROGRAM

        #pragma raytracing MyHitShader

        struct MyPayload
        {
            float4 color; 
        };
        
        struct AttributeData
        {
            float2 barycentrics; 
        };

        [shader("closesthit")]
        void MyHitShader(inout MyPayload payload : SV_RayPayload,
          AttributeData attributes : SV_IntersectionAttributes)
        {
            payload.color = 1;
        }

        ENDHLSL
    }
}
a very simple hit shader

HLSL has a new attribute: [shader("")]. It lets you define what type of raytracing shader a function is meant to be. The most useful for us is "closesthit", which is run when the ray intersection nearest to the ray's origin is found.

Here is a list of available DXR shader types, with their appropriate method signatures.

[shader("raygeneration)]
void RayGeneration() {} 

[shader("intersection")]
void Intersection() {}

[shader("miss")]
void Miss(inout MyPayload payload : SV_RayPayload) {}

[shader("closesthit")]
void ClosestHit(inout MyPayload payload : SV_RayPayload, MyAttributes attributes : SV_IntersectionAttributes) {}

[shader("anyhit")]
void AnyHit(inout MyPayload payload : SV_RayPayload, MyAttributes attributes : SV_IntersectionAttributes) {}
SV_RayPayload and SV_IntersectionAttributes mark parameters whose types (here MyPayload and MyAttributes) are user defined structs.

What will happen now is that our ray generation shader will write black for any pixels that are not overlapping geometry, and white for any pixels that are. We're raytracing!

VERY advanced raytracing going on here
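To tie the HLSL side together, the complete ray generation shader (a .raytrace asset in Unity) might look roughly like this sketch, assembled from the snippets above:

#include "HLSLSupport.cginc"
#include "UnityRaytracingMeshUtils.cginc"
#include "UnityShaderVariables.cginc"

#pragma max_recursion_depth 1

RaytracingAccelerationStructure _RaytracingAccelerationStructure : register(t0);
RWTexture2D<float4> RenderTarget;

float4x4 _CameraToWorld;
float4x4 _InverseProjection;

struct MyPayload
{
    float4 color;
};

[shader("raygeneration")]
void MyRayGenerationShader()
{
    uint3 dispatchId = DispatchRaysIndex();
    uint3 dispatchDim = DispatchRaysDimensions();

    // reconstruct this pixel's world position and direction
    float2 texcoord = (dispatchId.xy + float2(0.5, 0.5)) / float2(dispatchDim.x, dispatchDim.y);
    float3 viewPosition = float3(texcoord * 2.0 - float2(1.0, 1.0), 0.0);
    float4 viewPos = mul(_InverseProjection, float4(viewPosition.xyz, 1.0));
    viewPos.xyz /= viewPos.w;
    float3 worldPos = mul(_CameraToWorld, float4(viewPos.xyz, 1.0)).xyz;
    float3 worldDirection = worldPos - _WorldSpaceCameraPos;

    // build and trace a ray for this pixel
    RayDesc ray;
    ray.Origin = _WorldSpaceCameraPos;
    ray.Direction = worldDirection;
    ray.TMin = 0;
    ray.TMax = 10000;

    MyPayload payload;
    payload.color = float4(0, 0, 0, 0);

    TraceRay(_RaytracingAccelerationStructure, 0, 0xFF, 0, 1, 0, ray, payload);

    // write the result out
    RenderTarget[dispatchId.xy] = payload.color;
}

[shader("miss")]
void MyMissShader(inout MyPayload payload : SV_RayPayload)
{
    payload.color = 0;
}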

What can we do with this?

The raytracing API that DX12 provides unlocks some great stuff.

Here's a screenshot of some raytraced reflections I've thrown together.

the "closesthit" kernel is using TraceRay() to keep searching for more intersections, to keep adding on layers of reflections. rays are jiggled around to simulate roughness of materials. 

This can be taken a step further, too. Why not have shadows in our reflections?

I've added an additional ray which sets a flag in the payload to identify it as a shadow ray check, darkening areas where the rays do not run the miss kernel

And if our reflections can have raytraced shadows.. why not replace unity's shadows completely with raytraced shadows?

rays are cast from the world position of the object the pixel is drawing, towards the light. if the ray misses, it's white; if it hits, it's black. casting multiple rays in semi-random directions can be used to increase the quality of this effect. this generates a shadow map, which I'm using to replace unity's shadowmap
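The core of that shadow test might look something like this sketch inside a hit shader (surfaceNormal and lightDirection are placeholder names for data you'd compute or pass in yourself):

// cast a ray from the surface toward the light;
// if nothing is hit, the miss shader marks the point as lit
RayDesc shadowRay;
shadowRay.Origin = worldPos + surfaceNormal * 0.001; // offset to avoid self-intersection
shadowRay.Direction = -lightDirection;
shadowRay.TMin = 0;
shadowRay.TMax = 10000;

MyPayload shadowPayload;
shadowPayload.color = float4(0, 0, 0, 0); // assume shadowed by default

TraceRay(_RaytracingAccelerationStructure, 0, 0xFF, 0, 1, 0, shadowRay, shadowPayload);
// a dedicated miss shader sets the payload to white (lit) when nothing blocks the light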

Here's one more example; it's a bit more real-world. I've been working on implementing raytraced effects into my side project.

Getting Useful Data From Intersections

struct AttributeData
{
    float2 barycentrics; 
};

Using a user defined struct, you can request some data from intersections. It's possible to use the provided barycentric coordinates to interpolate useful data from the triangle's vertices. For example, you could use this to get the albedo of the triangle with proper uv mapping, or you could even shade the triangle like normal.

#include "UnityRaytracingMeshUtils"
Unity has recently added some helper functions to their built-in shaders, which makes this pretty trivial to do. Be sure to include it!
struct Vertex
{
    float2 texcoord;
};
Let's make a new struct, for getting some vertex data.
uint primitiveIndex = PrimitiveIndex();
this is a new intrinsic function for hlsl, to fetch a ray's intersected primitive data
uint3 triangleIndicies = UnityRayTracingFetchTriangleIndices(primitiveIndex);
UnityRayTracingFetchTriangleIndices is a new unity built-in function, borrowed from that include earlier
Vertex v0, v1, v2;
to get interpolated data, we're going to need the three vertices that make up the triangle our ray intersected
v0.texcoord = UnityRayTracingFetchVertexAttribute2(triangleIndicies.x, kVertexAttributeTexCoord0);

v1.texcoord = UnityRayTracingFetchVertexAttribute2(triangleIndicies.y, kVertexAttributeTexCoord0);

v2.texcoord = UnityRayTracingFetchVertexAttribute2(triangleIndicies.z, kVertexAttributeTexCoord0);
UnityRayTracingFetchVertexAttribute[0..4] are helper functions from unity, which we can use to get vertex attribute data given the data from earlier
float3 barycentrics = float3(1.0 - attributeData.barycentrics.x - attributeData.barycentrics.y, attributeData.barycentrics.x, attributeData.barycentrics.y);
its time to use those barycentric coordinates!
Vertex vInterpolated;

vInterpolated.texcoord = v0.texcoord * barycentrics.x + v1.texcoord * barycentrics.y + v2.texcoord * barycentrics.z;

From here, you can sample from textures in the normal way.

Texture2D<float4> _MainTex;
SamplerState sampler_MainTex;

[shader("closesthit")]
void MyHitShader(inout MyPayload payload : SV_RayPayload, AttributeData attributes : SV_IntersectionAttributes)
{
    // get texcoord somehow
    // then use below 
    
    payload.color = _MainTex.SampleLevel(sampler_MainTex, texcoord, 0);
}
using the texture coordinates
rays returning albedos

Some useful data! You can expand this even further by calling TraceRay() inside of closesthit shader passes. Keep in mind that you'll need to raise #pragma max_recursion_depth above 1 (it lives in the ray generation shader file) when using TraceRay() inside a hit function, and you'll also need to manage the recursion manually. Keep this in mind, or you may crash!
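A common pattern is to carry the current bounce depth in the payload and bail out before exceeding the declared limit. A sketch (the depth field and the placeholder normal are my own additions, not from Unity's API):

// in the .raytrace file:
// #pragma max_recursion_depth 4

struct MyPayload
{
    float4 color;
    uint depth; // how many bounces deep this ray is
};

[shader("closesthit")]
void MyHitShader(inout MyPayload payload : SV_RayPayload,
  AttributeData attributes : SV_IntersectionAttributes)
{
    if (payload.depth >= 3)
    {
        // stop recursing, stay safely within max_recursion_depth
        payload.color = 0;
        return;
    }

    // build a secondary (e.g. reflection) ray from the hit point
    RayDesc bounce;
    bounce.Origin = WorldRayOrigin() + WorldRayDirection() * RayTCurrent();
    bounce.Direction = reflect(WorldRayDirection(), float3(0, 1, 0)); // placeholder normal
    bounce.TMin = 0.001;
    bounce.TMax = 10000;

    MyPayload bouncePayload;
    bouncePayload.color = float4(0, 0, 0, 0);
    bouncePayload.depth = payload.depth + 1;

    TraceRay(_RaytracingAccelerationStructure, 0, 0xFF, 0, 1, 0, bounce, bouncePayload);
    payload.color = bouncePayload.color;
}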


Self Promo

If you just want to download some source code, I'm providing it all in the form of a Unity Asset Store package. The raytraced reflections and raytraced shadows are both packaged with it, of course. I've also included a lot of helpful macros, for quickly making your own raytracing shaders.

Corgi Raytracing - Built-in Render Pipeline (Forward and Deferred!) | VFX Shaders | Unity Asset Store
my asset includes all of its source code, so it is easily modifiable

Further Reading

I highly recommend the following for further reading.
