At last, something exciting! This is a tool that, admittedly, might not have a huge amount of use, but it does have some fun features to play with. I’ve developed a set of nodes for generating, filling, lighting and rendering voxel grids in Nuke. You may also notice a new link above to a documentation page for the new nodes, as the tool includes quite a lot and could be confusing to use without a bit of an explanation.
The tool is available for download here on nukepedia.
A quick summary of the included nodes is:
- V_Grid: Generates a voxel grid.
- V_Noise: Fills a grid with 3D noise.
- V_Shape: Places an SDF volumetric shape into the grid.
- V_Erode: Shrinks or expands the volume.
- V_Average: Averages the density / colour data in the grid.
- V_Transform: Transforms the grid or the data.
- V_PtLight: A point light for lighting the volume.
- V_EnvLight: A light for mapping a latlong image around the grid, similar to IBL.
- V_Preview: A version of Nuke’s PositionToPoints for viewing the voxels in 3D space.
- V_Render: Renders the voxel grid using a given camera.
So, time for a quick run through of how it works:
Grids are started with the V_Grid node, where you can specify a bounding box and resolution for your voxels. This generates a square image with one pixel for each voxel, its RGB set to the voxel’s XYZ position. That position data is stored as a separate layer so that the main rgba channels are free to hold the colour and density data that fills the grid. However, the bounding box and resolution are still essential for interpreting the grid information, so to pass them down to further nodes we inject them into the metadata.
Now, whenever we need to calculate a voxel, we can easily query its position in space from the voxels layer, or reverse-engineer its 2D position using the resolution and bounding box data.
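As a rough illustration of that back-and-forth, here’s a Python sketch of mapping between a voxel’s 3D index and a 2D pixel. The actual layout V_Grid uses isn’t spelled out here, so the x-major wrapping into a near-square image below is an assumption, not the node’s real scheme:

```python
import math

def grid_image_size(res_x, res_y, res_z):
    """Smallest near-square image that can hold one pixel per voxel."""
    n = res_x * res_y * res_z
    w = math.ceil(math.sqrt(n))
    h = math.ceil(n / w)
    return w, h

def voxel_to_pixel(ix, iy, iz, res_x, res_y, res_z):
    """3D voxel index -> (px, py) pixel coordinate, x-major order."""
    w, _ = grid_image_size(res_x, res_y, res_z)
    i = ix + iy * res_x + iz * res_x * res_y
    return i % w, i // w

def pixel_to_voxel(px, py, res_x, res_y, res_z):
    """Inverse mapping: (px, py) -> (ix, iy, iz)."""
    w, _ = grid_image_size(res_x, res_y, res_z)
    i = py * w + px
    return (i % res_x, (i // res_x) % res_y, i // (res_x * res_y))
```

The two functions round-trip, which is all the renderer really needs: given a pixel it can recover which voxel it represents, and vice versa.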
It’s important to note, though, that although there is a transform node, it doesn’t change the pixels in the voxels layer. The reason for this is that most calculations are easiest when the box is axis-aligned and we know exactly what its bounds are. So how does the transform work? Instead, it injects a transformation matrix into the metadata, which we then invert and apply to any lights or cameras used. It may seem more complicated, but it is actually far more convenient and cost-efficient.
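A minimal sketch of the idea, using a toy transform of uniform scale plus translation as a stand-in for the full 4x4 matrix the node would store: sampling the “moved” grid at a world point is the same as sampling the untouched, axis-aligned grid at the inverse-transformed point.

```python
# Toy transform: (uniform scale, translation) — an illustrative stand-in
# for the matrix injected into the metadata.
def apply(xform, p):
    s, (tx, ty, tz) = xform
    x, y, z = p
    return (x * s + tx, y * s + ty, z * s + tz)

def inverse(xform):
    # Undo the translation, then the scale.
    s, (tx, ty, tz) = xform
    return (1.0 / s, (-tx / s, -ty / s, -tz / s))

# Instead of transforming every voxel by `xform`, keep the grid static
# and move each light/camera position through `inverse(xform)` first.
```

The saving is obvious from the shapes involved: one matrix applied to a handful of lights and cameras, instead of one matrix applied to every voxel in the grid.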
To fill in the grid, there are two main tools: Noise and Shape. The first is pretty self-explanatory: it fills the grid with 3D fractal noise. The second places a volumetric SDF (Signed Distance Field) into the grid. What this means is that it places a shape in the grid with a value of 0 on its surface, positive values increasing towards its center, and negative values decreasing outside the shape. Why negative values? Because by stacking noise and shapes, we can cause the noise to only appear in certain areas, but without forcing a harsh edge. The shape node has separate control over the positive and negative values, so it can be used for quite a lot of purposes.
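To make the sign convention concrete, here’s a sketch of a sphere SDF with the orientation described above (0 on the surface, positive inside), plus one plausible way the noise could be stacked against it; `shaped_density` is an illustrative helper of my own, not necessarily the node’s actual formula:

```python
import math

def sphere_sdf(p, center, radius):
    """0 on the surface, positive towards the center, negative outside."""
    return radius - math.dist(p, center)

def shaped_density(noise, sdf):
    """Outside the shape the negative SDF eats into the noise, so the
    density fades smoothly to zero rather than stopping at a hard edge."""
    return max(0.0, noise + min(sdf, 0.0))
```

Inside the shape the noise comes through at full strength; just outside, it is progressively reduced until nothing survives, which is exactly the soft-edged masking described above.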
A couple of tools were also included for applying simple changes to the data. Erode works just like the standard nuke erode tool, expanding or shrinking the volumes. Average, wait for it, averages! It looks at all the voxels in range and returns the average of the colour / density / both. However, it only calculates positive values to preserve volume, otherwise it would quickly dilute the edges. Transform can be used to manipulate the grid position as mentioned, or to move the data within the grid.
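The positive-only averaging can be sketched like this, assuming (my reading of the description above) that non-positive samples are simply excluded from the mean:

```python
def positive_average(values):
    """Average only the positive samples; pulling the negative exterior
    values into the mean would drag the volume's edges down and dilute them."""
    pos = [v for v in values if v > 0.0]
    return sum(pos) / len(pos) if pos else 0.0
```

Compare `positive_average([1.0, 2.0, -3.0])` with a plain mean of the same samples: the plain mean is zero, wiping out the volume, while the positive-only version keeps it intact.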
Two light types are currently included, though I intend to add a directional light soon (because it’s such a simple change to make, it’s been constantly put on the long finger). Point lights are pretty standard, and very useful for creating some really nice lighting effects. Each one calculates by firing a ray from the light to each voxel, reducing its transmittance as it passes through fog, scattering its light and blending it into a scatter colour.
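A common way to compute that attenuation is Beer-Lambert absorption; whether V_PtLight does exactly this isn’t stated, so treat the sketch below as an assumption about the general technique rather than the node’s code:

```python
import math

def light_transmittance(densities, step, absorption):
    """Transmittance remaining after a ray marches through `densities`
    samples spaced `step` apart: exp(-optical depth)."""
    optical_depth = sum(max(0.0, d) for d in densities) * step * absorption
    return math.exp(-optical_depth)
```

An empty path gives a transmittance of 1.0 (the voxel is fully lit), and the value decays exponentially as the ray accumulates fog.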
The environment light maps an image around the grid using latlong coordinates, and projects its colour into the grid based on the normals of the volume at each voxel. It similarly decreases its transmittance based on the absorption value. It can be used with HDRI images by enabling an option that uses the luminance of the image as its intensity. Finally, values are averaged over an area, as this gives a more accurate imitation of light bouncing.
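The latlong (equirectangular) lookup itself is standard; here’s a sketch of mapping a direction, such as the volume normal at a voxel, to UV coordinates on the image. Note the axis convention here is one common choice and may not match the tool’s:

```python
import math

def direction_to_latlong(d):
    """Unit direction -> (u, v) in [0, 1] on an equirectangular map."""
    x, y, z = d
    u = 0.5 + math.atan2(x, z) / (2.0 * math.pi)           # longitude
    v = 0.5 - math.asin(max(-1.0, min(1.0, y))) / math.pi  # latitude
    return u, v
```

So the +Z direction lands in the centre of the image, and straight up lands on the top edge.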
While there is only one render node, there is also a preview node which is useful for viewing the voxels in their correct 3D position in the viewer. This uses the somewhat temperamental PositionToPoints node to place either the density, colour or position data into the scene.
The render node simply fires a ray from each pixel and accumulates data until its transmittance expires, similar to the lighting. It even allows for adjustment of the accumulation at render time with a simple slider. The higher the sampling used per pixel, the more accurate the result, and values can be pushed quite high before becoming too costly.
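In sketch form, that’s classic front-to-back compositing; the early-out threshold below is my own illustrative choice for when the transmittance counts as “expired”:

```python
import math

def march(samples, step, absorption):
    """Accumulate (density, colour) samples front to back along a ray
    until the remaining transmittance is effectively spent."""
    transmittance = 1.0
    accumulated = 0.0
    for density, colour in samples:
        alpha = 1.0 - math.exp(-max(0.0, density) * absorption * step)
        accumulated += transmittance * alpha * colour
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:  # "expired" — stop marching this ray
            break
    return accumulated, transmittance
```

The early-out is what makes dense volumes cheap to render: once a ray is fully occluded, there’s no point sampling the voxels behind it.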
One useful thing to note is that if the density / colour data is not changing, the grid can be frame held so that rendering only has to worry about camera transforms, making it very quick to render.
And that’s all there is to it, a fairly straightforward method with some really nice control and results. Give it a play around and let me know what you think!