Alright, this is one that a couple of people have done for other programs that use an upwards Y-axis (Maya, Softimage, etc.), but this version works for an upwards Z-axis, for programs such as 3ds Max, Blender, etc.

The extra feature this offers is that it prevents camera rotations from flipping over: e.g., instead of jumping from 359° to 1°, it will now go from 359° to 361°. This makes a significant difference if you plan on using the camera for motion blur, as otherwise you will get a frame of extreme rotational blur. So, that’s what it does; let’s see how it works.

#### Installing

Download from here.

To install, simply drop the file in your plugin path and add the following lines to your menu.py:

```python
import createExrCam

# Adds the create exr cam script to the toolbar and adds a shortcut
nuke.menu( 'Nuke' ).addCommand( 'MatteHue/Create Camera from EXR', 'createExrCam.createExrCam()', "ctrl+alt+c")
```

You can change the menu name from MatteHue to whatever you’d prefer, but the convenience of adding it as a command is that you can create the camera just by selecting a node and pressing the hotkey, Ctrl+Alt+C.

#### Code Breakdown

The code is commented and broken into sections.

The setup information comes first and is pretty straightforward: it looks for the necessary information to set up the camera. Note that in some cases the metadata naming might not be exactly what this script expects, so if you’re having issues, check whether the naming in your EXRs matches by selecting the Read node, pasting the following code into your Script Editor and running it:

```python
node = nuke.selectedNode()
for k, v in node.metadata().iteritems():
    print k, v
# Note: in Python 3 builds of Nuke, use .items() and print(k, v) instead.
```

The only information we absolutely must have to create a camera is the aperture and the transform.
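
The same check can be sketched outside Nuke with a plain dictionary standing in for node.metadata() (the key names here, 'exr/cameraAperture' and 'exr/cameraTransform', are what this script expects; they can differ between renderers, so verify them against your own files):

```python
# Minimal sketch: verify the required metadata exists before building a camera.
# A plain dict stands in for node.metadata(); the key names are assumptions
# based on this script's expectations and may differ between renderers.
REQUIRED_KEYS = ['exr/cameraAperture', 'exr/cameraTransform']

def missing_camera_metadata(metadata):
    """Return the list of required keys absent from the metadata."""
    return [key for key in REQUIRED_KEYS if key not in metadata]

# Example: a file with a transform but no aperture.
metadata = {'exr/cameraTransform': [1, 0, 0, 0] * 4}
print(missing_camera_metadata(metadata))  # -> ['exr/cameraAperture']
```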

The second section simply gets the frame range of the file and asks the user to confirm how much of it they want to use for the camera.

The third section simply creates the camera and sets the necessary knobs.

Now it gets interesting. We loop over the frame range so that we can set the transformation matrix of the camera. For those who don’t know, this is a Matrix4, which means a list of 16 numbers commonly displayed in a 4×4 grid. They refer to the translation, rotation and scale of the object (though not in any recognizable order to those who don’t use them often!). On each iteration of the loop, we’ll simply check if the user has cancelled the calculation (nuke.ProgressTask handles all this nicely without us needing to do any custom dialogs), and if they have, we’ll keep or delete the camera as the user wishes.
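
To make that 4×4 layout concrete, here’s a plain-Python sketch (no Nuke required) of a transform stored as a flat list of 16 numbers. The row-major, translation-in-the-last-column layout used here is just one common convention; orderings differ between packages, which is part of why the script leans on Nuke’s built-in matrix methods rather than indexing by hand:

```python
# A 4x4 transform as a flat list of 16 numbers (row-major here; the
# ordering convention varies between packages, so treat this as a sketch).
def identity4():
    # Diagonal elements of a flat 4x4 sit at indices 0, 5, 10, 15.
    return [1.0 if i % 5 == 0 else 0.0 for i in range(16)]

def set_translation(m, x, y, z):
    """Place a translation in the last column of a row-major 4x4 matrix."""
    m[3], m[7], m[11] = x, y, z
    return m

def get_translation(m):
    return (m[3], m[7], m[11])

m = set_translation(identity4(), 1.0, 2.0, 3.0)
print(get_translation(m))  # -> (1.0, 2.0, 3.0)
```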

```python
# Get horizontal aperture and FOV, calculate focal
val = node.metadata( 'exr/cameraAperture', frame)
fov = node.metadata( 'exr/cameraFov', frame)
focal = val / (2 * math.tan(math.radians(fov)/2.0))
cam['focal'].setValueAt(float(focal),frame)
cam['haperture'].setValueAt(float(val),frame)
```

So, how do we figure out our camera information from just the metadata? We need to know the focal length, which sadly isn’t in the metadata. However, the field of view (FOV) and aperture can be used to calculate it. I won’t pretend I did the math on this; you can find the formula online and it works great! Then we just set the focal length and the horizontal aperture (for those who aren’t too familiar with cameras, we only need the horizontal aperture, as the vertical is handled by the camera’s aspect ratio).
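
As a quick sanity check of that formula (the numbers here are illustrative, not from the script): a classic 36 mm horizontal aperture with a horizontal FOV of about 54.43° should land at roughly a 35 mm focal length:

```python
import math

def focal_from_fov(aperture, fov_degrees):
    """focal = aperture / (2 * tan(fov / 2)) -- the same formula the script uses."""
    return aperture / (2.0 * math.tan(math.radians(fov_degrees) / 2.0))

# A 36mm aperture and a ~54.43 degree horizontal FOV give roughly a 35mm focal.
print(round(focal_from_fov(36.0, 54.43), 1))
```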

Next comes the hard part. We pull the camera matrix for the current frame and plug it into a Nuke Matrix4. The translation information is nice and straightforward; we can take that out and set it correctly later. The rotations are what will cause us some problems, because the matrix stores them as a 3×3 rotation sub-matrix rather than Euler angles (the handy 0–360° we all know and love). That form is much faster for computers to work with, but a little trickier for us to understand… and by a little, I mean you’ll almost certainly never look at a rotation matrix and know which way it’s facing. To get around this, we’ll use some built-in functions to re-orient it the way we need.

```python
# flip rotation axis
matrixCreated.rotateX(math.radians(90))
matrixCreated.rotationOnly()
invMatrix = matrixCreated.inverse()
rotate = invMatrix.rotationsZXY()
eulerRot = [float(math.degrees(rotate[0])),
            180.0 - float(math.degrees(rotate[2])),
            180.0 - float(math.degrees(rotate[1]))]
```

First, we rotate the matrix 90° around the x-axis (for the unfamiliar, we convert 90° to radians, which are again a more computer-friendly way of handling angles). matrixCreated.rotationOnly() keeps only the parts of the matrix relevant to rotations and strips the rest. We can then invert it and pull the rotations out in radians with rotationsZXY(). Note that when we convert them to degrees, we’re also swapping the y and z values, as well as inverting them by deducting them from 180°. This gives us the correct Euler angles to put back into the camera.

**Rotation Correction**

However, this is where we add the extra feature: the rotational correction. The metadata might not store rotations in the format we’d expect, using, say, 359° and -1° interchangeably. It’s the same position, but when calculating motion blur, the camera will think it has done a full rotation and produce a full radial blur. To combat this, we store the previous frame’s rotations and compare them to the current ones in the following way.

```python
(math.ceil(lastRot[i]/360.0) - math.ceil(eulerRot[i]/360.0))
```

Divide each rotation by 360 and round up to find which 360° range it’s in: anything from just above 0° up to 360° gives 1, from just above 360° up to 720° gives 2, and so on. Then subtract the current frame’s value from the last frame’s to see whether the rotation has crossed into a new range since the last frame. The difference is clamped between -1 and 1 so that it won’t flip more than one full rotation. If there is a difference, multiply it by 360° and add it to the current rotation (if the difference is negative, this naturally subtracts instead). Now we can check whether it’s closer to the last frame’s value by seeing if it’s less than 180° away. If it is, set this as the current rotation.

```python
temp = eulerRot[i] + 360 * difference
if abs(lastRot[i] - temp) < 180:
    eulerRot[i] = temp
```

Alternatively, it could still be in the same range but make a jump greater than 180°, e.g., going from 359° to 1°. Really, this should be 359° to 361°, so to counter that, we simply subtract one from the other, and if the jump is greater than 180°, we add on the difference.
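
Putting both corrections together, the per-axis logic can be sketched as one small standalone function (pure Python, no Nuke; the function name and exact structure are mine, mirroring the description above rather than the script’s exact code):

```python
import math

def unwrap(last, current):
    """Keep `current` continuous with `last` by adding whole turns.

    First correction: if the value crossed into a neighbouring 360-degree
    range, shift it back when that keeps it within 180 degrees of `last`.
    Second correction: catch jumps over 180 degrees within the same range,
    e.g. 359 -> 1 becomes 359 -> 361.
    """
    difference = math.ceil(last / 360.0) - math.ceil(current / 360.0)
    difference = max(-1, min(1, difference))  # clamp to one full rotation
    temp = current + 360 * difference
    if abs(last - temp) < 180:
        current = temp
    elif abs(last - current) > 180:
        # Same range, but a jump bigger than 180 degrees.
        current += 360 * round((last - current) / 360.0)
    return current

print(unwrap(359.0, 1.0))  # -> 361.0
print(unwrap(1.0, 359.0))  # -> -1.0
```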

Finally, we set all the matrix values for translate and rotate. Note that here we simply swap the z and y values and invert the old y (now z) axis to get the correct result. Much easier than playing with rotations! Update the progress of the camera creation and voilà, one camera creation script.
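
The translate swap from that last step can be shown on its own; this follows the description above literally (swap y and z, negate the old y), and the function name is mine, not the script’s:

```python
def swap_y_and_z(x, y, z):
    """Swap the y and z values and negate the old y (now z), as described above."""
    return (x, z, -y)

# Purely the mechanics of the swap: y=1 ends up on the z-axis, negated.
print(swap_y_and_z(0.0, 1.0, 0.0))  # -> (0.0, 0.0, -1.0)
```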

From here, it’s actually easier to backtrack and make the camera creation work for an upwards Y-axis as you can disregard all the awkward inversions we needed to make. Feel free to attempt it yourselves, but I’ll be updating the script in the near future to allow users to pick which they’d prefer. Til then, happy nuking!

Hi Matthew, thank you for the awesome script, it has been a real time-saver for me for a long time.

I had to stop using it, though, after apparent changes in the way V-Ray writes EXRs now.

I am using 3ds Max 2018 and V-Ray 3.6/Next (the problem is the same in both versions). There is something wrong with the camera rotation; position, though, works flawlessly. Please let me know if you could take a look at the example EXR sequence and compare the EXR camera with one I exported from 3ds Max using Duber’s script (NukeOps).

Thank you in advance!
