Ever since I discovered that Nuke was concatenating Transform nodes for me, I liked to break my transformations out into multiple nodes to give me a bit more flexibility. I would do a stabilize, some keyframe animation, a scale-up, a rotation, a matchmove, and a re-scale, all in different nodes. Nuke was smart about it, and because I knew how not to break concatenation, quality was never an issue. However, one cold dark afternoon in late 2013, I was asked to provide a SINGLE Transform node to the CG department (they had a way to add it to their 3D camera). My move was built from a dozen nodes; I needed a way to reduce that to one.

### The Bad Way

Giving up on the idea of calculating it all manually, I set off to write a Python script to calculate a merged Transform node for me, using the math that I knew: trigonometry.

This resulted in an extremely heavy script, with some ridiculous calculations being done. Here is a little snippet of just the math:

```python
# Let's do our math:
# I rotate a point (I arbitrarily chose the top right corner: width, height)
# around the A center point, by the rotation value of A.
# The math to rotate a point is:
# newX = cos(rotation) * (position.x - center.x) - sin(rotation) * (position.y - center.y) + center.x
# newY = sin(rotation) * (position.x - center.x) + cos(rotation) * (position.y - center.y) + center.y
# I also then multiply this by my scale
coord_rotA_cenA_x = (cos(radians(rotateA)) * (width - centerA_x) - sin(radians(rotateA)) * (height - centerA_y) + centerA_x) * scaleA
coord_rotA_cenA_y = (sin(radians(rotateA)) * (width - centerA_x) + cos(radians(rotateA)) * (height - centerA_y) + centerA_y) * scaleA

# I rotate the same point (so again width/height), but this time around the
# center point of the second node.
coord_rotA_cenB_x = (cos(radians(rotateA)) * (width - centerB_x) - sin(radians(rotateA)) * (height - centerB_y) + centerB_x) * scaleA
coord_rotA_cenB_y = (sin(radians(rotateA)) * (width - centerB_x) + cos(radians(rotateA)) * (height - centerB_y) + centerB_y) * scaleA

# Now let's find the difference between these two
# (we want to cancel that difference via an offset later on)
diff_coordA_x = coord_rotA_cenA_x - coord_rotA_cenB_x
diff_coordA_y = coord_rotA_cenA_y - coord_rotA_cenB_y

# The offset needed to match the transformations of A is the difference
# between our 2 rotations + the center * (scale - 1) + translateA
offsetA_x = diff_coordA_x + ((centerB_x - centerA_x) * (scaleA - 1)) + translateA_x
offsetA_y = diff_coordA_y + ((centerB_y - centerA_y) * (scaleA - 1)) + translateA_y

# Let's do the same with the transform values of B, but this time instead of
# using height and width, we need to use the new coordinates of that point:
coord_rotB_cenB_x = (cos(radians(rotateB)) * (coord_rotA_cenA_x - centerB_x) - sin(radians(rotateB)) * (coord_rotA_cenA_y - centerB_y) + centerB_x) * scaleB
coord_rotB_cenB_y = (sin(radians(rotateB)) * (coord_rotA_cenA_x - centerB_x) + cos(radians(rotateB)) * (coord_rotA_cenA_y - centerB_y) + centerB_y) * scaleB

# Let's offset our center:
centerN_x = centerB_x + offsetA_x
centerN_y = centerB_y + offsetA_y

# Same thing, but offsetting our offset center
coord_rotB_cenN_x = (cos(radians(rotateB)) * (coord_rotA_cenA_x - centerN_x) - sin(radians(rotateB)) * (coord_rotA_cenA_y - centerN_y) + centerN_x) * scaleB
coord_rotB_cenN_y = (sin(radians(rotateB)) * (coord_rotA_cenA_x - centerN_x) + cos(radians(rotateB)) * (coord_rotA_cenA_y - centerN_y) + centerN_y) * scaleB

# Difference between rotation center B and N
diff_coordB_x = coord_rotB_cenB_x - coord_rotB_cenN_x
diff_coordB_y = coord_rotB_cenB_y - coord_rotB_cenN_y

# B offset
offsetB_x = diff_coordB_x + (offsetA_x * (scaleB - 1)) + translateB_x
offsetB_y = diff_coordB_y + (offsetA_y * (scaleB - 1)) + translateB_y

# Now that we have calculated our variables, it's time to assign everything
# to the new Transform. First calculate the actual values:
new_translate_x = offsetA_x + offsetB_x
new_translate_y = offsetA_y + offsetB_y
new_rotate = rotateA + rotateB
new_scale = scaleA * scaleB
```

I don’t know if it’s clear to you, but it’s not to me. Every time I re-read this code, I get a headache for a few minutes until I remember how it worked.

The full script worked, but it was flawed: it was quite slow to execute, would frequently crash Nuke when run on too many transforms at once, and only supported translation, rotation, and uniform scale.

It did the job though, and I surprised myself every few weeks from that day on by having to use it again and again in different situations. New problems arose when coworkers started wanting to use the script. It had never been meant for release, and even running it required a bit of Python typing for the frame range.

### The Better Way

Finally, a few weeks ago, I had enough. That script deserved a proper release. I went back to the drawing board, and decided to re-implement the script using a different method: Matrices.

I didn’t know how to use matrices in 2013, but having completed Khan Academy’s Matrix course a couple of months ago, I felt ready.

With matrices, the math above was greatly reduced. Here is the actual math in the new function, achieving the same as the previous snippet:

```python
current_matrix = transform_matrix * current_matrix
```

Now that, I can wrap my head around.
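To see why a single matrix multiply replaces all the trig, here is a minimal, pure-Python sketch (no Nuke required, and not the script's actual code; all names are my own). It builds a 3x3 homogeneous matrix per node, mirroring the rotate-around-center-then-scale-then-translate model of the trigonometry snippet above, and chains two nodes with one multiplication:

```python
import math

def mat_mul(a, b):
    """Multiply two 3x3 matrices (row-major nested lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def node_matrix(tx, ty, rot_deg, scale, cx, cy):
    """Hypothetical helper: 3x3 matrix for one node, equivalent to
    p' = scale * (rotate_about_center(p) ) + translate, as in the trig snippet."""
    r = math.radians(rot_deg)
    translate = [[1, 0, tx], [0, 1, ty], [0, 0, 1]]
    scale_m = [[scale, 0, 0], [0, scale, 0], [0, 0, 1]]
    to_center = [[1, 0, cx], [0, 1, cy], [0, 0, 1]]
    rotate = [[math.cos(r), -math.sin(r), 0],
              [math.sin(r), math.cos(r), 0],
              [0, 0, 1]]
    from_center = [[1, 0, -cx], [0, 1, -cy], [0, 0, 1]]
    m = mat_mul(rotate, from_center)    # move to origin, rotate
    m = mat_mul(to_center, m)           # move back
    m = mat_mul(scale_m, m)             # scale
    return mat_mul(translate, m)        # translate

def apply_point(m, x, y):
    """Apply a 3x3 affine matrix to a 2D point."""
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# Two nodes collapse into one matrix: current_matrix = transform_matrix * current_matrix
node_a = node_matrix(10, 5, 30, 1.5, 100, 80)
node_b = node_matrix(-4, 12, -15, 0.8, 20, 60)
combined = mat_mul(node_b, node_a)
```

Applying `combined` to any point gives the same result as running the point through node A's trig formulas and then node B's, which is exactly what the loop in the new script accumulates.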

Using matrices had other advantages too: Nuke's Transform node creates the matrix for you, and you can access it directly in Python.

It also had challenges:

- I wanted to output a Transform node, not a Matrix, but there is no way to specify a transform’s matrix directly.
- I wanted to support CornerPin nodes, but while these calculate a matrix internally, they do not expose it via Python. (Please join me in my feature request to The Foundry.)
- Transforms do not support perspective (cornerpin) transformations, so I needed a way to output a new CornerPin if necessary.

The first one was the hardest part. While trying to go back from a matrix to Translate, Rotate, Scale, Skew, and Center parameters, I found out that multiple combinations of these parameters could result in the same matrix, and that technically, rotation is a combination of scale and skew. I eventually found what I needed, which is called a QR decomposition. Its Wikipedia page seems very complete, but I couldn't personally comprehend it well enough to turn it into code. However, as always, the web is filled with people smarter than me, and this time I found a post by Frederic Wang which presented the math a little more clearly. His algorithm gave me almost the right result, and some trial-and-error changes got me to a 1:1 match. I can now handle every possible case using Translate, Rotate, ScaleX, ScaleY, and SkewX. The result might have different values in some fields than what you would get by calculating manually, but it's all accurate. I'm basically solving an equation, and since I had too many unknowns, I manually set SkewY to 0.
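As an illustration, here is a minimal sketch of that style of decomposition (not the script's actual code; the function name is mine). It assumes a 2D affine matrix whose linear part is [[a, c], [b, d]], a positive determinant (no negative-scale flips), and SkewY fixed at 0:

```python
import math

def decompose_matrix(a, b, c, d, tx, ty):
    """Split the affine matrix [[a, c, tx], [b, d, ty]] into
    translate, rotate, scaleX, scaleY, skewX (skewY fixed at 0).
    Assumes a positive determinant, i.e. no flips."""
    rotation = math.degrees(math.atan2(b, a))  # rotation hidden in the first column
    scale_x = math.hypot(a, b)                 # length of the first column
    det = a * d - b * c                        # det == scale_x * scale_y
    scale_y = det / scale_x
    # tan(skewX) == (a*c + b*d) / det, valid while det > 0
    skew_x = math.degrees(math.atan2(a * c + b * d, det))
    return tx, ty, rotation, scale_x, scale_y, skew_x
```

Re-composing rotate, then skewX, then scale (plus the translation) reproduces the original matrix exactly, which is the 1:1 match the script needs.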

I thought the second point would be easier (and it was), but it still proved a bit more challenging than expected. I had a solution in mind which didn't work, and ended up recycling code from Ivan Busquets and Magno Borgo (who himself was adapting Ivan, Pete O'Connell, and myself). All of these would only calculate the matrix from the 4 corners, though, ignoring the extra matrix and the invert checkbox, so I added support for those. I tried for a bit to support the option to disable some corners, but gave up since I never personally use those functions and it was more work than I was willing to do.

For the third point, I could have easily stuffed my result matrix into the CornerPin's extra_matrix (which is an option in my script), but I wanted a proper, editable CornerPin, so I made a function that projects the 4 corners through my result matrix and puts the results in the 'to' knobs. (Oh, did I mention that the Transform's matrix knob is a **Transform2d_Knob**, while the CornerPin's transform_matrix knob is an **IArray_Knob**, and that these two work in very different ways? Well done, The Foundry, well done.)
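A minimal sketch of that corner projection, assuming a row-major 3x3 homogeneous result matrix (the helper name is hypothetical, not a Nuke API call):

```python
def corners_through_matrix(m, width, height):
    """Hypothetical helper: push the four image corners through a 3x3
    matrix (row-major), with perspective divide, giving the values to
    put into a CornerPin's 'to1'..'to4' knobs."""
    corners = [(0.0, 0.0), (width, 0.0), (width, height), (0.0, height)]
    projected = []
    for x, y in corners:
        w = m[2][0] * x + m[2][1] * y + m[2][2]
        projected.append(((m[0][0] * x + m[0][1] * y + m[0][2]) / w,
                          (m[1][0] * x + m[1][1] * y + m[1][2]) / w))
    return projected
```

Because the 'to' points are plain 2D coordinates, the resulting CornerPin stays fully editable by hand, unlike a matrix baked into extra_matrix.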

The rest was cleanup: making a little user interface for the functions, printing some cool little matrices while the code runs, adding progress bars, etc.

To download the full final script: Nukepedia

Akel (November 15, 2017 at 12:02 pm): Interesting!! But it's a really good habit to separate transforms.

Erwan Leroy (November 15, 2017 at 1:03 pm): Sure, it's great to separate transforms, but it also becomes annoying if you need to invert a whole group of transforms, or if you want to use the result of all the transforms directly in a Roto node or another node that has its own transform knobs.

daniel (January 3, 2018 at 7:09 pm): Yes, this is very handy when a track has to be offset, or when two trackers are used to lock down a stabilize and we then need to invert or retime that data somehow. Inverting and/or retiming multiple transforms is a pain and error-prone.

Graham (September 14, 2018 at 12:37 am): So I downloaded this in the hope of merging my transforms while forgetting a key detail… they're TransformGeos… any way of doing that? It's one transform with animation and the other is just a position offset transform (I think).

Erwan Leroy (September 14, 2018 at 12:55 am): The same method can pretty much be applied, as the TransformGeo and the Axis both have matrices as well. Nuke's Matrix4 object actually has methods for converting back into translate/rotate/scale, so it's even easier. There are some examples on the Nuke forum if you search for exporting cameras.