Gears of Washroom – Pt 6

Last post was all about materials, this time around I’ll be talking rendering settings and lighting.

Rendering choices

This being one of my first renders in Houdini, I made lots of mistakes, and probably some poor decisions in the final render.

I experimented a bit with Renderman in Houdini, but after taking quite some time to get it set up properly, enabling all the not-so-obvious settings for subdivision, etc., I decided this probably wasn’t the project for it.

I ended up using Mantra Physically Based Rendering, and chose to render at 1080p, 48 fps. Well… I actually rendered at 60 fps, but realized that I didn’t like the timing very much when I’d finished, and 48 fps looked better.
This is something I should have caught a lot earlier 🙂

Scene Lighting

I wanted two light sources: an area light in the roof, and the explosion itself.
Both of these I just dumped in from the shelf tools.

The explosion lighting is generated from a Volume Light, which I pointed at my Pyro sim.

I was having quite a lot of flickering from the volume light, though.
I suspected it was happening where the volume got too close to the walls, and that the volume itself was probably a bit too rough.

To solve this, I messed around with the volume a bit before using it as a light source:

VolumeLight

So I import the sim, drop the resolution, blur it, then fade it out near the wall with a wrangle:

VolumeTrim

For the sake of the gif, I split the wall fading and density drop into separate steps, but I’m doing both those things at once in the Wrangle:

@density -= 0.12;
float scale = fit(@P.x, -75, -40, 0, .2);
@density *= scale;

So between an X value of -75 (just in front of the wall) and -40, I’m fading the volume density scale up from 0 to 0.2.

After that, I had no issues with flickering, and the volume lighting looked the way I wanted it!

VolumeLightFrame.png

Render time!

I think that covers everything!

Some stats, in case you’re interested:

  • Explosion fluid and pyro took 3 hours to sim
  • Close-up bubbling fluid took about 1 hour to sim
  • Miscellaneous other RBD sims, caches, etc, about 2 hours
  • 184 GB of simulation and cache data for the scene
  • Frame render times between 10–25 minutes each.
  • Full animation took about 154 hours to render.
    Plus probably another 40–50 hours of mistakes.
  • 12 GB of rendered frames

My PC is an i7-5930K with an NVIDIA GeForce GTX 970.

Hopefully I’ve covered everything that people might be interested in, but if there’s anything I’ve glossed over, feel free to ask questions in the comments 🙂


Gears of Washroom – Pt 5

Last post I went through all the setup for the bubble sim, now for lighting, rendering, materials, fun stuff!

Scene materials

I talked about the texture creation in the first post, but there are also quite a lot of materials in the scene that are just procedural Houdini PBR materials.

Materials.png

Most of these are not very exciting: they are either straight out of the material palette, or only lightly modified from those samples.

The top four are a little more interesting, though (purplePaint, whiteWalls, wood and floorTiles), because they have some material effects that are driven by the simulation data in the scene.

If you squint, you might notice that the walls and wood shelf get wet after the grenades explode, and there are scorch marks left on the walls as well.

Here is a shot with the smoke turned off, to make these effects obvious:

WetAndScorched.png

Scorch setup

To create the scorch marks in a material, I first needed some volume data to feed it.
I could read the current temperature of the simulation, but that dissipates over a few frames, so the scorch marks would also disappear.

The solution I came up with was to generate a new low resolution volume that keeps track of the maximum value of temperature per voxel, over the life of the simulation.

PyroMaxTemp

To start out with, I import the temperature field from the full Pyro sim; here is a visualization of that from about two-thirds of the way through the sim:

FullSimSmoke

I only need the back half of that, and I’m happy for it to be low resolution, so I resample and blur it:

SimplifiedSmoke

Great! That’s one frame of temperature data, but I want the maximum temperature that we’ve had in each voxel so far.

The easiest way I could think of doing this was using a solver, and merging the current frame volume with the volume from the previous frame, using a volume merge set to “Maximum”:

VolumeMaxSolver

And the result I get from this:

SimplifiedSmokeMax

So that’s the accumulated max temperature of the volume from the current frame, and all the frames before it!
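
If you’d rather do this in VEX than with the Volume Merge, the same max can be a one-line volume wrangle inside the solver. A minimal sketch, assuming the current frame’s temperature volume is wired into the wrangle’s second input and the volume primitive is named "temperature":

// Runs over the accumulated volume from the previous frame;
// input 1 carries the current frame's temperature field.
f@temperature = max(f@temperature, volumesample(1, "temperature", @P));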

Scorch in material

Back in the whiteWalls material, I need to read in this volume data, and use it to create the scorch mark.

Here is an overview of the white walls material:

whiteWallsMaterial.png

Both the wetness and scorch effects are only modifying two parameters: Roughness and Base Colour. Both effects darken the base colour of the material, but the scorch makes the material more rough and the wetness less rough.

For example, the material has a roughness of 0.55 when not modified, 0.92 when scorched and 0.043 when fully wet.
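
In rough wrangle terms, the blending boils down to something like this sketch (scorch and wet stand for the 0–1 masks described below, and the darkening amount is illustrative):

float scorch = 0.0;  // 0-1 mask from the burnScorch subnet
float wet = 0.0;     // 0-1 mask from the wetness subnet
vector baseColour = {0.85, 0.85, 0.85};

float rough = 0.55;                 // unmodified roughness
rough = lerp(rough, 0.92, scorch);  // scorch makes the surface rougher
rough = lerp(rough, 0.043, wet);    // wetness makes it much glossier

// Both effects darken the base colour
baseColour *= 1 - 0.5 * max(scorch, wet);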

The burnScorch subnet over on the left exposes a few different outputs; these are all just different types of noise that get blended together. I probably could have just output one value, and kept the Scorch network box in the above screenshot a lot simpler.

Anyway, diving in to the burnScorch subnet:

BurnScorchSubnet.png

One thing I should mention straight up: You’ll notice that the filename for the volume sample is exposed as a subnet input. I was getting errors if I didn’t do that, not entirely sure why!

The position attribute in the Material context is not in world space, so you’ll notice I’m doing a Transform on it, which transforms from “Current” to “World”.
If you don’t do that, and just use the volume sample straight up, you’ll have noise that crawls across the scene as the camera moves.
I found that out the hard way, 10 hours of rendering later.

Anyway, I’m sampling the maximum temperature volume that I saved out previously, and fitting it into a few different value ranges, then feeding those values into the Position (and in one case Frequency) of some turbulence noise nodes.
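
A shading-context VEX sketch of that sampling, with a placeholder file path and made-up fit range:

// Transform the shading position from current to world space
vector worldP = ptransform("space:current", "space:world", P);
// Sample the first volume primitive in the cached max-temperature file
float maxTemp = volumesamplefile("$HIP/geo/maxTemp.bgeo.sc", 0, worldP);
// Remap to 0-1, ready to drive the noise Position / Frequency inputs
float scorch = fit(maxTemp, 0.5, 3.0, 0.0, 1.0);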

The frequency one is interesting, because it was totally a mistake, but it gave me a cool swirly pattern:

SwirlyNoise.png

When combined with all the other noise, I really liked the swirls, so it was a happy accident 🙂

That’s really it for the scorch marks! Just messing about with different noise combinations until I liked the look.

I made it work for the white walls first, then copied it into the purple walls and wood materials.

Wetness setup

Similar concept to what I did for the temperature, I wanted to work out which surfaces had come in contact with water, and save out that information for use in the material.

WetnessSetup

On the left side, I import the scene geometry, and scatter points on it (density didn’t matter to me too much, because I’m breaking up the data with noise in the material anyway):

WetnessPoints

The points are coloured black.

On the right side, I import the fluid, and colour the points white:

WetnessPointsSim

Then I transfer the colour from the fluid points onto the scatter points, and that gives me the points in the current frame that are wet!

As before, I’m using a solver to get the wetness from the previous frame, and max it with the current frame.

WrangleWetness

In this case, I’m doing it just on the red channel, because it means wetness from the current frame is white, and from the previous accumulated frames is red.
It just makes it nice to visualize:

WetnessSolver

I delete all the points that are black, and then cache out the remaining points, ready to use in the material!
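
For reference, the max-accumulate step inside the solver can be a tiny point wrangle. A sketch, with the previous frame in input 0 and the freshly colour-transferred points in input 1:

// Keep whichever is wetter: last frame's accumulated value (red),
// or this frame's colour transfer result (white)
vector curr = point(1, "Cd", @ptnum);
@Cd.r = max(@Cd.r, curr.x);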

Wetness in material

I showed the high level material with the wetness before, here are the internals of the subnet_wetness:

subnet_wetness.png

So I’m opening the wetness point file, finding all points around the current shading point (which has been transformed into world space, like before).
For all wetness points that are within a radius of 7 centimetres, I get the distance between the wetness point and the current shading point, and use that to weight the red channel of the colour of that point.
I average this for all the points that were in the search radius.

In the loop, you’ll notice I’m adding up a count variable, but I worked out later that I could have used Point Cloud Num Found instead of doing my own count. Oh well 🙂
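
In VEX terms, the sampling loop is roughly this (the file path, max point count and exact falloff are approximations of what the VOPs are doing):

// Shading position in world space, as mentioned above
vector worldP = ptransform("space:current", "space:world", P);

float radius = 7;  // 7 centimetres, assuming the scene works in cm
int handle = pcopen("$HIP/geo/wetnessPoints.bgeo.sc", "P", worldP, radius, 64);

float wetness = 0;
while (pciterate(handle)) {
    vector cd; float dist;
    pcimport(handle, "Cd", cd);
    pcimport(handle, "point.distance", dist);
    wetness += cd.x * (1 - dist / radius);  // closer points contribute more
}

// Average over the points found (instead of counting by hand)
int found = pcnumfound(handle);
if (found > 0) wetness /= found;
pcclose(handle);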

I take the sampled wetness, and feed it into a noise node, and then I’m basically done!

If you want an idea of what the point sampled wetness looks like before feeding it through noise, here is what it looks like if I bypass the noise and feed it straight into baseColour for the white walls (white is wet, black is dry):

WetnessPointSample.png

Next up, Mantra rendering setup and lighting, should be a rather short post to wrap up with 🙂

Gears of Washroom – Pt 4

Last post was all about Pyro and FLIP preparation for the explosion.
I’m going to go off on a bit of a tangent, and talk about the bubbling fluid sim at the start of the animation!

16.5 to the rescue

Houdini 16.5 came out just as I was starting to build this scene, and they added air incompressibility to FLIP.

I’d been doing some experiments trying to make bubbling mud in a separate scene:

FlippedOut

As you can probably tell, I didn’t have a great deal of luck 😉

Fluid spawning setup

BubbleFluid

The network starts with importing the glass sections of the grenades.

There are three outputs: the geometry to spawn fluid from, a volume used for collision, and volumes for fluid sinks.

The fluid spawn and collision are pretty straightforward: I’m using the Peak SOP to shrink the glass down, then the Fluid Source SOP to output an SDF for the collision:

CollisionVolume.png

^ The collision volume is pretty 🙂

The sink volumes are a little more interesting.
The Copy SOP is copying onto some points I’ve spawned.
The points are scattered on the bottom part of the glass:

CopyTemplate

The geometry that is getting copied onto those points is a sphere that scales up and down over a certain frame range.

In the Copy SOP I’m stamping the copy id for each point, and the sphere is scaled using an attribute wrangle:

float lifeSpan = 220;   // frames per spawn cycle
float maxScale = 0.9;
float minScale = 0.4;
// Random phase for this copy within the cycle, from the stamped copy id
float randParticle = @copyID+(rand(@copyID)*.3);
float bubbleLife = 0.06;  // fraction of the cycle a bubble is scaled up

// How far through the current cycle we are, normalized to 0-1
float zeroToOneTime = (@Frame%lifeSpan)/lifeSpan;
// Distance between the current time and this copy's phase
float distanceFromTimePoint = abs(zeroToOneTime - randParticle);
float dftpZeroOne = fit(distanceFromTimePoint, 0, bubbleLife, 0, 1);

// Scale peaks when the time matches the phase, falling off at the edges
float scale = cos(dftpZeroOne * ($PI/2));
if (scale > 0) scale += minScale;
@P *= scale * maxScale;

This ended up being far more complicated than it needed to be; I was randomly adding to it until I got something I liked 🙂

I could strip most of the lifespan stuff out entirely: I was originally using that so that each point could spawn spheres multiple times, but that ended up being too aggressive.

Anyway, for each point, there is a range of frames within the lifespan where the sphere is scaled up and down.

With a low lifespan, this is what I’m getting:

Bubbles

The spheres get converted to a sink volume, which is used to allow fluid to escape the sim.
Where the fluid escapes, bubbles are created!

This is another case where I used the shelf tools for sink volumes in another scene, had a look at what it produced, then recreated it here.
I really recommend doing that sort of thing, it can really help with iteration time when your scenes get complicated!

Fluid sim and trim

BubbleFlip

The flip solver is pretty straightforward!

The collision volumes are imported as source volumes, and passed in to velocity and post solve sourcing (again, I worked this setup out with shelf tools).

Air incompressibility is just a checkbox option:

BubbleFlipAir

That’s it for the sim, on to the surfacing network:

fluidCloseSurface

I had some issues when solving on lower settings, where I’d get a lot of fluid leaking out of the container.
To solve this, I’m importing the glass into this network, then converting to a VDB:

TrimVDB

Right under that, I use an attribute wrangle to copy “surface” from the VDB to the points, and I use that surface value to delete any outside points.
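
The wrangle is tiny; a sketch of it, with the glass VDB wired into the second input (an SDF, positive outside the surface):

// Copy the SDF value from the glass VDB onto each fluid point
f@surface = volumesample(1, 0, @P);
// Positive means the point is outside the glass, so drop it
if (f@surface > 0) removepoint(0, @ptnum);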

Here’s a mockup example (since my final sim only had very minor leaking), where I’ve manually moved a bunch of points outside the glass, and I’m colouring them with the surface value:

VDBCull

Now time to surface the fluid.

Usually I would just use the Particle Fluid Surface SOP to do this, but I tried a number of approaches with it, and was always either losing the internal bubbles or not getting the results I wanted. So I built the surface with some of the nodes that the Particle Fluid Surface SOP uses internally.

First of all, VDB from Particles, here is a cutaway of that:

VDBFromParticles

The surface is pretty rough, though!
I smooth it out with a VDB Smooth SDF:

VDBSmooth1

I didn’t want to smooth out the interior any further, but the exterior is still too rough.
Similar to how I was deleting points before, I use the glass to create another VDB, and I use that as the mask for a second VDB smooth:

VDBSmooth2

And that result, I was happy with!

You might have noticed by now that I’ve only been simulating one grenade in the fluid bubble sim.
I figured that since I was using heavy depth of field, I could get away with using the same fluid sim for all of the grenades, and just copy it onto each grenade:

CopyStampSim.png

To make the duplication less obvious, I ran the fluid sim a bit over 60 frames longer than I needed, and used a Time Shift on the 2nd and 3rd grenades to offset the simulation, so the same bubbles aren’t coming up at exactly the same time.

The copy node stamps the GrenadeID, which the timeshift node uses:

TimeShiftStamp
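
In expression form, the Time Shift’s frame parameter looks something like this (the node path and per-grenade offset here are illustrative):

$F - (stamp("../copy3/", "GrenadeID", 0) * 20)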

And now we have bubbling grenade water sims!

RemeshedOffsetSims.png

(Re-meshed in this shot, because the render mesh is 1 million polygons :))

Still to come in this series of posts: Mantra rendering setup, lighting, materials, fun with scorching and wetting walls!

Gears of Washroom – Pt 3

Last post I talked about some of the RBD prep for the grenade explosions, now time to talk about Pyro and Flip prep!

Every Sim in its right place

All of the simulations (RBD, Pyro and FLIP) live in the same DOP network:

AutoDop_fluid

The network itself isn’t terribly exciting; it was mostly built by testing out shelf tools in other files, then reconstructing them to try and understand them a bit better. I might dive into it a little later in this blog series.

Now, on to the networks that feed this beast!

Explosion fuel and force

FuelAndForce

In this network, I’m importing the grenade geometry and using it as a basis for generating fuel for the Pyro simulation, and also forces that affect the FLIP and RBD sims.
Sorry about the sloppy naming and layout of this one, it’s not as readable as it should be.

Splodey time!

Early in the grenades network, I created a primitive attribute called “splodeyTime”, set about 10 frames apart for each grenade.
This attribute drives the timing of all of the explosion related effects.

For example, I use it to set the active time for the RBD pieces in an attribute wrangle:

if (@Frame > (@splodeyTime-2)) i@active = 1;

You can also see it in the first loop in my Explosion Fuel and Force network:

FuelAndForce_foreach1

Here, I’m doing some promotion tricks similar to the last post:

  • Iterating over pieces (in this case each piece is a grenade base)
  • Promoting splodeyTime up to a detail attribute.
  • Creating a single new point in the centre of the current grenade, using the Add SOP:
    AddPoint
  • Deleting all geometry except for that point.
  • Also deleting the point, if either:
    • Current frame number < (splodeyTime - 2), or
    • Current frame number > (splodeyTime + 2)

So at the end of the foreach, each grenade will be represented by a single point, that will only exist for 4 frames around the splodeyTime of that grenade.
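
In wrangle form, that delete condition is roughly this sketch (assuming splodeyTime was promoted to Detail earlier in the loop):

float st = detail(0, "splodeyTime", 0);
// Only keep the point for a few frames around this grenade's explosion
if (@Frame < st - 2 || @Frame > st + 2)
    removepoint(0, @ptnum);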

Great metaballs of fire!

The first thing I do with those points, is generate a cigar of metaballs:

MetaballCigar
I’m using this rather dubious looking thing to break constraints on the RBD (in the Remove Broken SOP solver in the DOP network diagram at the top of the post).

There are two more branches that come off the metaball cigar.

MetaballAndPoints

The left branch just inflates each of the metaballs, it is fed into a Magnet Force, and is only used on the RBD objects, to blow the grenade fragments apart.

The right branch is very similar, except after a slightly more aggressive inflation, I’m generating points from the volume of the metaballs, and adding a velocity attribute to them.

The points that you can see in the image are used to drive force in the water that explodes out of the grenades, in the FLIP fluid.

One thing I wanted to do here was to have fluid that is near the wall shoot out pretty straight, but then add a bit more randomness when the fluid is near the chain end of the grenade.

Not the best way to visualize it, but these are the force vectors for that effect:

FluidForceVectors

To start out, I set the velocity attribute to {10,0,0} in an attribute wrangle.

Then, the randomness is achieved in the creatively named attribvop1 in the right branch a few images back, and here is what that looks like:

ForceVectorVOP.png

Stepping through this network a bit…
I’m creating a new noisy velocity, and mixing that with the existing straight X velocity.

I always want the noisy velocity to be positive in X, but much stronger in Y and Z, hence the splitting out of the X component and the abs after the bottom noise.

The mix node uses the X position of the point to work out how much of the straight velocity to use versus the noisy velocity.
So between 0–5 in X, I’m blending the two. When the position is greater than 5, we have 100% of the noisy vector.
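
A wrangle approximation of that VOP network (the noise frequency and amplitudes are guesses, not the exact values I used):

// Straight velocity, shooting out from the wall along X
vector straightV = {10, 0, 0};

// Noisy velocity: X kept positive, Y and Z get most of the randomness
vector n = (vector)noise(@P * 0.5) - {0.5, 0.5, 0.5};
vector noisyV = set(abs(n.x) * 10, n.y * 40, n.z * 40);

// Blend on X position: fully straight at the wall, fully noisy past 5
float blend = fit(@P.x, 0, 5, 0, 1);
v@v = lerp(straightV, noisyV, blend);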

I messed about with lots of different vector / noise patterns, but I liked the movement this gave my fluid, so I stuck with this 🙂

Fuel

Backing up a bit, there’s one more branch of this network, and it creates the fuel for my Pyro simulation:

FuelNetwork

I’m copying a sphere onto the explosion points, and then using a fluid source SOP to create a volume for fuel, and using the built-in noise settings to get some variation.

The sphere itself is getting scaled by splodeyTime, using copy stamping in uniform scale:

fit((stamp("../copy2/", "splodeyTime", 1) - $F), -2, 2, 4, 10)

I also move it forward each frame using an attribute wrangle:

@P.x += ((@Frame+4) – @splodeyTime)*2;

This is what the fuel looks like over the few frames it is generated:

Fuel

Flipping glass

One last thing for simulation prep, I’m using the glass sections of the grenade as the fluid source of the exploding Flip sim.

Again, this uses the splodeyTime to delete the glass when it is outside of the frame range, so I’m creating the surface volume and particles just for a few frames:

FlipParticlesAndSurface

That’s pretty much it for simulation prep, at least for the main explosion!

Still to come, the bubbling water at the start, and all things materials and rendering.

Gears of Washroom – Pt 2

Last post I talked about the static scene setup for the above video, this post focuses on the RBD preparation and setup for the explosion.

It’s pretty much Houdini from this post onwards, here is the top level layout of my scene:

SceneView

Complete with a “TODO” section on the left that I forgot to clear out 😉

The grenades nodes in the middle are where most of the prep work for the RBD is done, so the internals of those will be the focus for this post.

Import once, fracture twice

Although I probably should have just manually rotated and placed the chains, I decided to do an RBD sim to hang the chains in place, and over the wooden shelf.

GrenadesSimmed

I couldn’t get Concave collision to work with the chains.
They would sim ok for a handful of frames, and then generally start intersecting, or explode apart, or do any number of annoying things.

I had a chat to one of our VFX Artists at work, Mandy Morland, who does all sorts of cool things in Houdini.
She suggested I could fracture the chains, and then simulate the chain links as clusters of Convex Hulls, which was a super awesome idea.

As long as each new fracture piece in a single chain link has the same value in the name attribute, it can be assembled and passed to DOPs as a single object.

PresimFracturing_Chain

So, for example, all of the highlighted sections above would have the same name attribute value “piece2”.

However, the name attribute is created before the fracture, and the fracture creates new internal faces which will not have the correct name attribute!

ChainNewFaces

The above image visualises this problem with colour as a primitive attribute.
The pieces have a green colour applied to them before fracture, and after fracture the internal pieces can’t know what colour to have, so they get initialized to default (I’ve then coloured them blue to make it more obvious).

To avoid this issue, I’m looping over chain pieces, promoting the piece attribute to Detail, then fracturing the chain, then promoting the piece attribute back to Primitive:

PresimFracturing
(The piece attribute is just temp storage for Name).

This is a Houdini trick I use a lot. In general form:

  • Loop over geometry pieces that I want to share an attribute value
  • At the start of the loop, promote an attribute to Detail.
    Only the current piece of geometry will be affected by this; Detail is essentially per-piece at this point.
  • Do the things I want to do in the loop that generate new geometry, or invalidate the attribute in some way I don’t want
  • Promote the attribute back from Detail to whatever context it was before (primitive, point, etc)

Side note on fracturing in a loop…

Fracturing inside a loop isn’t the best idea; it’s going to be slower than running fracture once on a single geometry object.

When fracturing geometry that represents combined objects, and that has per-object attributes you want to keep (eg: a string attribute called “chainID”), a better way might be:

  • Fracture the geometry (new faces will have a blank chainID).
  • Use Connectivity on Primitives to get an attribute (let’s call it Bob)
  • Do a foreach loop on Bob:
    • Promote chainID to Detail (using “maximum”)
    • Promote chainID back to Primitive.

Very similar, but at least it gets the fracture out of the loop.

In the end, node network performance wasn’t too big an issue for me: I go crazy with cache nodes (184 GB for this project), so I only had to wait for it to finish once, and it was still pretty quick for such simple geometry.

Hanging out

With that done, the chains were ready for the hanging sim, I just needed to make sure the bullet sim was creating convex hulls for connected prims:

ChainsConvex

ChainSettle

Back in my Grenades_fractured network, I’m importing the base un-fractured geometry, and then importing the hanging simulation with a dopimport node, with an Import Style set to “Transform Input Geometry”, and using timeshift to pick a frame I like.

So now I have the base un-fractured geometry in a good starting pose:

PosedGrenades

With the grenades now in the right pose, I fracture them again, ready to be blown up!
They don’t get made active straight away, but more on that, explosions and fluids in the next blog post 🙂

Gears of Washroom – Pt 1

I wanted to do a project that focuses a bit more on simulation work in Houdini, and also rendering, and this is the result. Probably one of the sillier things I’ve done in a while, I suppose 🙂

Originally, I was going to render it in Renderman, but I settled on using Mantra (which I’m really starting to love).
Renderman in Houdini was just a little too much to deal with, on top of all the other things I was learning.

Modelling

All of the models in the scene are edge weighted sub-d, modelled in Modo.
The water grenade is Gears of War inspired, cobbled together from whatever concept art I could find off the interwebz 🙂

In a previous post, I showed off the water grenade model when I was talking about transferring edge weighted sub-d models from Modo to Houdini.

After posting it on this great Modo / Houdini thread, Pascal Beeckmans (aka PaQ WaK) informed me that Alembic files keep edge weight values.
Using Alembic files is a much better idea than the silly workaround hack I was doing!

Toilet paper dispenser model

Water grenade sub-d model in Modo

Toilet model

The whole base geometry for the scene (room objects + one grenade) comes in under 20000 triangles, such is the joy of sub-d in Modo.

Grenade materials

The material for the grenade was made in Substance Painter:

Grenade - Substance Painter texturing

GrenadeSubstance_Close

I tried out a few different colour schemes, but settled on eye searing orange.

Something I hadn’t used much in Substance Painter is the “Hard Surface” stamping feature.
This is a really cool way of adding little details that I couldn’t be bothered modelling:

Hard surface stamps on Grenade model in Substance Painter

Substance Painter comes with quite a few to choose from:

GrenadeSubstance_HardSurfaceStamps

I can imagine that if you built up a library of them, you could detail up models super quick!

Designer fun

I decided to do the walls, floor and wood shelf materials in Substance Designer.

Tiles, wood and plaster wall materials

I won’t go through all the networks node by node, but I’ll do a bit of an overview of the Wood Substance, since it’s slightly more interesting than the other two.

Wood Substance

Substance Designer network for wood material

I’m taking an anisotropic noise, warping it with a crystal noise, taking a creased noise, warping the rest with that, making some gaussian spots, converting them to a normal, vector warping everything with that.

That gives me a greyscale image, that I gradient map to make a diffuse texture.
In case the graph, and that last sentence weren’t confusing enough, here it is in gif form!

WoodTexture

I always find it a bit difficult to talk through Substance Designer networks, because so much of it is fiddling around until you have something you like.
I could probably remake this a lot better, and remove more than half of the nodes!

One really fun part of this was the Gradient ramp right at the end.

In the Gradient node, you can click and drag over anything on your screen (google image search images, in my case) to pick a gradient.
Here’s a great video explaining it:

I ran the picker over a photo of a wood plank that I liked, and then manually cleaned up the points on the gradient a bit:

WoodGradient.png

Setting the scene

Having exported the scene from Modo as Alembic, I’m loading all the parts of the scene separately, and creating Groups for them.

AlembicImport

I just noticed that under the Attributes tab in the Alembic node, there is the option to “Add Path Attribute”, so using that for grouping would be the smarter and neater way to go!

The UV layout node in the middle was just me messing around with some of the new packing features in 16.5.
I’d UV’d in Modo already, but I wanted to see how the layout node fills in holes:

H16_5_Unwrap

Turns out it’s pretty great!

In the last section of this network, I’m setting up the length of the chain by copying and rotating the one chain piece, and offsetting the end handle part:

GrenadeChainLength

To keep the handle at the end of the chain with a transform node, I can just reference the transform and number of copies properties from the copy1 node.

So the x translation is:

ch("../copy1/ncy") * ch("../copy1/tx")

And to get the handle at the right 90 degree rotation:

(ch("../copy1/ncy") % 2) * 90

It’s nothing exciting, but it’s great how easy it is to dump expressions into parameters just about anywhere in Houdini.

For the room geometry, the import setup is very similar to the grenade setup.
One thing probably worth pointing out: I’m subdividing the assets at render time.
So although they are not subdivided in the viewport, you’ll just have to trust me that all the edge weighting came in fine 🙂

FullSceneSetup.png

In the next blog post, I’ll start getting into some of the simulation setup.
From here on, the focus of these posts on this project will be 100% on Houdini.

Subsurface Scattering spherical harmonics – pt 3

Welcome to part 3 of this exciting series on how to beat a dead horse.

By the time I got to the end of the work for the last post, I was just about ready to put this project to bed (and by that, I mean P4 obliterate…).

There was just one thing I wanted to fix: The fact that I couldn’t rotate my models!
If I rotate the object, the lighting rotates with it.

Spaaaaaaace

To fix the rotating issue, in the UE4 lighting pass, I need to transform the light vector into the same space that I’m storing the SH data (object space, for example).

RotateSpace

To do that, I need to pass through at least two of those object orientation vectors to the lighting pass (for example, the forward and right vectors of the object).

So, that’s another 6 floats (if I don’t compress them) that I need to pass through, and if you remember from last time, I’d already pushed the limits of MRTs with my 16 spherical harmonic coefficients, so I don’t have any space left!

This forced me to do one of the other changes I talked about: Use 3 band Spherical Harmonics for my depth values instead of 4 band.
That reduces the coefficients from 16 to 9, and gives me room for my vectors.

<Insert montage of programming and swearing here>

3bandSH

So yay, now I have 3 band SH, and room for sending more things through to lighting.

Quality didn’t really change much, either, and it helped drop down to 5 UV channels, which became very important a little later…

Going off on a tangent

I figured that since I was solving the problem for object orientation, maybe I could also do something for deforming objects too?
For an object where the depth from one side to the other doesn’t change much when it’s deforming, it should be ok to have baked SH data.

The most obvious way to handle that was to calculate and store the SH depth in Tangent space, similar to how Normal maps are usually stored for games.

I wanted to use the same tangent space that UE4 uses, and although Houdini 15 didn’t have anything native for generating that, there is a plugin!

https://github.com/teared/mikktspace-for-houdini

With that compiled and installed, I could plonk down a Compute Tangents node, and now I have Tangents and Binormals stored on each vertex, yay!

At this point, I create a matrix from the Tangent, Binormal and Normal, and store the transpose of that matrix.
Multiplying a vector against it will give me that vector in Tangent space. I got super lazy, and did this in a vertex wrangle:

matrix3 @worldToTangentSpaceMatrix;
vector UE4Tang;
vector UE4Binormal;
vector UE4Normal;

// Tangent U and V are in houdini coords
UE4Tang         = swizzle(v@tangentu, 0,2,1);
UE4Binormal     = swizzle(v@tangentv, 0,2,1);
UE4Normal       = swizzle(@N, 0,2,1);

@worldToTangentSpaceMatrix = transpose(set(UE4Tang, UE4Binormal, UE4Normal));

The swizzle stuff is just swapping Y and Z (coordinate systems are different between UE4 and Houdini).

Viewing the Tangent space data

To make debugging easier, at this point I made a fun little debug node that displays Tangents, Binormals and Normals the same as the model viewer in UE4.

It runs per vertex, and creates new coloured line primitives:

TangentFace

Haven’t bothered cleaning it up much, but hopefully you get the idea:

TangentPrimsVOP.png

And the vectorToPrim subnet:

VectorToPrimsVOP.png

So, add a point, add some length along the input vector and add another point, create a prim, create two verts from the points, set the colour.
I love how easy it is to do this sort of thing in Houdini 🙂

The next step was to modify the existing depth baking code.

For each vertex in the model, I was sending rays out through the model, and storing the depth when they hit the other side.
That mostly stays the same, except that when storing the rays in the SH coefficients, I need to convert them to tangent space first!

HitsToSH.png
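
The conversion itself is just a multiply against the matrix stored earlier, something like this (hitDir standing in for whatever the ray direction variable is actually called):

// World space ray direction into tangent space, using the stored matrix
vector tangentDir = hitDir * @worldToTangentSpaceMatrix;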

Getting animated

Since most of the point of a Tangent space approach was to show a deforming object not looking horrible, I needed an animated model.

I was going to do a bunch of animation in Modo for this, but I realized that transferring all my Houdini custom data to Modo, and then out to fbx might not be such a great idea.

Time for amazing Houdini animation learningz!!
Here’s a beautiful test that any animator would be proud of, rigged in Houdini and dumped out to UE4:

StupidTube.gif

So, I spent some time re-rigging the Vortigaunt in Houdini, and doing some more fairly horrible animation that you can see at the top of this post.

RiggedVort.png

Although the results aren’t great, I found this weirdly soothing.
Perhaps because it gave me a break from trying to debug shaders.

At some point in the future, I would like to do a bit more animation/rigging/skinning.
Then I can have all the animators at work laugh at my crappy art, in addition to all the other artists…

Data out

Hurrah, per-vertex Tangent space Spherical Harmonic depth data now stored on my animated model!

This was about the part where I realized I couldn’t find a way to get the Tangents and Binormals from the Houdini mesh into Unreal…

When exporting with my custom data, what ends up in the fbx is something like this:

   UserDataArray:  {
    UserDataType: "Float"
    UserDataName: "tangentu_x"
    UserData: *37416 {...

When I import that into UE4, it doesn’t know what that custom data is supposed to be.

If I export a mesh out of Modo, though, UE4 imports the Tangents and Binormals fine.
So I jumped over into Modo, and exported out a model with Tangents and Binormals, and had a look at the fbx.
This showed me I needed something more like this:

LayerElementTangent: 0 {
 Version: 102
 Name: "Texture"  
 MappingInformationType: "ByPolygonVertex"
 ReferenceInformationType: "Direct"
 Tangents: *112248 {...

This is probably about when I should have set the project on fire, and found something better to do with my time but…

C# to the rescue!!

I wrote an incredibly silly little WPF program that reads in an fbx, and changes tangentu and tangentv user data into the correct layer elements.

Why WPF you ask?
Seriously, what’s with all the questions? What is this, the Spanish inquisition?
Real answer: Almost any time I’ve written any bit of code for myself in the past 7 years, it’s always a WPF program.
80% of them end up looking like this:
AmazingUI
The code is horrible. I won’t paste it all, but I build a list of all the vectors, then pass them through to a function that re-assembles the text and spits it out:
        public string CreateLayerElementBlock(List<Vector3D> pVectors, string pTypeName)
        {
            string newBlock = "";

            int numVectors  = pVectors.Count;
            int numFloats   = pVectors.Count * 3;

            newBlock += "\t\tLayerElement" + pTypeName + ": 0 {\n";
            newBlock += "\t\t\tVersion: 102\n";
            newBlock += "\t\t\tName: \"Texture\"\n";
            newBlock += "\t\t\tMappingInformationType: \"ByPolygonVertex\"\n";
            newBlock += "\t\t\tReferenceInformationType: \"Direct\"\n";
            newBlock += "\t\t\t" + pTypeName + "s: *" + numFloats + " {\n";
            newBlock += "\t\t\t\ta: ";
	...

Gross. Vomit. That’s an afternoon of my life I’ll never get back.
But hey, it worked, so moving on…

UE4 changes

There weren’t many big changes on the UE4 side, just the switching over to 3 band SH, mostly.

One really fun thing bit me in the arse, though.
I’d been testing everything out on my static mesh version of the model.
When I imported the rigged model, I needed to change the material to support it:
UseWithSkeletal
And then the material failed to compile (and UE4 kept crashing)…
So, apparently, skinned meshes use a bunch of the UV coordinate slots for… Stuff!
I needed to switch back to my old approach of storing 6 coefficients in TexCoord1, 2 and 3, and the remaining three SH coeffs in vertex colour RGB:
RiggedMatChanges.png
Cropped this down to exclude all the messy stuff I left in for texture based SH data, but those three Appends on the right feed into the material pins I added for SH data in the previous posts.
And yeah, there’s some redundancy in the math at the bottom too, but if you don’t tell anyone, I won’t.

Shader changes

Now to pass the Tangent and Binormal through to the lighting pass.

I ended up compressing these, using Octahedron normal vector encoding, just so I could save a few floats.
The functions to do this ship with UE4, and they allow me to pass 2 floats per vector, rather than x,y,z, and the artifacts are not too bad.
Here’s some more information on how it works:
OctahedronEncoding.png
So now the Tangent and Binormal data is going through to the lighting pass, and I transform the light to tangent space before looking up the SH data:
 float3x3 TangentToWorld =
 {
  GBuffer.WorldTangent,
  GBuffer.WorldBinormal,
  cross(GBuffer.WorldTangent, GBuffer.WorldBinormal),
 };

 float3 TangentL = mul(L, transpose(TangentToWorld));

 float DepthFromPixelToLight  = saturate(GetSH(SHCoeffs, TangentL));

Probably could do that transposing in BasePassPixelShader I guess, and save paying for it on every pixel for every light, but then there’s a lot of things I probably could do. Treat my fellow human beings nicer, drink less beer, not stress myself out with silly home programming projects like this…

Conclusion

If I were to ever do this for real, on an actual game, I’d probably build the SH generation into the import process, or perhaps when doing stuff like baking lighting or generating distance fields in UE4.

If you happened to have a bunch of gbuffer bandwidth (i.e, you had to add gbuffers for something else), and you have a lot of semi translucent things, and engineering time to burn, and no better ideas, I suppose there could be a use for it.
Maybe.