Factory – pt 4 – (Trimming the flowers)

April 10, 2016

Part 4 of https://geofflester.wordpress.com/2016/02/07/factory-pt-1/


Alpha card objects

In most games, you have some objects that have on/off alpha transparency, generally for objects that you wouldn’t model all the detail for (leaves, flowers, etc).


^ Exactly like that, beautiful isn’t it?
Years of art training not wasted at all…

Also referred to as punch-through / 1-bit / masked materials, btw.
So, you can see that the see-through portion of that polygon is pretty large.

When rendering these types of assets, you are still paying some of the cost of rendering all of those invisible pixels. If you are rendering a lot of these on screen, and they are all overlapping, that can lead to a lot of overdraw, so it's fairly common to cut around the shape to reduce it. Something like this:


What does this have to do with the factory?
Aren’t you supposed to be building a factory?

I get distracted easily…

I’m not really planning to have a lot of unique vegetation in my factory scene, but I am planning on generating a bunch of stuff out of Houdini.

When I create LODs, that will be in Houdini too, and the LODs will probably be alpha cards, or a combination of meshes and alpha cards.

When I get around to doing that, I probably don’t want to cut around the alpha manually, because… Well, because that sounds like a lot of work, and automating it sounds like a fun task 🙂

Houdini mesh cutting tool

The basic idea is to get my image plane, use voronoi fracture to split up the plane, delete any polygons that are completely see-through, export to UE4, dance a happy dance, etc.

For the sake of experiment, I want to try a bunch of different levels of accuracy with the cutting, so I can find a good balance between vertex count, and overdraw cost.

Here’s the results of running the tool with various levels of cutting:


Here’s what the network looks like, conveniently just low resolution enough so as to be totally of no use… (Don’t worry, I’ll break it down :))


The first part is the voronoi fracture part:


I’m subdividing the input mesh (so that I end up with roughly a polygon per pixel), then using an Attribute VOP to copy the alpha values from the texture onto the mesh, then blurring it a bunch:


I scatter points on that, using the alpha for density, then I join it with another scatter that is just even across the plane. This makes sure that there are enough cuts outside the shape, and I don’t get weird pointy polygons on the outside of the shape.
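Outside of Houdini, the two-scatter idea can be sketched like this (plain Python; `alpha_at` is a hypothetical texture lookup standing in for the blurred alpha attribute):

```python
import random

def scatter(alpha_at, n_density, n_uniform, seed=0):
    """Scatter 2D points in [0,1)^2: one set weighted by an alpha lookup,
    plus a uniform spread merged in.

    alpha_at(u, v) -> float in [0, 1]; higher alpha = denser scattering.
    """
    rng = random.Random(seed)
    points = []
    # Rejection sampling: keep a candidate with probability equal to its alpha.
    while len(points) < n_density:
        u, v = rng.random(), rng.random()
        if rng.random() < alpha_at(u, v):
            points.append((u, v))
    # Uniform points everywhere, so the voronoi cells outside the shape
    # aren't starved and the edge polygons don't end up weirdly pointy.
    for _ in range(n_uniform):
        points.append((rng.random(), rng.random()))
    return points
```

The ratio between the two counts is what controls the density difference around the edges of the shape versus inside it.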

Here is an example where I’ve deliberately set the even spread points quite low, so you can see the difference in polygon density around the edges of the shape vs inside the shape:


Counting up the alpha

So, earlier, I mentioned that I subdivided up the input mesh and copied the alpha onto it?
I’ll call this the pixelated alpha mesh, and here’s what that looks like:



Next, I created a sub network that takes the pixelated alpha mesh, pushes it out along its normals (which in this case, is just up), ray casts it back to the voronoi mesh, and then counts up how many “hits” there are on each voronoi polygon.

Then we can just delete any polygon that has any “hits”.

Here is that network:


After the ray sop, each point in the pixelated alpha mesh has a “hitprim”, which will be set to the primitive id that it hit in the voronoi mesh.

I’m using an Attribute SOP to write into an integer array detail attribute on the voronoi mesh for each “hitprim” on the pixelated alpha mesh points, and here’s that code:

int success = 0;
int primId = pointattrib(0, "hitprim", @ptnum, success);

int primhits[] = detail(0, "primhits");

if (primId >= 0)
{
    setcomp(primhits, 1, primId);
    setdetailattrib(0, "primhits", primhits, "add");
}
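For clarity, here's a hypothetical plain-Python equivalent of that accumulation, plus the delete step (the primitives to delete are the ones with one or more hits, as described above):

```python
def prims_to_delete(hitprims, num_prims):
    """Accumulate ray hits per voronoi primitive, then list the primitives
    to delete.

    hitprims: the per-point "hitprim" values from the ray SOP; -1 means
    the ray didn't hit anything.
    """
    primhits = [0] * num_prims          # stands in for the detail attribute
    for hp in hitprims:
        if hp >= 0:
            primhits[hp] += 1           # one "add" per hitting point
    return [p for p in range(num_prims) if primhits[p] > 0]
```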

After all that stuff, I drop in a “Remesh” node, which cheapens up the mesh a lot.

And back to UE4…

So, with all the above networks packaged into a Digital Asset, I could play with the parameters (the two scatter values), and try out a few different levels of cutting detail, as I showed before:


I’m creating a rather exaggerated setup in UE4 with an excessive amount of overdraw, just for the purposes of this blog post.

For now, I’ve made the alpha cards huuuuuuuuge, and placed them where I used to have the flowers in my scene:


Then, all I need to do is swap in each different version of my alpha card, and then GPU profile!


The camera shot, without any alpha plane cutting optimization, took about 12 ms.

Test1, which is 27 vertices, seemed to be the best optimization. This came in at about 10.2 ms, so a saving of more than 1.5 ms, which is pretty great!

I was actually expecting Test2 to be the cheapest, since it chops quite a bit more off the shape, and at 84 vertices I didn’t think the extra vertex cost would even register on a GTX 970. Turns out I was wrong, Test2 was marginally more expensive!

This just goes to show, never trust someone about optimization unless they’ve profiled something 😛

Test3, at 291 vertices, costs about another 0.3 ms.



Of course, the savings are all quite exaggerated, but in a “real world” scenario I would probably expect to have a lot more instances, all a lot smaller. In which case, going with the lower vertex count mesh seems like it would still make sense (although I will, of course, re-profile when I have proper meshes).

Lots more fun things to do with this: get it working on arbitrary meshes (mostly working), see if I can use Houdini Engine to integrate it into UE4, etc.
Still not sure how much vegetation I’ll have in my factory scene, but I think this will still be useful 🙂




Factory – pt 3 (early optimisation is something something)

March 3, 2016

Part 3 of https://geofflester.wordpress.com/2016/02/07/factory-pt-1/

Optimizing art early, before you have a good sense of where the actual expense of rendering your scene is, can be a pretty bad idea.

So let’s do it!!


I’ll do it #procedurally.
Sort of.

20 gallons of evil per pixel

My ground shader is pretty expensive. It’s blending all sorts of things together, currently, and I still have things to add to it.

I don’t want to optimize the actual material yet, because it’s not done, but it looks like this and invokes shame:


As a side note here, this material network looks a bit like the Utah teapot, which is unintentionally awesome.

Every pixel on this material is calculating water and dirt blending.

But many of those pixels have no water or dirt on them:


So why pay the cost for all of that blending across the whole ground plane?
What can I do about it?

Probably use something like the built in UE4 terrain, you fool

Is probably what you were thinking.
I’m sure that UE4 does some nice optimization for areas of terrain that are using differing numbers of layers, etc.

So you’ve caught me out: The technique I’m going to show off here, I also want to use on the walls of my factory, I just haven’t built that content yet, and I thought the ground plane would be fun to test on 🙂

Back to basics

First up, I want to see exactly how much all of the fancy blending is costing.

So I made a version of the material that doesn’t do the water or the dirt, ran the level and profiled them side by side:


^ Simple version of my material vs the water and dirt blending one.


So, you can see above that the material that has no dirt/water blending is 1.6 milliseconds cheaper.

Now, if I can put that material on the areas that don’t need the blending, I can’t expect to get that full 1.6 milliseconds back, but I might get 1 millisecond back.

That might not sound like much, but for a 60 fps game, that’s about 1/16th of the entire scene time.

Every little bit helps: clawing that much time back by cutting content alone can take many hours 🙂

Splitting the mesh

To put my cheap material onto the non-blending sections, I’ll split the mesh around the areas where the vertex colour masks have a value of 0.

Luckily, the ground plane is subdivided quite highly so that it plays nice with UE4 tessellation and my vertex painting, so I don’t need to do anything fancy with the mesh.

Back to Houdini we go!


So, anything that has > 0 sum vertex colour is being lifted up in this shot, just to make it obvious where the mesh split is happening.

Here’s the network:


The new nodes start at “Attribcreate”, etc.

The basic flow is:

  • “Colour value max” is set as max(@Cd.r, @Cd.g), per point, so it will be set to some value if either dirt or water are present.
  • Two new Max and Min attributes per polygon are created by promoting Colour Value max from Point –> Polygon, using Min and Max promotion methods (so if one vertex in the polygon has some dirt/water on it, then the max value will be non zero, etc)
  • The polygons are divided into three groups: Polygons that have no vertices with any blending, Polygons that have some blending, Polygons that have all verts that are 100% blending.
  • NOTE: For the purposes of this blog post, all I really care about is if the Polygon has no dirt/water or if it has some, but having the three groups described above will come in handy in a later blog post, you’ll just have to trust me 🙂
  • The two groups of polygons I care about get two different materials applied to them in Houdini.
    When I export them to UE4, they maintain the split, and I can apply my cheaper material.
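The grouping logic above boils down to something like this (a plain-Python sketch; each polygon is represented as a list of its per-vertex mask values, which is a made-up layout rather than the actual Houdini attributes):

```python
def classify_polygons(polys):
    """Split polygons into three groups based on a per-vertex blend mask.

    polys: per-polygon lists of vertex mask values, where each vertex's
    value is max(dirt, water), i.e. the promoted "colour value max".
    Returns (no_blend, some_blend, full_blend) lists of polygon indices.
    """
    no_blend, some_blend, full_blend = [], [], []
    for i, verts in enumerate(polys):
        lo, hi = min(verts), max(verts)   # Min/Max promotion, Point -> Polygon
        if hi == 0.0:
            no_blend.append(i)            # cheap material: no dirt/water at all
        elif lo > 0.0:
            full_blend.append(i)          # every vert has some blending
        else:
            some_blend.append(i)          # edge polygons: partial blending
    return no_blend, some_blend, full_blend
```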

So, re-exported, here it is:

Looks the same?

Great, mission successful! Or is it…

Checking the numbers

Back to the GPU profiler!


Ok, so the column on the right is with my two materials, the column in the middle is using the expensive material across the whole ground plane.

So my saving was a bit under one millisecond in this case.
For an hour or two of work that I can re-use in lots of places, I’m willing to call that a success 🙂

Getting more back

Before cleaning up my shader, there are a few more areas where I can/might expand this, and some notes on where I expect to get more savings:

  • I’ll have smaller blending areas on my final ground plane (less water and dirt) and also on my walls. So the savings will be higher.
  • I might mask out displacement using vertex colours, so that I’m not paying for displacement across all of my ground plane and walls.
    The flat wall sections that aren’t on a corner of the building, and/or are more than a few metres from the ground, can probably go without displacement.
  • The centre of the water puddles is all water: I can create a third material that just does the water stuff, and split the mesh an extra time.
    This means that the blending part of the material will be just the edges of the puddles, saving quite a lot more.

So all in all, I expect I can claw back a few more milliseconds in some cases in the final scene.

One final note, the ground plane is now three draw calls instead of one.
And I don’t care.
So there. 🙂







Factory – pt 2 (magical placeholder land)

February 17, 2016

Part 2 of: https://geofflester.wordpress.com/2016/02/07/factory-pt-1/


I had to split this post up, so I want to get this out of the way:
You’re going to see a lot of ugly in the post. #Procedural #Placeholder ugly 🙂

This post is mostly about early pipeline setup in Houdini Python, and UE4 C++.

Placeholder plants

For testing purposes, I made 4 instances of #procedural plants using l-systems:


When I say “made”, I mean ripped from my Shangri-La tribute scene, just heavily modified:


Like I mention in that post, if you want to learn lots about Houdini, go buy tutorials from Rohan Dalvi.
He has some free ones you can have a run through, but the floating islands series is just fantastic, so just buy it 😛

These plants I exported as FBX, imported into UE4, and gave them a flat vertex colour material, ’cause I ain’t gonna bother with unwrapping placeholder stuff:


The placeholder meshes are 4000 triangles each.
Amusingly, when I first brought them in, I hadn’t bothered checking the density, and they were 80 000 + triangles, and the frame rate was at a horrible 25 fps 😛

Houdini –> UE4

So, the 4 unique plants are in UE4. Yay!

But, I want to place thousands of them. It would be smart to use the in-built vegetation tools in UE4, but my purpose behind this post is to find some nice generic ways to get placement data from Houdini to UE4, something that I’ve been planning to do in my old Half Life scene for ages.
So I’m going to use Instanced Static Meshes 🙂

Generating the placements

For now, I’ve gone with a very simple method of placing vegetation: around the edges of my puddles.
It will do for the sake of example. So here’s the puddle and vegetation masks in Houdini (vegetation mask on the left, puddle mask on the right):


A couple of layers of noise, and a fit range applied to vertex colours.

I then just scatter a bunch of points on the mask on the left, and then copy flowers onto them, creating a range of random scales and rotations:


The node network for that looks like this:


Not shown here, off to the left, is all the flower setup stuff.
I’ll leave that alone for now, since I don’t know if I’ll be keeping any of that 🙂

The right hand side is the scattering, which can be summarized as:

  • Read ground plane
  • Subdivide and cache out the super high poly plane
  • Move colour into Vertex data (because I use UVs in the next part, although I don’t really have to do it this way)
  • Read the brick texture as a mask (more on that below)
  • Move mask back to Point data
  • Scatter points on the mask
  • Add ID, Rotation and Scale data to each point
  • Flip YZ axis to match UE4 (could probably do this in Houdini prefs instead)
  • Python all the things out (more on that later)

Brick mask

I mentioned quickly that I read the brick mask as a texture in the above section.
I wanted the plants to mostly grow out of cracks, so I multiplied the mask by the inverted height of the bricks, clamped to a range, using a Point VOP:


And here’s the network, but I won’t explain that node for node, it’s just a bunch of clamps and fits which I eyeballed until it did what I wanted:


Python all the things out, huh?

Python and I have a special relationship.
It’s my favourite language to use when there aren’t other languages available.

Anyway… I’ve gone with dumping my instance data to XML.
More on that decision later.

Now for some horrible hackyness:

node = hou.pwd()
from lxml import etree as ET

geo = node.geometry()

root = ET.Element("ObjectInstances")

for point in geo.points():
    pos         = point.position()
    scale       = hou.Point.attribValue(point, 'Scale')
    rotation    = hou.Point.attribValue(point, 'Rotation')
    scatterID   = "Flower" + repr(hou.Point.attribValue(point, 'ScatterID')+1)

    PosString       = repr(pos[0]) + ", " + repr(pos[1]) + ", " + repr(pos[2])
    RotString       = repr(rotation)
    ScaleString     = repr(scale) + ", " + repr(scale) + ", " + repr(scale)

    ET.SubElement(root, scatterID,
                  Location=PosString,
                  Rotation=RotString,
                  Scale=ScaleString)

# Do the export
tree = ET.ElementTree(root)
tree.write("D:/MyDocuments/Unreal Projects/Warehouse/Content/Scenes/HoudiniVegetationPlacement.xml", pretty_print=True)


Also, all hail the ugly hard coded path right at the end there 🙂
(Trust me, I’ll dump that into the interface for the node or something, would I lie to you?)

Very simply, this code exports an XML element for each Point.
I’m being very lazy for now, and only exporting Y rotation. I’ll probably fix that later.

This pumps out an XML file that looks like this:

<Flower1 Location="-236.48265075683594, -51.096923828125, -0.755022406578064" Rotation="(0.0, 230.97622680664062, 0.0)" Scale="0.6577988862991333, 0.6577988862991333, 0.6577988862991333"/>


Reading the XML in UE4

In the spirit of slapping things together, I decided to make a plugin that would read the XML file, and then add all the instances to my InstancedStaticMesh components.

First up, I put 4 StaticMeshActors in the scene, and gave each of them an InstancedStaticMesh component. I could have done this in a Blueprint, but I try to keep Blueprints to a minimum if I don’t actually need them:


As stated, I’m a hack, so the StaticMeshActor needs to be named Flower<1..4>, because the code matches the name to what it finds in the XML.

The magic button

I should really implement my code as either a specialized type of Data Table, or perhaps some sort of new thing called an XMLInstancedStaticMesh, or… Something else clever.

Instead, I made a Magic Button(tm):


XML Object Loader. Probably should have put a cat picture on that, in retrospect.

Brief overview of code

I’m not going to post the full code here for a bunch of reasons, including just that it is pretty unexciting, but the basic outline of it is:

  1. Click the button
  2. The plugin gets all InstancedStaticMeshComponents in the scene
  3. Get a list of all of the Parent Actors for those components, and their labels
  4. Process the XML file, and for each Element:
    • Check if the element matches a name found in step 3
    • If the Actor name hasn’t already been visited, clear the instances on the InstancedStaticMesh component, and mark it as visited
    • Get the position, rotation and scale from the XML element, and add a new instance to the InstancedStaticMesh with that data
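If you want the gist without the UE4 boilerplate, here’s roughly the same loop sketched in Python with the standard library’s ElementTree (the real plugin is C++ on top of FastXML; the attribute names match the XML dumped from Houdini above):

```python
import xml.etree.ElementTree as ET

def load_instances(xml_text):
    """Group instance transforms by element name (e.g. "Flower1"),
    mirroring how the plugin matches XML elements to actor labels.
    Returns {name: [(location, rotation, scale), ...]}.
    """
    instances = {}
    for elem in ET.fromstring(xml_text):
        # Location/Scale are comma-separated floats; Rotation was written
        # as a Python tuple repr, e.g. "(0.0, 230.9, 0.0)".
        loc = tuple(float(v) for v in elem.get("Location").split(","))
        rot = tuple(float(v) for v in elem.get("Rotation").strip("()").split(","))
        scale = tuple(float(v) for v in elem.get("Scale").split(","))
        instances.setdefault(elem.tag, []).append((loc, rot, scale))
    return instances
```

Each list would then feed AddInstance calls on the matching InstancedStaticMesh component.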

And that’s it! I had a bit of messing around (originally doing the Euler –> Quaternion conversion in Houdini instead of C++, and not realizing that the rotations were in radians), but all in all it only took an hour or two to throw together, in its current very hacky form 🙂

Some useful snippets

The FastXML library in UE4 is great, made life easy:


I just needed to create a new class inheriting from the IFastXmlCallback interface, and implement the Process<x> functions.

I’d create a new instance in ProcessElement, then fill in the actual data in ProcessAttribute.

Adding an instance to an InstancedStaticMeshComponent is as easy as:


And then, in shortened form, updating the instance data:

FTransform InstanceTransform;
_currentStaticMeshComp->GetInstanceTransform(_currentInstanceID, InstanceTransform);

// ...


_currentStaticMeshComp->UpdateInstanceTransform(_currentInstanceID, InstanceTransform);

One last dirty detail…

That’s about it for the code side of things.

One thing I didn’t mention earlier: In Houdini, I’m using the placement of the plants to generate out the dirt map mask so I can blend in details around their roots:


So when I export out my ground plane, I am putting the Puddles mask into the blue channel of the vertex colours, and the Dirt mask into the red channel 🙂

Still to come (for vegetation)

So I need to:

  • Make the actual flowers I want
  • Make the roots/dirt/mossy texture that gets blended in under the plants
  • Build more stuff

Why.. O.o

Why not data tables

I’m all about XML.

But a sensible, less code-y way to do this would be to save all your instance data from Houdini into CSV format, bring it into UE4 as a data table, then use a Construction Script in a Blueprint to iterate over the data and add instances to an Instanced Static Mesh.

I like XML as a data format, so I decided it would be more fun to use XML.

Why not Houdini Engine

That’s a good question…

In short:

  • I want to explore similar workflows with Modo replicators at some point, and I should be able to re-use the c++/Python stuff for that
  • Who knows what other DCC tools I’ll want to export instances out of
  • It’s nice to jump into code every now and then. Keeps me honest.
  • I don’t own it currently, and I’ve spent my software budget on Houdini Indie and Modo 901 already 🙂

If you have any questions, feel free to dump them in the comments. I hurried through this one a little, since it’s at a halfway point without great results to show off yet!



Factory – pt 1

February 7, 2016

This blog post won’t mostly be about a factory, but if I title it this way, it might encourage me to finish something at home for a change 😉

My wife had a great idea that I should re-make some of my older art assets, so I’m going to have a crack at this one, that I made for Heroes Over Europe, 8 years ago:


I was quite happy with this, back in the day. I’d had quite a lot of misses with texturing on that project. The jump from 32*32 texture sheets on a PS2 flight game to 512*512 texture sets was something that took a lot of adjusting to.

I was pretty happy with the amount of detail I managed to squeeze out of a single 512 set for this guy, although I had to do some fairly creative unwrapping to make it happen, so it wasn’t a very optimal asset for rendering!

The plan

I want to make a UE4 scene set at the base of a similar building.
The main technical goal is to learn to use Substance Painter better, and to finally get back to doing some environment art.

Paving the way in Houdini

First up, I wanted to have a go at making a tiling brick material in Substance Painter.
I’ve used it a bit on and off, in game jams, etc, but haven’t had much chance to dig into it.

Now… This is where a sensible artist would jump into a tool like ZBrush, and throw together a tiling high poly mesh.

But, in order to score decently on Technical Director Buzz Word Bingo, I needed to be able to say the word Procedural at least a dozen more times this week, so…


I made bricks #Procedurally in Houdini, huzzah!

I was originally planning to use Substance Designer, which I’ve been playing around with on and off since Splinter Cell: Blacklist, but I didn’t want to take the time to learn it properly right now. The next plan was Modo replicators (which are awesome), but I ran into a few issues with displacement.

Making bricks

Here is the network for making my brick variations, and I’ll explain a few of the less obvious bits of it:


It’s a little lame, but my brick is a subdivided block with some noise on it:


I didn’t want to wait for ages for every brick to have unique noise, so the “UniqueBrickCopy” node creates 8 unique IDs, which are passed into my Noise Attribute VOP, and used to offset the position for two of the noise nodes I’m using on vertex position, as you can see bottom left here:


So that the repetition isn’t obvious, I randomly flip the Y and Z of the brick, so even if you get the same brick twice in a row, there’s less chance of a repeat (that’s what the random_y_180 and random_z_180 nodes are at the start of this section).

Under those flipping nodes, there are some other nodes for random rotations, scale and transform to give some variation.


Each position in my larger tiling pattern has a unique ID, so that I can apply the same ID to two different brick placements, and know that I’m going to have the exact same brick (to make sure it tiles when I bake it out).

You can see the unique IDs as the random colours in the first shot of the bricks back up near the top.

You might notice (if you squint) that the top two and bottom two rows have matching random colours, as do the two leftmost columns and the 2nd and 3rd columns from the right.

Placing the bricks in a pattern

There was a fair bit of manual back and forth to get this working, so it’s not very re-usable, but I created two offset grids, copied a brick onto each point of the grid, and played around with brick scale and grid offsets until the pattern worked.


So each grid creates an “orientation” attribute, which is what rotates the bricks for the alternating rows. I merge the points together, sort them along the X and Y axis (so that the vertex numbers go up across rows).

Now, the only interesting bit here is creating the unique instance ID I mentioned before.
Since I’ve sorted the vertices, I set the ID to be the vertex ID, but I want to make sure that the last two columns and the last two rows match the first two columns and rows.

This is where the two wrangle nodes come in: they just check if the vertex is in the last two columns, and if it is, set the ID to be back at the start of the row.
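In plain Python, that wrap-around ID assignment might look something like this (a simplified, hypothetical version of what the wrangles do, assuming a regular grid with more than two rows and columns):

```python
def brick_ids(cols, rows, wrap=2):
    """Assign a brick ID per grid cell, with the last `wrap` columns and
    rows reusing the IDs from the start of their row/column, so the same
    bricks appear on both edges and the baked texture tiles.
    Returns a rows x cols grid of IDs."""
    ids = []
    for y in range(rows):
        wy = y % (rows - wrap)            # last rows wrap back to the first
        row = [wy * (cols - wrap) + (x % (cols - wrap)) for x in range(cols)]
        ids.append(row)
    return ids
```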

So then we have this (sorry, bit hard to read, but pretend that the point IDs on the right match those on the left):


And yes, in case you are wondering, this is a lot of effort for something that could be done more easily in ZBrush.
I’m not in the habit of forcing things down slow procedural paths when there is no benefit in doing so, but in this case: kittens!
(I’ve got to break my own rules sometimes for the sake of fun at home :))

Painter time

Great, all of that ugly #Procedural(tm) stuff out of the way, now on to Substance Painter!


So I’ve brought in the high poly from Houdini, and baked it out onto a mesh, and this is my starting point.
I’m not going to break down everything I’ve done in Substance, but here are the layers:


All of the layers are #Procedural(tm), using the inbuilt masks and generators in Painter, which use the curvature, ambient occlusion and thickness maps that Painter generates from your high poly mesh.

The only layer that had any manual input was the black patches, because I manually picked a bunch of IDs from my Houdini ID texture bake, to get a nice distribution:


The only reason I picked so many manually is that Painter seems to have some issues with edge pixels in a Surface ID map, so I had to try and not pick edge bricks.
Otherwise, I could have picked a lot less, and ramped the tolerance up more.

You might notice that the material is a little dark. I still haven’t nailed getting my UE4 lighting setup to match with Substance, so that’s something I need to work on.
Luckily, it’s pretty easy to go back and lighten it up without losing any quality 🙂

Testing in UE4


Pretty happy with that, should look ok with some mesh variation, concrete skirting, etc!
I’ll still need to spend more time balancing brightness, etc.

For giggles, I brought in my wet material shader from this scene:



Not sure if I’ll be having a wet scene or not yet, but it does add some variation, so I might keep it 🙂

Oh, and in case you were wondering how I generated the vertex colour mask for the water puddles… #Procedural(tm)!


Exported out of Houdini, a bunch of noise, etc. You get the idea 🙂

Next up

Think I’ll do some vegetation scattering on the puddle plane in Houdini, bake out the distribution to vertex colours, and use it to drive some material stuff in UE4 (moss/dirt under the plants, etc).

And probably export the plants out as a few different unique models, and their positions to something that UE4 can read.

That’s the current plan, anyway 🙂


Shopping for masks in Houdini

January 20, 2016

Houdini pun there, don’t worry if you don’t get it, because it’s pretty much the worst…

In my last post, I talked about the masking effects in Shangri-La, Far Cry 4.

I mentioned that it would be interesting to try out generating the rough masks in Houdini, instead of painting them in Modo.

So here’s an example of a mask made in Houdini, being used in Unreal 4:


Not horrible.
Since it moves along the model pretty evenly, you can see that the hands are pretty late to dissolve, which is a bit weird.

I could paint those out, but then the more I paint, the less value I’m getting out of Houdini for the process.

This is probably a good enough starting point before World Machine, so I’ll talk about the setup.

Masky mask and the function bunch

I exported the Vortigaunt out of Modo as an Alembic file, and brought it into Houdini.
Everything is pretty much done inside a single geometry node:


The interesting bit here is “point_spread_solver”. This is where all the work happens.

Each frame, the solver carries data from one vertex to another, and I just manually stop and bake out the texture when the values stop spreading.

I made the un-calculated points green to illustrate:


A note on “colour_selected_white”, I should really do this bit procedurally. I’m always starting the effect from holes in the mesh, so I could pick the edge vertices that way, instead of manually selecting them in the viewport.

The solver


Yay. Attribwrangle1. Such naming, wow.

Nodes are fun, right up until they aren’t, so you’ll often see me do large slabs of functionality in VEX. Sorry about that, but life is pain, and all that…

Here’s what the attrib wrangle is doing:

int MinDist = -1;

if (@DistanceFromMask == 0)
{
    int PointVertices[];
    PointVertices = neighbours(0, @ptnum);

    foreach (int NeighborPointNum; PointVertices)
    {
        int success             = 0;
        int NeighborDistance    = pointattrib(0, "DistanceFromMask",
                                              NeighborPointNum, success);

        if (NeighborDistance > 0)
        {
            if (MinDist == -1)
                MinDist = NeighborDistance;

            MinDist = min(MinDist, NeighborDistance);
        }
    }
}

if (MinDist > 0)
    @DistanceFromMask = (MinDist + 1);

Not a very nuanced way of spreading out the values.

For each point, assuming the point has a zero “distance” value, I check the neighboring points.
If a neighbor has a non-zero integer “distance” value, then I take the lowest of all the neighbors, add one to it, and that becomes my “distance” value.

This causes the numbers to spread out over the surface, with the lowest value at the source points, highest value at the furthest distance.
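The converged result of running that solver to completion is just a breadth-first distance from the seed points. Here’s a hypothetical plain-Python version over an adjacency list (the same shape of data that VEX’s neighbours() returns per point):

```python
from collections import deque

def spread_distance(neighbours, seeds):
    """Breadth-first distance from the seed points over the point graph.

    neighbours: list of neighbour-index lists, one per point.
    seeds: points where the effect starts; they get distance 1.
    Points left at 0 were never reached (the green points).
    """
    dist = [0] * len(neighbours)
    queue = deque()
    for s in seeds:
        dist[s] = 1
        queue.append(s)
    while queue:
        p = queue.popleft()
        for n in neighbours[p]:
            if dist[n] == 0:               # only write un-calculated points
                dist[n] = dist[p] + 1
                queue.append(n)
    return dist
```

The per-frame solver reaches the same answer, it just advances one ring of neighbours per frame instead of all at once.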

Integers –> Colours

So, the vertices now all have integer distance values on them.
Back up in the mask image, the solver promotes the Distance value up to a Detail attribute, getting the Max Distance of all the points.

In the wrangle node under that, I just loop through all the points and divide each point’s Distance by the Max Distance, and use that to set the colour, or I set it as green if there’s no distance value:

if (@DistanceFromMask > 0)
    @Cd = float(@DistanceFromMask - 1) / float(@DistanceFromMaskMax);
else
    @Cd = {0,1,0};

So that produces the gif I showed earlier with the green on it.

Colours –> Textures

Time to jump into SHOPS. See? This is where my awesome title pun comes in.

As simple as it gets, vertex Colour data straight into the surface output:


In my “Out”, I’m using a BakeTexture node to bake the material into a texture, and I end up with this:



Bam! Work is done.
Still wouldn’t have been much point in doing this on Shangri-La, because painting masks in Modo is super quick anyway, but it’s fun to jump back into Houdini every now and then and try new things.

Has led to some other interesting thoughts, though.

  • For Shangri-La, we could have done that at runtime in a compute shader, and generated the mask-out effect from wherever you actually shot an arrow into an enemy.
    That would have been cool.
  • You could probably use Houdini Engine to put the network into UE4 itself, so you could paint the vertex colours and generate the masks all inside UE4.
  • You could do the “erosion” part in Houdini as well, even if you just subdivide the model up and do it using points rather than run it in image space (to avoid seams). Might be hard to get a great resolution out of it.
  • You could do an actual pressure simulation, something along the lines of what this Ben Millwood guy did here. He’s a buddy of mine, and it’s a cool approach, and it’s better than my hacky min values thing.

Crumbling tiger, hidden canyon

January 9, 2016

In the world of Shangri-La, Far Cry 4, lots of things crumble and dissolve into powder.

For example, my buddy Tiges here:


Or, as Max Scoville hilariously put it, the tiger turns into cocaine… NSFW I guess… That gave me a huge chuckle when I first saw it.

I was not responsible for the lovely powder effects in the game, we had a crack team (see what I did there) of Tricia Penman, Craig Alguire and John Lee for all that fancy stuff.
The VFX were such a huge part of the visuals of Shangri-La, and the team did an incredible job.

What I needed to work out was a decent enough way of getting the tiger body to dissolve away.
Nothing too fancy, since it happens very quickly for the most part.


I threw together a quick prototype in Unity3d using some of the included library content from Modo:


I’m just using a painted greyscale mask as the alpha, then thresholding through it (like using the Threshold adjustment layer in Photoshop, basically).

There’s a falloff around the edge of the alpha, and I’m applying a scrolling tiled fiery texture in that area.
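The per-pixel logic is simple enough to sketch outside of a shader. Here’s a hypothetical Python version (the function and parameter names are mine, not from the actual material):

```python
def dissolve(mask, threshold, edge_width=0.1):
    """Return (visible, burn) for one pixel.

    mask: painted greyscale value (0..1)
    threshold: animated from 0 to 1 over the dissolve
    """
    visible = mask > threshold  # hard alpha cutoff, like Photoshop's Threshold layer
    # pixels just above the threshold form the edge band that gets
    # the scrolling fiery texture applied
    burn = visible and (mask < threshold + edge_width)
    return visible, burn
```

As the threshold animates up, pixels with lower mask values drop out first, and the burning edge chases the threshold across the surface.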

I won’t go into it too much, as it’s a technique as old as time itself, and there are lots of great tutorials out there on how to set it up in Unity / UE4, etc.

As it turns out, there was already some poster burning tech that I could use, and it worked almost exactly the same way, so I didn’t need to do the shader work in the end:

You mentioned canyons?

I actually used World Machine to create the detail in the maps.
In the end, I needed to make about 30 greyscale maps for the dissolving effects on various assets.


I’ll use my Vortigaunt fellow as an example, since I’ve been slack at using him for anything else or finishing him (typical!).

First up, for most of the assets, I painted a very rough greyscale mask in Modo:


Then I take that into World Machine, use it as a height map, and run erosion on it:


I then take the flow map out of World Machine and back into Photoshop, overlay it on top of the original rough greyscale mask, and add a bit of noise.
With a quick setup in UE4, I have something like this:


Sure, it doesn’t look amazing, but for ten minutes’ work it is what it is 🙂

You could spend more time painting the mask on some of them (which I did for the more important ones), but in the end, you only see it for a few dozen frames, so many of them I left exactly how they were.

Better-er, more automated, etc

Now that I have the Houdini bug, I would probably generate the rough mask in Houdini rather than painting it.


  • Set the colour for the vertices I want the fade to start at
  • Use a solver to spread the values out from these vertices each frame (or do it in texture space, maybe).
  • Give the spread some variation based off the roughness and normals of the surface (maybe).
  • Maybe do the “erosion” stuff in Houdini as well, since it doesn’t really need to be erosion, just some arbitrary added stringy detail.

Again, though, not worth spending too much time on it for such a simple effect.
A better thing to explore would be trying to fill the interior of the objects with some sort of volumetric effect, or some such 🙂
(Which is usually where I’d go talk to a graphics programmer)

Other Examples

I ended up doing this for almost all of the characters, with the exception of a few specific ones (SPOILERS), like the giant chicken that you fight.
That one, and a few others, were handled by Nils Meyer and Steve Fabok, from memory.

So aside from those, and my mate Tiges up there, here’s a few other examples.

Bell Chains


Hard to see, but the chain links fade out one by one, starting from the bottom.

This was tricky, because the particular material we were using didn’t support 2 UV channels, and the chain links are all mapped to the same texture space (which makes total sense).

Luckily, the material *did* support changing UV tiling for the Mask vs the other textures.

So we could stack all of the UV shells of the links on top of each other in UV space, like so:


So then the mask fades from 0 –> 1 in V.
In the material, if we have 15 links, we need to tile V 15 times for Diffuse, Normal, Roughness, etc, leaving the mask texture tiled once.
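The packing/tiling maths can be sketched numerically. This is a hypothetical Python illustration (the 15-link count and function names are just for the example):

```python
# All link UV shells are stacked into the 0..1 range for the mask,
# while detail textures tile NUM_LINKS times in V so each link still
# maps onto the full detail texture.
NUM_LINKS = 15

def pack_link_v(link_index, v):
    """Pack a link's V coordinate (0..1) into its slice of mask space."""
    return (link_index + v) / NUM_LINKS

def detail_v(mask_v):
    """Tiling the detail textures NUM_LINKS times undoes the packing."""
    return (mask_v * NUM_LINKS) % 1.0
```

So the mask sees each link at a different V (and fades them out in order), while the tiled detail textures see every link mapped identically.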

Edwin Chan was working on the assets for these, and I could have just made him manually set that up in Max, but it would have been a bit of a pain, and I’d already asked him to do all sorts of annoying setup on the prayer wheels…

There were 3-4 different bell chain setups, and each of those had multiple LODs for each platform, so I wrote a Maxscript that would pack all the UVs into the correct range.

Quite a lot of work for such a quick effect, but originally the timing was a lot slower, so at that point it was worth it 🙂

Bow gems


Although I never really got this as on-concept as I would have liked, I’m pretty happy with how these turned out.

Amusingly, the emissive material didn’t support the alpha thresholding effect.

So there are two layers of mesh: the glowy one and the non-glowy one.
It’s actually the non-glowy layer that fades out!
The glowy stuff is always there, slightly smaller, hidden below the surface.

Dodgy, but got the job done 😛

Dead Space fan art: Necrotle

October 31, 2015


Right in time for Halloween, meet a Necrotle!
That’s a Dead Space Necromorph turtle, fyi.

I started on this guy about 5 years ago, while I was working at Visceral Games in Melbourne. I wasn’t on the Dead Space project(s), I just felt like doing some fan art, and decided to come up with the most silly idea for an animal Necromorph I could think of (a giraffe was also in the plans, at one point…) 🙂

As with many of my home projects, I got sick of it and shelved it for a while. Decided a few weeks ago to resurrect the little fella!
And now I’m sick of looking at it again, and I’m calling it done 😉

Started with a very basic sculpt in 3dcoat, then modeling, additional sculpting, texturing, rendering in Modo.

Rohan Dalvi + Shangri-La themed procedural islands

September 5, 2015


Over the last month or two, I’ve been working through a fantastic set of Houdini tutorials by Rohan Dalvi:

Rohan Dalvi Tutorials

They reminded me quite a lot of the floating islands in Far Cry 4’s Shangri-La (which I was lucky enough to work on, and briefly mentioned here), so for a bit of fun I went with a “Shangri-La” theme.

I changed up quite a few things along the way, but the result is not far removed from a palette swap on the original tutorials.
So please, check out the tutorials if you want to know how it’s done, it’s a mostly procedural workflow 🙂

More Shangri-La stuff

If you want to see some more of the concept for Shangri-La, you should check out Kay Huang’s portfolio here:


Here’s our art director Josh Cook chatting about how we ended up where we did, and the intention behind the art of Shangri-La:


Also, Jobye-Kyle Karmaker shared some insight into his level art process on the floating islands section of Shangri-La, and the reveal leading up to it:


Just a very small sample of the amazing art team we had on that project 🙂

Houdini? Who don’t-y?

June 24, 2015

I’ve been waiting about a year to use that blog post title. Don’t judge me…

I bought Houdini Indie about a year ago, and up until a few months ago I hadn’t used it.
In the last few months, I’ve started learning fracturing and pyro effects (smoke, fire, etc).

In this video, I’m fracturing an object and generating “smoke” (dust is the intention, but I haven’t added particles to it, so it definitely looks like smoke).

Brief background on Houdini fluid sims

Very brief, because I’m still learning 😛

In Houdini, you create volumetric fields of data that drive fluid simulations, much like in other software like FumeFX.

For a smoke sim, you can get away with just Density and Heat. The Density controls how much smoke gets added per frame (although like everything in Houdini, this is a loose definition). The Heat will move the smoke around using gas pressure simulations.

I’m also using a Velocity field, because it’s one way of getting the pieces of my fractured geometry to disturb the smoke as they move through the fluid.

Each piece of the fractured geometry is glued to pieces next to it using “glue constraints”. These break either when I manually break them, or when a certain amount of force is applied to them.

The goal of this scene

There are plenty of ways of setting up the Smoke Density, and the most common one I’ve seen is just adding Density to the fluid in places where geometry is moving at a certain speed.

Instead of that, I wanted to add dust when a constraint breaks (based off the mass of the pieces), and only add it to the sim if the piece is moving above a certain speed.

The end results are not a great deal different, but there’s a few things I like:

  • Small pieces can shed all their dust before they hit the ground. You don’t end up with streamers of dust all the way to the ground just because something is moving fast.
  •  There’s good variation in the amount of smoke/dust that pieces generate, due to the mass being factored in.
  • A group of pieces can fall off as a chunk, generating some smoke for a few frames. When that chunk hits the ground and breaks again, the broken constraints can generate more smoke. This could look really nice if I had a more complicated scene setup 🙂

The setup

This is what my scene looks like:


There is a tube that I fracture, a ground plane, two simulations (fracturing and the smoke fluid sim) and “SmokeSource” which is where I generate the fields for the fluid simulation.

Tube object (fracture setup)

I won’t go too much into the Fracture setup, because it’s pretty standard, but here’s what that network looks like:


So the top bit does a voronoi fracture on the geometry, the middle bit adds a “depth” value attribute, which is how far each point is from the original surface of the object (I intended to use this for something, but then… didn’t).
The left side sets up which pieces of geo are active, using a box to select the ones I want (everything except the base of the cylinder, basically). The right side creates the glue constraints.
There’s a few File nodes to cache things out to disk.

Most of this is set up through standard shelf tools.

Collapse Sim


Again, pretty basic stuff, most of this is created when you use shelf tools to set up a sim.
The only interesting bits in this are the “Geometry Wrangle” node at the top, and a few things I added to the “Remove Broken” solver.


This is where I’m doing most of the dust setup work (although it probably shouldn’t be, more on that later…).

Here’s the VEX code:

vector c = point("op:/obj/tube_object1/OUT_ACTIVEPOINTS", "Cd", @ptnum);
i@active = (int)c.r;

float DustPerKilo = 0.2;
float DustLiberatedPerMetrePerSecond = 350.0;
float MinimumSpeedForDustLiberation = 0.4;
float MaximumSpeedForDustLiberation = 6.0;
float LiberatedDustDissipationRate = 60.0;
string GlueConstraintPath = "op:/obj/CollapseSim:Relationships/glue_tube_object1/constraintnetwork/Geometry";
string GeoPath = "op:/obj/CollapseSim:tube_object1/Geometry";

int NumPieceAttributes = 2;
int ConnectedPieces[] = {};

for (int PieceCount = 1; PieceCount <= NumPieceAttributes; PieceCount++)
{
	// This is horrible, and would break down for constraints that had more than 2 pieces...
	// Attributes are "Piece1, Piece2" the first time through, "Piece2, Piece1" the next
	string AttributeToFind = "Piece" + itoa(PieceCount);
	string AttachedPieceAttribute = "Piece" + itoa((PieceCount % NumPieceAttributes) + 1);

	// First, get the number of glue constraints that have this piece as "piece 1"
	int NumberOfGlues = findattribvalcount(GlueConstraintPath, "prim", AttributeToFind, @ptnum);

	for (int Count = 0; Count < NumberOfGlues; Count++)
	{
		int Success;
		int CurrentGlueConstraintIndex = findattribval(GlueConstraintPath, "prim", AttributeToFind, @ptnum, Count);

		int PieceVertIndex = primattrib(GlueConstraintPath, AttachedPieceAttribute, CurrentGlueConstraintIndex, Success);
		string PieceName = pointattrib(GlueConstraintPath, "name", PieceVertIndex, Success);
		string Bits[] = split(PieceName, "/");
		ConnectedPieces[len(ConnectedPieces)] = atoi(re_find("([0-9]+)", Bits[1]));
	}
}

int RemovedPieces[] = {};
float RemovedPieceMass = 0.0;

// Check to see if any constraints have been removed
foreach (int PreviousPieceIndex; i[]@aConnectedPieces)
{
	int found = 0;

	// Search current array against last, etc
	foreach (int CurrentPieceIndex; ConnectedPieces)
	{
		if (CurrentPieceIndex == PreviousPieceIndex)
			found = 1;
	}

	// For every broken constraint, add some dust
	if (found == 0)
	{
		RemovedPieces[len(RemovedPieces)] = PreviousPieceIndex;

		int Success;
		RemovedPieceMass = RemovedPieceMass + pointattrib(GeoPath, "mass", PreviousPieceIndex, Success);
	}
}

// Increase the dust amount if we have removed some pieces and use the mass
// of those pieces to scale how much dust is generated
if (RemovedPieceMass > 0.0)
	@DustAmount = @DustAmount + (DustPerKilo * RemovedPieceMass);

f@VelocityMag = length(@v);

// Disperse the freed up dust a little each frame
if (@DustLiberated > 0.0)
	@DustLiberated = max(@DustLiberated - LiberatedDustDissipationRate, 0.0);

// Based off the speed of this piece, transfer some of the Dust to "liberated".
// This allows the dust to be used up over a number of frames,
// faster for fast moving pieces
float VelocityMultiplier = (f@VelocityMag - MinimumSpeedForDustLiberation) / (MaximumSpeedForDustLiberation - MinimumSpeedForDustLiberation);
VelocityMultiplier = clamp(VelocityMultiplier, 0.0, 1.0);

float DustAmountToLiberate = VelocityMultiplier * DustLiberatedPerMetrePerSecond;
DustAmountToLiberate = min(DustAmountToLiberate, @DustAmount);
@DustLiberated = @DustLiberated + DustAmountToLiberate;
@DustAmount = @DustAmount - DustAmountToLiberate;

addvariablename(geoself(), "DustLiberated", "DUSTLIBERATED");

// Store the connected pieces as an attribute (used when comparing between frames)
i[]@aConnectedPieces = ConnectedPieces;

The first loop is pretty ugly to look at. It used to be two separate loops with a bunch of copy-pasted code, not sure it’s any better now that I “cleaned” it up.

Anyway, this code searches through all the constraints in the scene, and finds any constraint that is connected to the current piece.

For each Constraint it finds, it keeps track of the piece of geometry that this one is connected to, and puts it into a “connected pieces” array.

The “connected pieces” array is stored on the geometry as an attribute. Each frame the sim runs, you have access to the previous attribute values in this Geometry Wrangle.
If a piece was connected last frame, but not this frame, I use the mass of the no-longer-connected piece to add a “DustAmount” to our current piece.

Each frame I transfer a bit of the DustAmount (if there is any) to “DustLiberated”, based on the velocity of the piece. This “DustLiberated” value is what I’m using to create the smoke density.
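To show that behaviour outside of VEX, here’s a hypothetical Python sketch of the per-frame transfer, with the constants copied from the wrangle (the function itself is just for illustration):

```python
def step(dust_amount, dust_liberated, speed,
         min_speed=0.4, max_speed=6.0, rate=350.0, dissipation=60.0):
    """One sim frame: disperse old liberated dust, then liberate more
    from the stored DustAmount based on the piece's speed."""
    # disperse last frame's liberated dust a little
    dust_liberated = max(dust_liberated - dissipation, 0.0)
    # speed-based transfer: fast pieces liberate dust quickly
    t = max(0.0, min(1.0, (speed - min_speed) / (max_speed - min_speed)))
    liberated = min(t * rate, dust_amount)
    return dust_amount - liberated, dust_liberated + liberated
```

This is why small, fast pieces can shed all their dust mid-air: a piece above the max speed drains its whole DustAmount in a frame or two, while a slow piece liberates nothing.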

Phew! So not exactly neat code, sorry about that, but hopefully that makes sense.

Remove Broken solver


Nothing very exciting here, but each frame I have an expanding sphere that deletes constraint primitives.
It leaves the points alone, because I still need to look up the points for constraints that have been broken, so keeping the points makes life easier 🙂

After a bunch of frames, it looks like a Pac-Man cylinder. I think that warrants a screenshot:



This is another attribute wrangle, which checks to see if the constraint is marked for deletion, or if either point in the constraint is marked for deletion (by the big sphere of death).
If any of that is true, the whole primitive is marked with the “ToDelete” attribute.

int Point1 = primpoints(0, @primnum)[0];
int Point2 = primpoints(0, @primnum)[1];

i@Point1Delete = point(0, "StuffToDelete", Point1);
i@Point2Delete = point(0, "StuffToDelete", Point2);

i@ToDelete = (i@ToDelete || i@Point1Delete || i@Point2Delete);

Smoke Source


This network imports the results of the Collapse sim, so that it can generate the fields that I need to pass to the Smoke Simulation.
I mentioned that I’m using heat, density and velocity, but I’m actually just using the density as heat. That makes no sense, but I didn’t bother coming up with a better plan 🙂

Anyway, the Density is generated just from chunks of geo with “liberated dust” amounts.
The Velocity is generated from all geometry pieces:


The Velocity field is kinda cute.


Network wise, there’s not a lot of fancy stuff here. A little bit of hackery to avoid errors on the first frame, because the DustLiberated attribute doesn’t exist at that time (hence the switch node, which just uses a condition of “if we are on the first frame, do X”, where X is ignore all the geo).

Probably worth noting that for the density, I’m using points scattered on the surface of the geometry, but I’m deleting the exterior faces, because they are never connected to anything 🙂

Smoke Sim


Nothing very exciting here either, pretty much a standard pyro shelf setup with a wind node thrown in.
I also added a switch node so I could quickly change between a few fluid grid setups for quick previews.

Well that was fun!

So that’s it! Sorry it was a bit of a wall of text.

This was a fun exercise. Although it took me a long time to sort this all out, it really helped me learn more about pyro sims, and Houdini in general.

Aside from making an actual scene to destroy, creating particles for the dust, and tweaking the fluid sim settings to make it better, there’s a few things I thought of half way through this that I’d like to try:

  • Use surface area instead of mass to drive the amount of dust.
  • Combined with the above, instead of generating the dust all over the piece of geometry, I could convert the two pieces of geo to volumes, intersect those volumes and generate the dust only on the intersecting places.
  • The turbulence looks horrible. Yuck. It looks like the smoke is wriggling about in jelly (or jello for those living in America).
  • I think I could probably move the dust calculations into SmokeSource. Currently, if I want to tweak dust amounts, I need to re-sim just about everything, which is annoying.
  • Non-linear reduction of the dust amount might be nice, as the dust cutoff is sometimes a bit sudden.

So… Are you still doing Unreal and Half Life inspired stuff, or did you just get bored and wander off?…

Yeah. Well.
So I was intending to use Houdini to do a bunch of stuff for that, but we’ll see 🙂

The first thing I started trying out (when I had no idea what I was doing) was smashing up my chamber:

Rising damp

March 22, 2015

Wet ground

Complicated materials are a per-pixel, per-frame cost, so it’s not always easy to justify making them.

For example, my tiling vertex-painted metal materials:
This could easily have just been a uniquely unwrapped 2048 tiled texture set, with the detail painted where I want it. The way I set it up allows for memory savings, and re-usability, but sacrifices performance for that.

The best justification for complex materials is for surfaces that change / react to the environment and/or gameplay, because then you don’t really have a choice but to make an expensive material 🙂

I’ve been planning for a while to make some good examples of more necessary complex materials. I started out on a desert scene with swirling sands, but that didn’t really end up where I wanted it, so I’ve gone with a slightly more standard “wet ground” effect:

I won’t go into detail on the Blueprint setup, it just increases / decreases the water level as the player stands in front of the floating red valve. There is a volume around it, and as you move in and out of the volume, the direction of water flow changes.

The water height value feeds into this material:
It’s messy, but can be summarized roughly:

  • Two layers of materials for the ground texture (I’m using a muddy/rocky material and a grass material), each with:
    • Normals
    • Height
    • Albedo
    • Roughness
    • Ambient Occlusion
  • Vertex colours for blending between the two layers
  • Vertex colour used for keeping some areas drier than others, to break up the effect
  • A normal map and colour for the water surface (blended in by depth)

I know it’s not an entirely useful image, but here is the material:


All of the more important bits are in material functions, shown a bit further down. If anyone wants to see the whole network, I’ll take the time to screenshot it properly, and stitch it all together 🙂

Here’s a view showing some of the areas broken up with vertex painting (the two material layers, some dry areas, etc):


The two layer blending is mostly identical in setup to the checker plate material I mentioned at the top.
The only difference being the height values, which get blended together and this resulting height is the main input for the water surface (as well as for breaking up the blending, so that tall rocks stick out a bit from the vertex blend).
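The general height-aware blend can be sketched like this; a hypothetical Python illustration of the technique, not the exact material graph (names and the contrast constant are mine):

```python
def clamp01(x):
    return max(0.0, min(1.0, x))

def height_blend(paint, height_a, height_b, contrast=4.0):
    """paint: vertex colour, 0 = layer A, 1 = layer B.

    The layer heights bias the transition where the paint is ambiguous,
    so tall features (rocks) poke through the other layer near the edge.
    """
    # near paint = 0.5 the height difference dominates; at the extremes
    # the painted value wins outright
    t = paint + (height_b - height_a) * (1.0 - abs(paint * 2.0 - 1.0))
    # sharpen the blend so it isn't a mushy crossfade
    return clamp01((t - 0.5) * contrast + 0.5)
```

So in a half-painted area, whichever layer is “taller” at that pixel shows through, which is what breaks up the vertex blend.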

Modo the things until they are Modo’d

I decided to make all of the textures procedurally.
A smarter man probably would have used the Substance tools, or ZBrush, or Houdini. Or photo source for that matter.

I’m not that guy, I’m Modo guy!! 😛


So I used two layers of fur to make the grass, along with a bunch of procedural noise layers.
Nothing too exciting, but here’s the shader network:


The muddy rocky stuff was a rock in a replicator and a whole bunch of “flow bozo” displacement maps, because they are awesome:


So yeah, I’m not going to win any amazing texture art of the year awards.
I could spend a bit more time on them, or use them as a paint-over base, but I’ve got pretty lazy with this side project now so I didn’t even bother fixing seams 🙂

But, at least it gives me pretty accurate height data to play with, which is important for water-ness!

Water Level

The current water level is passed into the material, and is used to threshold the height map, to work out where the water is.
It might help to visualise this in Photoshop. So let’s pretend we have a height map that has a bunch of dents in it, and the dents will fill up first:


If you put a folder above it with a Threshold adjustment layer (and an invert), you can drag the threshold around to see the water level rise (black is no water, white is water):


This is essentially how I’m controlling the water level in the material, but I’m not clamping the values to give a hard edge. I’ve moved this into a function, to clean up the main material graph a little:


As a side note, the first shader of mine I saw running in a game was a threshold effect like this to make oil run down the side of a plane.
It was for Heroes Over Europe, and was on one of the programmers’ machines, and was ousted for a better approach almost immediately. I was very grateful that she got it in game for me, up until that point I’d just been throwing shaders at 3dsMax. It set me on the path for doing quite a bit of shader work for the next few years 🙂

You’ll notice I’m putting out two return values here: Mask and Depth. The Depth is very similar to the mask, but does not use the falloff value, so it is essentially “how far is this pixel from the current water level”. I use this Depth value to tint the water with a bias, so that I can have muddy puddles that are a little clearer where they are shallow.
It’s pretty subtle, so it may be an unnecessary complication, but here’s an example making it a little more obvious:


The water also has a sine wave running over the height, just to give it a little bit of ebb and flow.
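Putting those pieces together, here’s a hypothetical Python sketch of the water level function: Mask with the soft falloff edge, Depth without it, and the small sine wave on the level (all names and constants are illustrative, not from the actual material):

```python
import math

def water_mask_depth(height, water_level, time, falloff=0.05, wave=0.01):
    # small sine wave on the water level for a bit of ebb and flow
    level = water_level + math.sin(time) * wave
    depth = level - height                        # distance below the surface
    mask = max(0.0, min(1.0, depth / falloff))    # soft shoreline edge
    return mask, max(depth, 0.0)
```

Dragging `water_level` up is exactly the Photoshop threshold analogy above: the dents in the height map fill with water first.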


Right, so, with the water height determined, I can then use the depth of the water for a fake refraction effect.
This is usually where I’d pull out the BumpOffset node, but it uses height maps, and I had a Normal map handy for the water surface.
I made a simple normal based parallax function, just because I’ve had good results with this for various materials (including the UI) on Ashes Cricket 2009, and various previous attempts at rivers and water effects in other games.

Although I’m only using a single transparent layer, my go-to paper has always been “Rendering Gooey Materials with Multiple Layers” by Chris Oat, from Siggraph 2006, just because it has a really nice clear example for parallax offset.

So here’s my parallax offset function:


Please pretend that “Vector_Reflect” is a “CustomReflectionVector” node, btw.

I rolled my own vector reflection node because I didn’t notice CustomReflectionVector… I think I saw that it had an input of CameraVector, and that threw me off. I’ve only just realised this while writing this blog post…

So the parallax function outputs distorted UVs, and these UVs are used to look up the colour textures for the grass and mud. The water normal is just scrolling in one direction, but that seems to give a good enough distortion effect.
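For a rough idea of what that function is doing, here’s a hypothetical Python sketch of a normal-based parallax offset: reflect the view vector about the water normal, then push the UVs along its XY, scaled by the water depth (function names, the scale constant, and the plain-list vector maths are all just for illustration):

```python
def parallax_uv(uv, view_dir, normal, depth, scale=0.1):
    """Offset UVs for a fake refraction look-up under the water surface."""
    # reflect the view vector about the water normal (this stands in for
    # the reflection vector node used in the material)
    dot = sum(v * n for v, n in zip(view_dir, normal))
    refl = [v - 2.0 * dot * n for v, n in zip(view_dir, normal)]
    # offset the UVs by the reflected vector's xy, scaled by water depth
    return (uv[0] + refl[0] * depth * scale,
            uv[1] + refl[1] * depth * scale)
```

With a flat normal and a straight-down view the offset is zero; as the scrolling water normal tilts, the offset wobbles the UVs, which is what gives the distortion.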

Taking it further

So, it was fun to work on for a few days, but there’s plenty of things to do to expand/improve on it visually (including getting a real artist to make the textures :P):

  • Use UE4’s flow maps to make the water flow around objects.
  • Use the back buffer (or last frame) as an input to the shader for refraction. This would be necessary if you wanted to have things sink into the water a little, and be refracted.
  • Get lighting to work above and below water (lighting is done based off the water surface normal, currently). This might be fixed by the previous improvement, if I can render and light the below water layer, then render the water surface using forward rendering and distort the already lit stuff below.
    Multiple lit layers are always a bit of a pain in deferred rendering.
  • It would be really cool if I could have a dynamic texture that I could render height values into, and multiply it on top of the height in the material.
    That way, I could create dynamic ripples, splashes, impact effects, etc!
    Not really sure how I’d go about that in UE4, but it would be neat.

I’m not going to do any of those things, however, because I need to stop getting distracted and get back to my Half Life scene.
Hopefully more on that soon, but this has been a nice side-track 🙂