City Scanner

August 27, 2016

Since I had so much fun with the last Modo / Substance project I did, I thought I’d do another one🙂

This time, I decided to make a City Scanner, from Half Life 2.
It’s a work in progress, and I’ll keep posting regular screenshots up on my twitter, but here’s where I’m currently at:


I could have been smart, and just grabbed the model from Source and built around it, but I need to practice building things from scratch, so I built it based off a bunch of screenshots I grabbed out of the game.

It has quite a few differences from the original, which I’m going to pretend are due to creative license, rather than me screwing up proportions, etc. (I particularly hate the green side panel, and some of the rear details, but I’m not going to fix the modelling on those at this point) …

Building the model

As with everything I do, this was built as an edge-weighted Catmull-Clark subdivision surface, in Modo 10.

Whenever working on these things, I tend to throw some basic Modo procedural materials and render them out, so here’s where I was at by the end of the high poly process:


Once I was happy with the model (read: sick of working on it :P), I created the low poly mesh for it, and unwrapped the thing.

Unwrapping aside, this didn’t take a huge amount of time, because I just used the base sub-d cage, and stripped out a bunch of loops.
It’s pretty heavy still, at about 7000 vertices, but it’ll do!

Painter work

I could have baked the procedural materials out of Modo, and painted over the top of them, etc (Modo actually has some great baking and painting tools these days), but I need to keep using Painter more.

Probably the largest amount of time I spent from this point on was splitting the high and low poly up into lots of different meshes so that I could bake all the maps I needed in Substance Painter.

Models with lots of floating, yet welded intersecting parts are a little bit of a pain for this sort of thing, but I got there eventually.

From Modo, I baked out a Surface ID mask (actually, I used a Diffuse render output, and had flood fill colours on all my materials, but I use it as a Surface ID mask in Painter):


For each of the colour blocks, I set up a folder in Painter that had a Colour
Selection mask on it:


And then I just stack up a bunch of flood fill colour layers with masks until I’m happy.

There’s not a lot of actual painting going on here, at this point, although I do always paint out some parts of the procedural masks, because having even edge wear across the whole model looks pretty silly.

That said, smart masks with flood fill layers aren’t a bad way to flesh out basic wear and tear, etc:


I still need to paint out more of the wear and tear on my model, and put more colour variation in; at the moment it looks a little like it has been through a sandstorm and then thrown down some stairs🙂



Aside from some issues with Reflection Capture Actors (having emissive materials in a scene can mess them up a bit), I really didn’t do much except throw the exported textures from Substance onto the mesh, and put a few lights in.

I did mess about with the texels per pixel, min and fade resolutions, and radius thresholds of the shadow casters a bit, because the default settings for shadows in UE4 are pretty low quality for some reason, even on Epic settings.

The material is really boring at the moment; the only thing it exposes is a multiplier for emissive:


Next steps

I will probably animate this in UE4 at some point, and have it floating around, flashing lights, etc.
And it will end up as a minor piece in an environment at some point, hopefully :)

For now, though, I want to continue down the fun path of Modo sub-d/Substance, so I will probably start working on a new model.

Watch this space, and/or twitter 🙂



Modo 10 on the move

June 19, 2016

A month ago, I had a fun adventure taking a train across Canada (which I can highly recommend, by the way).

I’ve moved from Toronto to Vancouver, so I’ve been sans PC for a few months.

Never fear, though, I could still run Modo on my trusty Surface Pro 1 :)


One of the stops along the way was in Winnipeg.
I had two tasks while there: getting some t-shirts, and finding something to model in Modo (well ok, three, if you include a milkshake at VJ’s).

I decided on this auto-sprinkler thing:


The plan was to do most of the modelling work with standard Pixar sub-d stuff in Modo 901 while on the train.

After I arrived in Vancouver, though, I upgraded to Modo 10, which gave me some fun new tools to play with!

Procedural Text

Non-destructive modelling along the lines of Max’s stack and/or Maya’s history is something that has been discussed for a long time in Modo circles, and it has landed in Modo 10!

So, once the main mesh was finished, I could select an edge loop in the sub-d mesh and use Edges to Curves to create a curve to place the text around.

Then, in a new procedural text item, I reference in the curve, and use it with a Path generator and a path segment generator to wrap the text around the sprinkler base plate:


I couldn’t work out a procedural way to get those letters rotated correctly, so I just fixed that up manually afterwards.

Fusey fuse

Since I wanted the text to be extruded from the surface and to look like it is all one piece, I decided to use Modo’s Mesh Fusion to Boolean the text on:


Since the mesh was a sub-d mesh, I didn’t really need to make a low poly, I just used the cage.
Well… Technically I should probably still make a low poly (the cage is 3500 vertices, which is pretty heavy), but it’s amazing what you can get away with these days, and soon we will have edge weighted sub-d in all major engines anyway (we won’t… But if I say it enough times, maybe it will happen??):


At this point, I unwrapped the cage, to get the thing ready for texturing.

Substance Painter time

I won’t go too much into the process here, because my approach is generally along the lines of: stack up dozens of procedural layers, and mess about with numbers for a few hours…

Since I could not be bothered rendering out a Surface ID map from Modo, I quickly created some base folders with masks created from the UV Chunk Fill mode in the Polygon Fill tool.

So in about 10 minutes I had a base set of folders to work with, and some materials applied:


Hello weird bronze liney thing.
Looks like someone forgot to generate world space normal maps…

Anyway, I went with a fairly standard corroded bronze material for the main body, and tweaked it a little.
Then added a bunch more procedural layers, occasionally adding paint masks to break them up here and there when I didn’t feel like adding too many more procedural controls.

There’s about 30 layers all in all, some on pretty low opacity:


And here’s what I ended up with in Painter:


Pretty happy with that🙂
Could do with some more saturation variation on the pink bits, and the dirt and wear is a bit heavy, but near enough is good enough!

Giant hover sprinkler of doom

And here it is in UE4, really large, and floating in the air, and with a lower resolution texture on it (because 2048 is super greedy :P):


Speaking of UE4: Modo 10 now has materials that are compatible with the base materials in Unreal and Unity, so you can have assets look almost identical in Modo and in engine.

Which is pretty neat. I haven’t played with that feature, but I feel like it will be particularly nice for game artists who want to take Unreal assets into Modo, and render them out for folio pieces, etc.

Factory – pt 4 – (Trimming the flowers)

April 10, 2016

Part 4 of


Alpha card objects

In most games, you have some objects that have on/off alpha transparency, generally for objects that you wouldn’t model all the detail for (leaves, flowers, etc).


^ Exactly like that, beautiful isn’t it?
Years of art training not wasted at all…

Also referred to as punch-through / 1-bit / masked materials, btw.
So, you can see that the see-through portion of that polygon is pretty large.

When rendering these types of assets, you still pay some of the cost of rendering all of those invisible pixels. If you are rendering a lot of these on screen and they are all overlapping, that can lead to a lot of overdraw, so it’s fairly common to cut around the shape to reduce this, something like this:


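To put a rough number on that waste, here’s a quick Python sketch (not from the post; the mask is a made-up 0/1 grid, and the function name is my own) that measures how much of an uncut card is fully transparent:

```python
# Rough sketch: estimate how much fill-rate an uncut alpha card wastes,
# given a 0/1 alpha mask. The mask and names here are invented for
# illustration, not taken from the actual asset.

def wasted_fraction(alpha_mask):
    """Fraction of texels that are fully transparent (still rasterized
    on an uncut quad, but contributing nothing visible)."""
    texels = [a for row in alpha_mask for a in row]
    return sum(1 for a in texels if a == 0) / len(texels)

# A toy 4x4 mask: a small opaque blob in one corner.
mask = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
print(wasted_fraction(mask))  # 0.75 -> three quarters of the quad is dead pixels
```

Cutting the card down to hug the opaque region drives that fraction toward zero, at the cost of a few extra vertices.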
What does this have to do with the factory?
Aren’t you supposed to be building a factory?

I get distracted easily…

I’m not really planning to have a lot of unique vegetation in my factory scene, but I am planning on generating a bunch of stuff out of Houdini.

When I create LODs, that will be in Houdini too, and the LODs will probably be alpha cards, or a combination of meshes and alpha cards.

When I get around to doing that, I probably don’t want to cut around the alpha manually, because… Well, because that sounds like a lot of work, and automating it sounds like a fun task🙂

Houdini mesh cutting tool

The basic idea is to get my image plane, use voronoi fracture to split up the plane, delete any polygons that are completely see-through, export to UE4, dance a happy dance, etc.

For the sake of experiment, I want to try a bunch of different levels of accuracy with the cutting, so I can find a good balance between vertex count, and overdraw cost.

Here’s the results of running the tool with various levels of cutting:


Here’s what the network looks like, conveniently just low resolution enough so as to be totally no use… (Don’t worry, I’ll break it down :))


The first part is the voronoi fracture part:


I’m subdividing the input mesh (so that I end up with roughly a polygon per pixel), then use an Attribute VOP to copy the alpha values from the texture onto the mesh, then blur it a bunch:


I scatter points on that, using the alpha for density, then I join it with another scatter that is just even across the plane. This makes sure that there are enough cuts outside the shape, and I don’t get weird pointy polygons on the outside of the shape.
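As a rough illustration of that two-scatter setup, here’s a Python sketch (my own reimplementation, not the Houdini network; the density function and point counts are invented): one scatter follows the alpha as a density, and a second sparse, even scatter covers the whole plane so the voronoi sites exist outside the shape too.

```python
# Sketch of the two-scatter idea: rejection-sample points with the alpha
# as a density, then union in a sparse, even scatter so the cut sites
# also cover the empty areas outside the shape.
import random

def scatter(density, n, rng):
    """Rejection-sample n points on the unit square; density(x, y) in [0, 1]."""
    pts = []
    while len(pts) < n:
        x, y = rng.random(), rng.random()
        if rng.random() < density(x, y):
            pts.append((x, y))
    return pts

rng = random.Random(42)
# Stand-in "alpha": a hard-edged circular blob in the middle of the plane.
blob = lambda x, y: 1.0 if (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.1 else 0.0

dense = scatter(blob, 50, rng)             # concentrated inside the shape
even = scatter(lambda x, y: 1.0, 10, rng)  # sparse, uniform everywhere
sites = dense + even                       # feed these to the voronoi fracture
```

Raising the second count is what avoids the long pointy polygons at the silhouette, exactly as in the screenshot below.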

Here is an example where I’ve deliberately set the even spread points quite low, so you can see the difference in polygon density around the edges of the shape vs inside the shape:


Counting up the alpha

So, earlier, I mentioned that I subdivided up the input mesh and copied the alpha onto it?
I’ll call this the pixelated alpha mesh, and here’s what that looks like:



Next, I created a sub network that takes the pixelated alpha mesh, pushes it out along its normals (which in this case, is just up), ray casts it back to the voronoi mesh, and then counts up how many “hits” there are on each voronoi polygon.

Then we can just delete any polygon that has any “hits”.

Here is that network:


After the ray sop, each point in the pixelated alpha mesh has a “hitprim”, which will be set to the primitive id that it hit in the voronoi mesh.

I’m using an Attribute SOP to write into an integer array detail attribute on the voronoi mesh for each “hitprim” on the pixelated alpha mesh points, and here’s that code:

int success = 0;
int primId = pointattrib(0, "hitprim", @ptnum, success);

int primhits[] = detail(0, "primhits");

if (primId >= 0)
    setcomp(primhits, 1, primId);
setdetailattrib(0, "primhits", primhits, "add");
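In pure Python, the tally that wrangle builds up looks something like this (a sketch with invented toy data; in Houdini the accumulation happens per point via the “add” detail attribute, and the post then deletes primitives based on these counts):

```python
# Pure-Python sketch of the hit accumulation: each point of the pixelated
# alpha mesh carries a hitprim id (-1 if its ray missed), and we tally
# hits per voronoi primitive so primitives can be filtered afterwards.

def count_hits(hitprims, num_prims):
    """Tally ray hits per voronoi primitive; hitprim is -1 on a miss."""
    primhits = [0] * num_prims
    for prim_id in hitprims:
        if prim_id >= 0:
            primhits[prim_id] += 1
    return primhits

# Toy data: six alpha-mesh points raycast against four voronoi prims.
hits = count_hits([0, 0, 2, -1, 2, 2], num_prims=4)
print(hits)  # [2, 0, 3, 0] -> prims 1 and 3 caught no samples at all
```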

After all that stuff, I drop in a “remesh” node, which cheapens up the mesh a lot.

And back to UE4…

So, with all the above networks packaged into a Digital Asset, I could play with the parameters (the two scatter values), and try out a few different levels of cutting detail, as I showed before:


I’m creating a rather exaggerated setup in UE4 with an excessive amount of overdraw, just for the purposes of this blog post.

For now, I’ve made the alpha cards huuuuuuuuge, and placed them where I used to have the flowers in my scene:


Then, all I need to do is swap in each different version of my alpha card, and then GPU profile!


The camera shot, without any alpha plane cutting optimization, took about 12 ms.

Test1, which is 27 vertices, seemed to be the best optimization. This came in at about 10.2 ms, a saving of about 1.8 ms, which is pretty great!

I was actually expecting Test2 to be the cheapest, since it chops quite a bit more off the shape, and at 84 vertices I didn’t think the extra vertex cost would even register on a GTX 970. Turns out I was wrong, Test2 was marginally more expensive!

This just goes to show, never trust someone about optimization unless they’ve profiled something😛

Test3, at 291 vertices, costs about another 0.3 ms.



Of course, the savings are all quite exaggerated, but in a “real world” scenario I would probably expect to have a lot more instances, all a lot smaller. In which case, going with the lower vertex count mesh seems like it would still make sense (although I will, of course, re-profile when I have proper meshes).

Lots of more fun things to do with this: get it working on arbitrary meshes (mostly working), see if I can use Houdini engine and integrate it into UE4, etc.
Still not sure how much vegetation I’ll have in my factory scene, but I think this will still be useful🙂




Factory – pt 3 (early optimisation is something something)

March 3, 2016

Part 3 of

Optimizing art early, before you have a good sense of where the actual expense of rendering your scene is, can be a pretty bad idea.

So let’s do it!!


I’ll do it #procedurally.
Sort of.

20 gallons of evil per pixel

My ground shader is pretty expensive. It’s blending all sorts of things together, currently, and I still have things to add to it.

I don’t want to optimize the actual material yet, because it’s not done, but it looks like this and invokes shame:


As a side note here, this material network looks a bit like the Utah teapot, which is unintentionally awesome.

Every pixel on this material is calculating water and dirt blending.

But many of those pixels have no water or dirt on them:


So why pay the cost for all of that blending across the whole ground plane?
What can I do about it?

Probably use something like the built in UE4 terrain, you fool

Is probably what you were thinking.
I’m sure that UE4 does some nice optimization for areas of terrain that are using differing numbers of layers, etc.

So you’ve caught me out: The technique I’m going to show off here, I also want to use on the walls of my factory, I just haven’t built that content yet, and I thought the ground plane would be fun to test on🙂

Back to basics

First up, I want to see exactly how much all of the fancy blending is costing.

So I made a version of the material that doesn’t do the water or the dirt, ran the level and profiled them side by side:


^ Simple version of my material vs the water and dirt blending one.


So, you can see above that the material that has no dirt/water blending is 1.6 milliseconds cheaper.

Now, if I can put that material on the areas that don’t need the blending, I can’t expect to get that full 1.6 milliseconds back, but I might get 1 millisecond back.

That might not sound like much, but for a 60 fps game, that’s about 1/16th of the entire scene time.

Every little bit helps: getting that much time back by cutting content alone can take many hours🙂

Splitting the mesh

To put my cheap material onto the non-blending sections, I’ll split the mesh around the areas where the vertex colour masks have a value of 0.

Luckily, the ground plane is subdivided quite highly so that it plays nice with UE4 tessellation and my vertex painting, so I don’t need to do anything fancy with the mesh.

Back to Houdini we go!


So, anything that has > 0 sum vertex colour is being lifted up in this shot, just to make it obvious where the mesh split is happening.

Here’s the network:


The new nodes start at “Attribcreate”, etc.

The basic flow is:

  • “Colour value max” is set as max(@Cd.r, @Cd.g) per point, so it will be non-zero if either dirt or water is present.
  • Two new Max and Min attributes per polygon are created by promoting Colour Value Max from Point –> Polygon, using the Min and Max promotion methods (so if one vertex in the polygon has some dirt/water on it, then the Max value will be non-zero, etc)
  • The polygons are divided into three groups: Polygons that have no vertices with any blending, Polygons that have some blending, Polygons that have all verts that are 100% blending.
  • NOTE: For the purposes of this blog post, all I really care about is if the Polygon has no dirt/water or if it has some, but having the three groups described above will come in handy in a later blog post, you’ll just have to trust me🙂
  • The two groups of polygons I care about get two different materials applied to them in Houdini.
    When I export them to UE4, they maintain the split, and I can apply my cheaper material.
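The grouping step above boils down to a small classification. Here’s a Python sketch of it (my own function and group names; it assumes the per-vertex blend values are in the 0–1 range):

```python
# Sketch of the min/max promotion step: per polygon, take the min and max
# of the per-vertex blend values, then bucket the polygon into one of the
# three groups described above. Names are invented for illustration.

def classify(poly_vertex_values):
    """Bucket a polygon by the min/max of its per-vertex blend values."""
    lo, hi = min(poly_vertex_values), max(poly_vertex_values)
    if hi == 0.0:
        return "no_blend"    # no vertex has any dirt/water: cheap material
    if lo >= 1.0:
        return "full_blend"  # every vertex is fully blended
    return "some_blend"      # partial: needs the expensive blending material

polys = [[0.0, 0.0, 0.0, 0.0], [0.0, 0.3, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0]]
print([classify(p) for p in polys])  # ['no_blend', 'some_blend', 'full_blend']
```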

So, re-exported, here it is:

Looks the same?

Great, mission successful! Or is it…

Checking the numbers

Back to the GPU profiler!


Ok, so the column on the right is with my two materials, the column in the middle is using the expensive material across the whole ground plane.

So my saving was a bit under one millisecond in this case.
For an hour or two of work that I can re-use in lots of places, I’m willing to call that a success🙂

Getting more back

Before cleaning up my shader, there’s a few more areas I can/might expand this, and some notes on where I expect to get more savings:

  • I’ll have smaller blending areas on my final ground plane (less water and dirt) and also on my walls. So the savings will be higher.
  • I might mask out displacement using vertex colours, so that I’m not paying for displacement across all of my ground plane and walls.
    Flat wall sections that aren’t on a corner of the building, and/or are more than a few metres from the ground, can probably go without displacement.
  • The centre of the water puddles is all water: I can create a third material that just does the water stuff, and split the mesh an extra time.
    This means that the blending part of the material will be just the edges of the puddles, saving quite a lot more.

So all in all, I expect I can claw back a few more milliseconds in some cases in the final scene.

One final note, the ground plane is now three draw calls instead of one.
And I don’t care.
So there.🙂







Factory – pt 2 (magical placeholder land)

February 17, 2016

Part 2 of:


I had to split this post up, so I want to get this out of the way:
You’re going to see a lot of ugly in the post. #Procedural #Placeholder ugly🙂

This post is mostly about early pipeline setup in Houdini Python, and UE4 c++.

Placeholder plants

For testing purposes, I made 4 instances of #procedural plants using l-systems:


When I say “made”, I mean ripped from my Shangri-La tribute scene, just heavily modified:

Like I mention in that post, if you want to learn lots about Houdini, go buy tutorials from Rohan Dalvi.
He has some free ones you can have a run through, but the floating islands series is just fantastic, so just buy it😛

These plants I exported as FBX, imported into UE4, and gave them a flat vertex colour material, ’cause I ain’t gonna bother with unwrapping placeholder stuff:


The placeholder meshes are 4000 triangles each.
Amusingly, when I first brought them in, I hadn’t bothered checking the density, and they were 80,000+ triangles, and the frame rate sat at a horrible 25 fps😛

Houdini –> UE4

So, the 4 unique plants are in UE4. Yay!

But, I want to place thousands of them. It would be smart to use the in-built vegetation tools in UE4, but my purpose behind this post is to find some nice generic ways to get placement data from Houdini to UE4, something that I’ve been planning to do in my old Half Life scene for ages.
So I’m going to use Instanced Static Meshes🙂

Generating the placements

For now, I’ve gone with a very simple method of placing vegetation: around the edges of my puddles.
It will do for the sake of example. So here’s the puddle and vegetation masks in Houdini (vegetation mask on the left, puddle mask on the right):


A couple of layers of noise, and a fit range applied to vertex colours.

I then just scatter a bunch of points on the mask on the left, and then copy flowers onto them, creating a range of random scales and rotations:


The node network for that looks like this:


Not shown here, off to the left, is all the flower setup stuff.
I’ll leave that alone for now, since I don’t know if I’ll be keeping any of that🙂

The right hand side is the scattering, which can be summarized as:

  • Read ground plane
  • Subdivide and cache out the super high poly plane
  • Move colour into Vertex data (because I use UVs in the next part, although I don’t really have to do it this way)
  • Read the brick texture as a mask (more on that below)
  • Move mask back to Point data
  • Scatter points on the mask
  • Add ID, Rotation and Scale data to each point
  • Flip YZ axis to match UE4 (could probably do this in Houdini prefs instead)
  • Python all the things out (more on that later)

Brick mask

I mentioned quickly that I read the brick mask as a texture in the above section.
I wanted the plants to mostly grow out of cracks, so I multiplied the mask by the inverted height of the bricks, clamped to a range, using a Point VOP:


And here’s the network, but I won’t explain that node for node, it’s just a bunch of clamps and fits which I eyeballed until it did what I wanted:


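That clamp-and-fit combination is easy to describe outside of VOPs. Here’s a rough Python sketch of the idea (my own names and ranges, not the actual network): the scatter density is the vegetation mask multiplied by the inverted brick height, remapped so plants favour the recessed cracks.

```python
# Not the actual Point VOP network -- just the gist of it in Python.
# Range values (0.2, 0.8) are invented stand-ins for the eyeballed fits.

def fit01(x, lo, hi):
    """Remap x from [lo, hi] to [0, 1], clamped (like Houdini's fit)."""
    if hi == lo:
        return 0.0
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def plant_density(veg_mask, brick_height, lo=0.2, hi=0.8):
    crackiness = fit01(1.0 - brick_height, lo, hi)  # recessed areas = cracks
    return veg_mask * crackiness

print(plant_density(1.0, 0.0))  # 1.0 -> deep crack, full density
print(plant_density(1.0, 1.0))  # 0.0 -> top of a brick, no plants
```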
Python all the things out, huh?

Python and I have a special relationship.
It’s my favourite language to use when there aren’t other languages available.

Anyway… I’ve gone with dumping my instance data to XML.
More on that decision later.

Now for some horrible hackyness:

node = hou.pwd()
from lxml import etree as ET

geo = node.geometry()

root = ET.Element("ObjectInstances")

for point in geo.points():
    pos         = point.position()
    scale       = hou.Point.attribValue(point, 'Scale')
    rotation    = hou.Point.attribValue(point, 'Rotation')
    scatterID   = "Flower" + repr(hou.Point.attribValue(point, 'ScatterID') + 1)

    PosString   = repr(pos[0]) + ", " + repr(pos[1]) + ", " + repr(pos[2])
    RotString   = repr(rotation)
    ScaleString = repr(scale) + ", " + repr(scale) + ", " + repr(scale)

    ET.SubElement(root, scatterID,
                  Location=PosString,
                  Rotation=RotString,
                  Scale=ScaleString)

# Do the export
tree = ET.ElementTree(root)
tree.write("D:/MyDocuments/Unreal Projects/Warehouse/Content/Scenes/HoudiniVegetationPlacement.xml", pretty_print=True)

NOTE: Not sure if it will post this way, but in Preview the tabbing seems to be screwed up, no matter what I do. Luckily, programming languages have block start and end syntax, so this would never be a prob… Oh. Python. Right.

Also, all hail the ugly hard coded path right at the end there🙂
(Trust me, I’ll dump that into the interface for the node or something, would I lie to you?)

Very simply, this code exports an XML element for each Point.
I’m being very lazy for now, and only exporting Y rotation. I’ll probably fix that later.

This pumps out an XML file that looks like this:

<Flower1 Location="-236.48265075683594, -51.096923828125, -0.755022406578064" Rotation="(0.0, 230.97622680664062, 0.0)" Scale="0.6577988862991333, 0.6577988862991333, 0.6577988862991333"/>


Reading the XML in UE4

In the spirit of slapping things together, I decided to make a plugin that would read the XML file, and then add all the instances to my InstancedStaticMesh components.

First up, I put 4 StaticMeshActors in the scene, and gave each of them an InstancedStaticMesh component in place of the standard StaticMesh component. I could have done this in a Blueprint, but I try to keep Blueprints to a minimum if I don’t actually need them:


As stated, I’m a hack, so the StaticMeshActor needs to be named Flower<1..4>, because the code matches the name to what it finds in the XML.

The magic button

I should really implement my code as either a specialized type of Data Table, or perhaps some sort of new thing called an XMLInstancedStaticMesh, or… Something else clever.

Instead, I made a Magic Button(tm):


XML Object Loader. Probably should have put a cat picture on that, in retrospect.

Brief overview of code

I’m not going to post the full code here for a bunch of reasons, including just that it is pretty unexciting, but the basic outline of it is:

  1. Click the button
  2. The plugin gets all InstancedStaticMeshComponents in the scene
  3. Get a list of all of the Parent Actors for those components, and their labels
  4. Process the XML file, and for each Element:
    • Check if the element matches a name found in step 3
    • If the Actor name hasn’t already been visited, clear the instances on the InstancedStaticMesh component, and mark it as visited
    • Get the position, rotation and scale from the XML element, and add a new instance to the InstancedStaticMesh with that data
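The plugin itself is C++, but the parse-and-bucket part of those steps is easy to sketch with Python’s standard library (hypothetical data matching the XML format above; only Location is parsed here, for brevity):

```python
# Sketch of the XML -> instances flow: parse the file and bucket one
# transform per element under the actor name it targets. The sample
# document and values are made up for illustration.
import xml.etree.ElementTree as ET
from collections import defaultdict

doc = """<ObjectInstances>
  <Flower1 Location="1.0, 2.0, 3.0" Rotation="(0.0, 90.0, 0.0)" Scale="0.5, 0.5, 0.5"/>
  <Flower1 Location="4.0, 5.0, 6.0" Rotation="(0.0, 45.0, 0.0)" Scale="1.0, 1.0, 1.0"/>
  <Flower2 Location="7.0, 8.0, 9.0" Rotation="(0.0, 10.0, 0.0)" Scale="2.0, 2.0, 2.0"/>
</ObjectInstances>"""

instances = defaultdict(list)
for elem in ET.fromstring(doc):
    location = tuple(float(v) for v in elem.get("Location").split(","))
    instances[elem.tag].append(location)  # rotation/scale parse the same way

print({k: len(v) for k, v in instances.items()})  # {'Flower1': 2, 'Flower2': 1}
```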

And that’s it! I had a bit of messing around, originally doing Euler –> Quaternion conversion in Houdini instead of C++, and also not realizing that the rotations were in radians, but all in all it only took an hour or two to throw together in its current, very hacky form🙂

Some useful snippets

The FastXML library in UE4 is great, and made life easy.

I just needed to create a new class inheriting from the IFastXmlCallback interface, and implement the Process<x> functions.

I’d create a new instance in ProcessElement, then fill in the actual data in ProcessAttribute.

Adding an instance to an InstancedStaticMeshComponent is as easy as:


And then, in shortened form, updating the instance data:

FTransform InstanceTransform;
_currentStaticMeshComp->GetInstanceTransform(_currentInstanceID, InstanceTransform);

// ...


_currentStaticMeshComp->UpdateInstanceTransform(_currentInstanceID, InstanceTransform);

One last dirty detail…

That’s about it for the code side of things.

One thing I didn’t mention earlier: In Houdini, I’m using the placement of the plants to generate out the dirt map mask so I can blend in details around their roots:


So when I export out my ground plane, I am putting the Puddles mask into the blue channel of the vertex colours, and the Dirt mask into the red channel🙂

Still to come (for vegetation)

So I need to:

  • Make the actual flowers I want
  • Make the roots/dirt/mossy texture that gets blended in under the plants
  • Build more stuff

Why.. O.o

Why not data tables

I’m all about XML.

But a sensible, less code-y way to do this would be to save all your instance data from Houdini into CSV format, bring it into UE4 as a data table, then use a Construction Script in a blueprint to iterate over the data and add instances to an Instanced Static Mesh.

I like XML as a data format, so I decided it would be more fun to use XML.

Why not Houdini Engine

That’s a good question…

In short:

  • I want to explore similar workflows with Modo replicators at some point, and I should be able to re-use the c++/Python stuff for that
  • Who knows what other DCC tools I’ll want to export instances out of
  • It’s nice to jump into code every now and then. Keeps me honest.
  • I don’t own it currently, and I’ve spent my software budget on Houdini Indie and Modo 901 already :)

If you have any questions, feel free to dump them in the comments, I hurried through this one a little since it’s at a half way point without great results to show off yet!



Factory – pt 1

February 7, 2016

This blog post mostly won’t be about a factory, but if I title it this way, it might encourage me to finish something at home for a change😉

My wife had a great idea that I should re-make some of my older art assets, so I’m going to have a crack at this one, that I made for Heroes Over Europe, 8 years ago:


I was quite happy with this, back in the day. I’d had quite a lot of misses with texturing on that project. The jump from 32*32 texture sheets on a PS2 flight game to 512*512 texture sets was something that took a lot of adjusting to.

I was pretty happy with the amount of detail I managed to squeeze out of a single 512 set for this guy, although I had to do some fairly creative unwrapping to make it happen, so it wasn’t a very optimal asset for rendering!

The plan

I want to make a UE4 scene set at the base of a similar building.
The main technical goal is to learn to use Substance Painter better, and to finally get back to doing some environment art.

Paving the way in Houdini

First up, I wanted to have a go at making a tiling brick material in Substance Painter.
I’ve used it a bit on and off, in game jams, etc, but haven’t had much chance to dig into it.

Now… This is where a sensible artist would jump into a tool like ZBrush, and throw together a tiling high poly mesh.

But, in order to score decently on Technical Director Buzz Word Bingo, I needed to be able to say the word Procedural at least a dozen more times this week, so…


I made bricks #Procedurally in Houdini, huzzah!

I was originally planning to use Substance Designer, which I’ve been playing around with on and off since Splinter Cell: Blacklist, but I didn’t want to take the time to learn it properly right now. The next plan was Modo replicators (which are awesome), but I ran into a few issues with displacement.

Making bricks

Here is the network for making my brick variations, and I’ll explain a few of the less obvious bits of it:


It’s a little lame, but my brick is a subdivided block with some noise on it:


I didn’t want to wait for ages for every brick to have unique noise, so the “UniqueBrickCopy” node creates 8 unique IDs, which are passed into my Noise Attribute VOP, and used to offset the position for two of the noise nodes I’m using on vertex position, as you can see bottom left here:


So that the repetition isn’t obvious, I randomly flip the Y and Z of the brick, so even if you get the same brick twice in a row, there’s less chance of a repeat (that’s what the random_y_180 and random_z_180 nodes are at the start of this section).

Under those flipping nodes, there are some other nodes for random rotations, scale and transform to give some variation.


Each position in my larger tiling pattern has a unique ID, so that I can apply the same ID to two different brick placements, and know that I’m going to have the exact same brick (to make sure it tiles when I bake it out).

You can see the unique IDs as the random colours in the first shot of the bricks back up near the top.

You might notice (if you squint) that the top two and bottom two rows have matching random colours, as do the left two columns and the 2nd and 3rd columns from the right.

Placing the bricks in a pattern

There was a fair bit of manual back and forth to get this working, so it’s not very re-usable, but I created two offset grids, copied a brick onto each point of the grid, and played around with brick scale and grid offsets until the pattern worked.


So each grid creates an “orientation” attribute, which is what rotates the bricks for the alternating rows. I merge the points together and sort them along the X and Y axes (so that the vertex numbers go up across rows).

Now, the only interesting bit here is creating the unique instance ID I mentioned before.
Since I’ve sorted the vertices, I set the ID to be the vertex ID, but I want to make sure that the last two columns and rows match the first two columns and rows, so the pattern tiles.

This is where the two wrangle nodes come in: they just check if the vertex is in the last two columns, and if it is, set the ID to be back at the start of the row.
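That wrangle logic boils down to a small ID remap. A Python sketch of it (my own reconstruction, not the actual wrangle code; shown for columns only):

```python
# Sketch of the tiling-ID trick: IDs run across each row, but the last
# two columns reuse the IDs from the start of the row, so opposite edges
# of the bake get identical bricks.

def brick_id(col, row, cols):
    if col >= cols - 2:   # last two columns wrap...
        col -= cols - 2   # ...back to columns 0 and 1
    return row * cols + col

ids = [brick_id(c, 0, cols=6) for c in range(6)]
print(ids)  # [0, 1, 2, 3, 0, 1] -> column 4 matches column 0, 5 matches 1
```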

So then we have this (sorry, bit hard to read, but pretend that the point IDs on the right match those on the left):


And yes, in case you are wondering, this is a lot of effort for something that could be easier done in ZBrush.
I’m not in the habit of forcing things down slow procedural paths when there is no benefit in doing so, but in this case: kittens!
(I’ve got to break my own rules sometimes for the sake of fun at home :))

Painter time

Great, all of that ugly #Procedural(tm) stuff out of the way, now on to Substance Painter!


So I’ve brought in the high poly from Houdini, and baked it out onto a mesh, and this is my starting point.
I’m not going to break down everything I’ve done in Substance, but here are the layers:


All of the layers are #Procedural(tm), using the inbuilt masks and generators in Painter, which use the curvature, ambient occlusion and thickness maps that Painter generates from your high poly mesh.

The only layer that had any manual input was the black patches, because I manually picked a bunch of IDs from my Houdini ID texture bake, to get a nice distribution:


The only reason I picked so many manually is that Painter seems to have some issues with edge pixels in a Surface ID map, so I had to try not to pick edge bricks.
Otherwise, I could have picked far fewer, and ramped the tolerance up more.

You might notice that the material is a little dark. I still haven’t nailed getting my UE4 lighting setup to match with Substance, so that’s something I need to work on.
Luckily, it’s pretty easy to go back and lighten it up without losing any quality🙂

Testing in UE4


Pretty happy with that, should look ok with some mesh variation, concrete skirting, etc!
I’ll still need to spend more time balancing brightness, etc.

For giggles, I brought in my wet material shader from this scene:


Not sure if I’ll be having a wet scene or not yet, but it does add some variation, so I might keep it🙂

Oh, and in case you were wondering how I generated the vertex colour mask for the water puddles… #Procedural(tm)!


Exported out of Houdini, a bunch of noise, etc. You get the idea🙂

Next up

Think I’ll do some vegetation scattering on the puddle plane in Houdini, bake out the distribution to vertex colours, and use it to drive some material stuff in UE4 (moss/dirt under the plants, etc).

And probably export the plants out as a few different unique models, and their positions to something that UE4 can read.

That’s the current plan, anyway🙂


Shopping for masks in Houdini

January 20, 2016

Houdini pun there, don’t worry if you don’t get it, because it’s pretty much the worst…

In my last post, I talked about the masking effects in Shangri-La, Far Cry 4.

I mentioned that it would be interesting to try out generating the rough masks in Houdini, instead of painting them in Modo.

So here’s an example of a mask made in Houdini, being used in Unreal 4:


Not horrible.
Since it moves along the model pretty evenly, you can see that the hands are pretty late to dissolve, which is a bit weird.

I could paint those out, but then the more I paint, the less value I’m getting out of Houdini for the process.

This is probably a good enough starting point before World Machine, so I’ll talk about the setup.

Masky mask and the function bunch

I exported the Vortigaunt out of Modo as an Alembic file, and brought it into Houdini.
Everything is pretty much done inside a single geometry node:


The interesting bit here is “point_spread_solver”. This is where all the work happens.

Each frame, the solver carries data from one vertex to another, and I just manually stop and bake out the texture when the values stop spreading.

I made the un-calculated points green to illustrate:


A note on “colour_selected_white”: I should really do this bit procedurally. I’m always starting the effect from holes in the mesh, so I could pick the edge vertices that way, instead of manually selecting them in the viewport.

The solver


Yay. Attribwrangle1. Such naming, wow.

Nodes are fun, right up until they aren’t, so you’ll often see me do large slabs of functionality in VEX. Sorry about that, but life is pain, and all that…

Here’s what the attrib wrangle is doing:

int MinDist = -1;

if (@DistanceFromMask == 0)
{
    int PointVertices[] = neighbours(0, @ptnum);

    foreach (int NeighborPointNum; PointVertices)
    {
        int success          = 0;
        int NeighborDistance = pointattrib(0, "DistanceFromMask", NeighborPointNum, success);

        if (NeighborDistance > 0)
        {
            if (MinDist == -1)
            {
                MinDist = NeighborDistance;
            }

            MinDist = min(MinDist, NeighborDistance);
        }
    }
}

if (MinDist > 0)
{
    @DistanceFromMask = (MinDist + 1);
}

Not a very nuanced way of spreading out the values.

For each point, assuming the point has a zero “distance” value, I check the neighboring points.
If a neighbor has a non-zero integer “distance” value, then I take the lowest of all the neighbors, add one to it, and that becomes my “distance” value.

This causes the numbers to spread out over the surface, with the lowest value at the source points, highest value at the furthest distance.
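In Python terms, the per-frame spreading pass looks something like this (a hypothetical five-point chain stands in for the mesh, with point 0 seeded at distance 1):

```python
def spread_once(neighbours, dist):
    """One solver frame: each zero-distance point takes the lowest
    non-zero neighbour distance, plus one."""
    new_dist = dist[:]
    for pt, nbrs in enumerate(neighbours):
        if dist[pt] != 0:
            continue
        nonzero = [dist[n] for n in nbrs if dist[n] > 0]
        if nonzero:
            new_dist[pt] = min(nonzero) + 1
    return new_dist

# A simple chain of 5 points; point 0 is the seeded "mask" point.
neighbours = [[1], [0, 2], [1, 3], [2, 4], [3]]
dist = [1, 0, 0, 0, 0]

# Run frames until the values stop spreading (when I'd stop and bake).
while True:
    nxt = spread_once(neighbours, dist)
    if nxt == dist:
        break
    dist = nxt
# dist now grows away from the seed point
```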

Integers –> Colours

So, the vertices now all have integer distance values on them.
Back up in the mask image, the solver promotes the Distance value up to a Detail attribute, getting the Max Distance of all the points.

In the wrangle node under that, I just loop through all the points and divide each point’s Distance by the Max Distance, and use that to set the colour, or I set it as green if there’s no distance value:

if (@DistanceFromMask > 0)
{
    @Cd = float(@DistanceFromMask - 1) / float(@DistanceFromMaskMax);
}
else
{
    @Cd = {0,1,0};
}

So that produces the gif I showed earlier with the green on it.

Colours –> Textures

Time to jump into SHOPS. See? This is where my awesome title pun comes in.

As simple as it gets, vertex Colour data straight into the surface output:


In my “Out”, I’m using a BakeTexture node to bake the material into a texture, and I end up with this:



Bam! Work is done.
Still wouldn’t have been much point in doing this on Shangri-La, because painting masks in Modo is super quick anyway, but it’s fun to jump back into Houdini every now and then and try new things.

Has led to some other interesting thoughts, though.

  • For Shangri-La, we could have done that at runtime in a compute shader, and generated the mask-out effect from wherever you actually shot an arrow into an enemy.
    That would have been cool.
  • You could probably use Houdini Engine to put the network into UE4 itself, so you could paint the vertex colours and generate the masks all inside UE4.
  • You could do the “erosion” part in Houdini as well, even if you just subdivide the model up and do it using points rather than run it in image space (to avoid seams). Might be hard to get a great resolution out of it.
  • You could do an actual pressure simulation, something along the lines of what this Ben Millwood guy did here. He’s a buddy of mine, and it’s a cool approach, and it’s better than my hacky min values thing.

Crumbling tiger, hidden canyon

January 9, 2016

In the world of Shangri-La, Far Cry 4, lots of things crumble and dissolve into powder.

For example, my buddy Tiges here:


Or, as Max Scoville hilariously put it, the tiger turns into cocaine… NSFW I guess… That gave me a huge chuckle when I first saw it.

I was not responsible for the lovely powder effects in the game, we had a crack team (see what I did there) of Tricia Penman, Craig Alguire and John Lee for all that fancy stuff.
The VFX were such a huge part of the visuals of Shangri-La; the team did an incredible job.

What I needed to work out was a decent enough way of getting the tiger body to dissolve away.
Nothing too fancy, since it happens very quickly for the most part.


I threw together a quick prototype in Unity3d using some of the included library content from Modo:


I’m just using a painted greyscale mask as the alpha, then thresholding through it (like using the Threshold adjustment layer in Photoshop, basically).

There’s a falloff around the edge of the alpha, and I’m applying a scrolling tiled firey texture in that area.

I won’t go into it too much, as it’s a technique as old as time itself, and there are lots of great tutorials out there on how to set it up in Unity / UE4, etc.
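A per-pixel sketch of that threshold-plus-falloff in Python (the parameter names are my own, not the actual shader’s):

```python
def dissolve(mask_value, threshold, edge_width=0.1):
    """One pixel of the dissolve: mask_value is the painted greyscale
    mask, threshold animates from 0 to 1 over the effect's duration.
    Returns (visible, in_edge_band)."""
    visible = mask_value > threshold
    # Visible pixels just above the threshold form the falloff band,
    # which is where the scrolling fiery texture gets applied.
    in_edge_band = visible and (mask_value <= threshold + edge_width)
    return visible, in_edge_band
```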

As it turns out, there was already some poster burning tech that I could use, and it worked almost exactly the same way, so I didn’t need to do the shader work in the end:

You mentioned canyons?

I actually used World Machine to create the detail in the maps.
In the end, I needed to make about 30 greyscale maps for the dissolving effects on various assets.


I’ll use my Vortigaunt fellow as an example, since I’ve been slack at using him for anything else or finishing him (typical!).

First up, for most of the assets, I painted a very rough greyscale mask in Modo:


Then, I take that into World Machine, and use it as a height map.
And run erosion on it:


I then take the flow map out of World Machine, and back into Photoshop.
Overlay the flow map on top of the original rough greyscale mask, add a bit of noise to it.
With a quick setup in UE4, I have something like this:


Sure, it doesn’t look amazing, but for ten minutes’ work it is what it is🙂
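For reference, a sketch of the standard Photoshop “Overlay” blend used to combine the flow map with the rough mask (values in 0..1):

```python
def overlay(base, blend):
    """Photoshop Overlay: darkens darks, lightens lights, and leaves
    the extremes of the base untouched."""
    if base < 0.5:
        return 2.0 * base * blend
    return 1.0 - 2.0 * (1.0 - base) * (1.0 - blend)
```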

You could spend more time painting the mask on some of them (which I did for the more important ones), but in the end, you only see it for a few dozen frames, so many of them I left exactly how they are.

Better-er, more automated, etc

Now that I have the Houdini bug, I would probably generate the rough mask in Houdini rather than painting it.


  • Set the colour for the vertices I want the fade to start at
  • Use a solver to spread the values out from these vertices each frame (or do it in texture space, maybe).
  • Give the spread some variation based off the roughness and normals of the surface (maybe).
  • Maybe do the “erosion” stuff in Houdini as well, since it doesn’t really need to be erosion, just some arbitrary added stringy detail.

Again, though, not worth spending too much time on it for such a simple effect.
A better thing to explore would be trying to fill the interior of the objects with some sort of volumetric effect, or some such🙂
(Which is usually where I’d go talk to a graphics programmer)

Other Examples

I ended up doing this for almost all of the characters, with the exception of a few specific ones (SPOILERS), like the giant chicken that you fight.
That one, and a few others, were handled by Nils Meyer and Steve Fabok, from memory.

So aside from those, and my mate Tiges up there, here’s a few other examples.

Bell Chains


Hard to see, but the chain links fade out one by one, starting from the bottom.

This was tricky, because the particular material we were using didn’t support 2 UV channels, and the chain links are all mapped to the same texture space (which makes total sense).

Luckily, the material *did* support changing UV tiling for the Mask vs the other textures.

So we could stack all of the UV shells of the links on top of each other in UV space, like so:


So then the mask fades from 0 –> 1 in V.
In the material, if we have 15 links, we need to tile V 15 times for the Diffuse, Normal, Roughness, etc., leaving the mask texture tiled once.
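A sketch of the arithmetic (link count and function names are mine, not the actual material setup):

```python
# The mask samples raw V, while the other textures tile V by the link
# count: every link shares the same diffuse/normal space, but each
# gets its own unique band of the mask.
NUM_LINKS = 15

def pack_link_v(link_index, v, num_links=NUM_LINKS):
    """Pack a link's 0..1 V coordinate into its own mask band."""
    return (link_index + v) / num_links

def tiled_v(packed_v, num_links=NUM_LINKS):
    """What the diffuse/normal/roughness textures see: V tiled back
    into 0..1, identical for every link."""
    return (packed_v * num_links) % 1.0
```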

Edwin Chan was working on the assets for these, and I could have just made him manually set that up in Max, but it would have been a bit of a pain, and I’d already asked him to do all sorts of annoying setup on the prayer wheels…

There were 3-4 different bell chain setups, and each of those had multiple LODs for each platform, so I wrote a Maxscript that would pack all the UVs into the correct range.

Quite a lot of work for such a quick effect, but originally the timing was a lot slower, so at that point it was worth it🙂

Bow gems


Although I never really got this as on-concept as I would have liked, I’m pretty happy with how these turned out.

Amusingly, the emissive material didn’t support the alpha thresholding effect.

So there are two layers of mesh: the glowy one and the non-glowy one.
It’s actually the non-glowy layer that fades out!
The glowy stuff is always there, slightly smaller, hidden below the surface.

Dodgy, but got the job done😛

Dead Space fan art: Necrotle

October 31, 2015


Right in time for Halloween, meet a Necrotle!
That’s a Dead Space Necromorph turtle, fyi.

I started on this guy about 5 years ago, while I was working at Visceral Games in Melbourne. I wasn’t on the Dead Space project(s), I just felt like doing some fan art, and decided to come up with the most silly idea for an animal Necromorph I could think of (a giraffe was also in the plans, at one point…) :)

As with many of my home projects, I got sick of it and shelved it for a while. Decided a few weeks ago to resurrect the little fella!
And now I’m sick of looking at it again, and I’m calling it done😉

I started with a very basic sculpt in 3dcoat, then did the modelling, additional sculpting, texturing and rendering in Modo.

Rohan Dalvi + Shangri-La themed procedural islands

September 5, 2015


Over the last month or two, I’ve been working through a fantastic set of Houdini tutorials by Rohan Dalvi:

Rohan Dalvi Tutorials

They reminded me quite a lot of the floating islands in Far Cry 4 Shangri-La (which I was lucky enough to work on, and which I briefly mentioned here), so for a bit of fun I went with a “Shangri-La” theme.

I changed up quite a few things along the way, but the result is not far removed from a palette swap of the original tutorials.
So please, check out the tutorials if you want to know how it’s done, it’s a mostly procedural workflow🙂

More Shangri-La stuff

If you want to see some more of the concept for Shangri-La, you should check out Kay Huang’s portfolio here:

Here’s our art director Josh Cook chatting about how we ended up where we did, and the intention behind the art of Shangri-La:

Also, Jobye-Kyle Karmaker shared some insight into his level art process on the floating islands section of Shangri-La, and the reveal leading up to it:

Just a very small sample of the amazing art team we had on that project🙂