Modo 10 on the move

A month ago, I had a fun adventure taking a train across Canada (which I can highly recommend, by the way).

I’ve moved from Toronto to Vancouver, so I’ve been sans PC for a few months.

Never fear, though, I could still run Modo on my trusty Surface Pro 1 🙂


One of the stops along the way was in Winnipeg.
I had two tasks while there: getting some t-shirts, and finding something to model in Modo (well, OK, three if you include a milkshake at VJ’s).

I decided on this auto-sprinkler thing:


The plan was to do most of the modelling work with standard Pixar sub-d stuff in Modo 901 while on the train.

After I arrived in Vancouver, though, I upgraded to Modo 10, which gave me some fun new tools to play with!

Procedural Text

Non-destructive modelling, along the lines of Max’s stack and/or Maya’s history, is something that has been discussed for a long time in Modo circles, and it has landed in Modo 10!

So, once the main mesh was finished, I could select an edge loop in the sub-d mesh and use Edges to Curves to create a curve to place the text around.

Then, in a new procedural text item, I reference in the curve and use it with a Path Generator and a Path Segment Generator to wrap the text around the sprinkler base plate:


I couldn’t work out a procedural way to get those letters rotated correctly, so I just fixed that up manually afterwards.

Fusey fuse

Since I wanted the text to be extruded from the surface and to look like it is all one piece, I decided to use Modo’s Mesh Fusion to Boolean the text on:


Since the mesh was a sub-d mesh, I didn’t really need to make a low poly; I just used the cage.
Well… technically I should probably still make a low poly (the cage is 3500 vertices, which is pretty heavy), but it’s amazing what you can get away with these days, and soon we will have edge-weighted sub-d in all major engines anyway (we won’t… but if I say it enough times, maybe it will happen??):


At this point, I unwrapped the cage, to get the thing ready for texturing.

Substance Painter time

I won’t go too much into the process here, because my approach is generally along the lines of: stack up dozens of procedural layers, and mess about with numbers for a few hours…

Since I could not be bothered rendering out a Surface ID map from Modo, I quickly created some base folders with masks made using the UV Chunk Fill mode in the Polygon Fill tool.

So in about 10 minutes I had a base set of folders to work with, and some materials applied:


Hello weird bronze liney thing.
Looks like someone forgot to generate world space normal maps…

Anyway, I went with a fairly standard corroded bronze material for the main body, and tweaked it a little.
Then added a bunch more procedural layers, occasionally adding paint masks to break them up here and there when I didn’t feel like adding too many more procedural controls.

There’s about 30 layers all in all, some on pretty low opacity:


And here’s what I ended up with in Painter:


Pretty happy with that 🙂
Could do with some more saturation variation on the pink bits, and the dirt and wear is a bit heavy, but near enough is good enough!

Giant hover sprinkler of doom

And here it is in UE4, really large, and floating in the air, and with a lower resolution texture on it (because 2048 is super greedy :P):


Speaking of UE4: Modo 10 now has materials that are compatible with the base materials in Unreal and Unity, so you can have assets look almost identical between Modo and those engines.

Which is pretty neat. I haven’t played with that feature, but I feel like it will be particularly nice for game artists who want to take Unreal assets into Modo, and render them out for folio pieces, etc.


Factory – pt 3 (early optimisation is something something)


Optimizing art early, before you have a good sense of where the actual expense of rendering your scene is, can be a pretty bad idea.

So let’s do it!!


I’ll do it #procedurally.
Sort of.

20 gallons of evil per pixel

My ground shader is pretty expensive. It’s blending all sorts of things together, currently, and I still have things to add to it.

I don’t want to optimize the actual material yet, because it’s not done, but it looks like this and invokes shame:


As a side note here, this material network looks a bit like the Utah teapot, which is unintentionally awesome.

Every pixel on this material is calculating water and dirt blending.

But many of those pixels have no water or dirt on them:


So why pay the cost for all of that blending across the whole ground plane?
What can I do about it?

Probably use something like the built-in UE4 terrain, you fool

Is probably what you were thinking.
I’m sure that UE4 does some nice optimization for areas of terrain that are using differing numbers of layers, etc.

So you’ve caught me out: I also want to use the technique I’m going to show off here on the walls of my factory; I just haven’t built that content yet, and I thought the ground plane would be fun to test on 🙂

Back to basics

First up, I want to see exactly how much all of the fancy blending is costing.

So I made a version of the material that doesn’t do the water or the dirt, ran the level and profiled them side by side:


^ Simple version of my material vs the water and dirt blending one.


So, you can see above that the material that has no dirt/water blending is 1.6 milliseconds cheaper.

Now, if I can put that material on the areas that don’t need the blending, I can’t expect to get that full 1.6 milliseconds back, but I might get 1 millisecond back.

That might not sound like much, but for a 60 fps game, that’s about 1/16th of the entire scene time.
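As a quick sanity check on those numbers (this is just my arithmetic, not profiler output):

```python
# Rough sanity check on the frame-budget claim: how big a slice of a
# 60 fps frame is a 1 ms saving? (Just arithmetic, nothing measured.)
frame_budget_ms = 1000.0 / 60.0   # ~16.67 ms per frame at 60 fps
saving_ms = 1.0

fraction = saving_ms / frame_budget_ms
print(f"{saving_ms} ms is {fraction:.1%} of a 60 fps frame budget")
```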

Every little bit helps; getting that amount of time back by cutting content alone can take many hours 🙂

Splitting the mesh

To put my cheap material onto the non-blending sections, I’ll split the mesh around the areas where the vertex colour masks have a value of 0.

Luckily, the ground plane is subdivided quite highly so that it plays nice with UE4 tessellation and my vertex painting, so I don’t need to do anything fancy with the mesh.

Back to Houdini we go!


So, anything that has > 0 sum vertex colour is being lifted up in this shot, just to make it obvious where the mesh split is happening.

Here’s the network:


The new nodes start at “Attribcreate”, etc.

The basic flow is:

  • “Colour value max” is set to max(@Cd.r, @Cd.g) per point, so it will be non-zero if either dirt or water is present.
  • Two new Max and Min attributes per polygon are created by promoting “Colour value max” from Point –> Polygon, using the Min and Max promotion methods (so if one vertex in the polygon has some dirt/water on it, then the Max value will be non-zero, etc.)
  • The polygons are divided into three groups: polygons with no blending on any vertex, polygons with some blending, and polygons where every vert is 100% blending.
  • NOTE: For the purposes of this blog post, all I really care about is if the Polygon has no dirt/water or if it has some, but having the three groups described above will come in handy in a later blog post, you’ll just have to trust me 🙂
  • The two groups of polygons I care about get two different materials applied to them in Houdini.
    When I export them to UE4, they maintain the split, and I can apply my cheaper material.
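If it helps, the grouping logic from the list above can be sketched in a few lines of Python (the names are my own; the real version is attribute promotion nodes in Houdini):

```python
# Sketch of the polygon-grouping logic from the Houdini network.
# Each polygon is represented here as a list of per-point
# "colour value max" floats, i.e. max(Cd.r, Cd.g) per point.

def classify_polygon(point_values):
    """Return which material group a polygon belongs to."""
    lo = min(point_values)   # "Min" promotion: smallest value on any point
    hi = max(point_values)   # "Max" promotion: largest value on any point
    if hi == 0.0:
        return "no_blending"      # gets the cheap material
    if lo >= 1.0:
        return "full_blending"    # fully dirt/water covered
    return "some_blending"        # gets the expensive blending material

assert classify_polygon([0.0, 0.0, 0.0, 0.0]) == "no_blending"
assert classify_polygon([0.0, 0.3, 0.0, 0.0]) == "some_blending"
assert classify_polygon([1.0, 1.0, 1.0, 1.0]) == "full_blending"
```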

So, re-exported, here it is:

Looks the same?

Great, mission successful! Or is it…

Checking the numbers

Back to the GPU profiler!


Ok, so the column on the right is with my two materials, the column in the middle is using the expensive material across the whole ground plane.

So my saving was a bit under one millisecond in this case.
For an hour or two of work that I can re-use in lots of places, I’m willing to call that a success 🙂

Getting more back

Before cleaning up my shader, there’s a few more areas I can/might expand this, and some notes on where I expect to get more savings:

  • I’ll have smaller blending areas on my final ground plane (less water and dirt) and also on my walls. So the savings will be higher.
  • I might mask out displacement using vertex colours, so that I’m not paying for displacement across all of my ground plane and walls.
    The walls can probably go without displacement on flat sections that aren’t on a corner of the building and/or are more than a few metres from the ground.
  • The centre of the water puddles is all water: I can create a third material that just does the water stuff, and split the mesh an extra time.
    This means that the blending part of the material will be just the edges of the puddles, saving quite a lot more.

So all in all, I expect I can claw back a few more milliseconds in some cases in the final scene.

One final note, the ground plane is now three draw calls instead of one.
And I don’t care.
So there. 🙂







Factory – pt 1

This blog post mostly won’t be about a factory, but if I title it this way, it might encourage me to finish something at home for a change 😉

My wife had a great idea that I should re-make some of my older art assets, so I’m going to have a crack at this one, that I made for Heroes Over Europe, 8 years ago:


I was quite happy with this, back in the day. I’d had quite a lot of misses with texturing on that project. The jump from 32*32 texture sheets on a PS2 flight game to 512*512 texture sets was something that took a lot of adjusting to.

I was pretty happy with the amount of detail I managed to squeeze out of a single 512 set for this guy, although I had to do some fairly creative unwrapping to make it happen, so it wasn’t a very optimal asset for rendering!

The plan

I want to make a UE4 scene set at the base of a similar building.
The main technical goal is to learn to use Substance Painter better, and to finally get back to doing some environment art.

Paving the way in Houdini

First up, I wanted to have a go at making a tiling brick material in Substance Painter.
I’ve used it a bit on and off, in game jams, etc, but haven’t had much chance to dig into it.

Now… This is where a sensible artist would jump into a tool like ZBrush, and throw together a tiling high poly mesh.

But, in order to score decently on Technical Director Buzz Word Bingo, I needed to be able to say the word Procedural at least a dozen more times this week, so…


I made bricks #Procedurally in Houdini, huzzah!

I was originally planning to use Substance Designer, which I’ve been playing around with on and off since Splinter Cell: Blacklist, but I didn’t want to take the time to learn it properly right now. The next plan was Modo replicators (which are awesome), but I ran into a few issues with displacement.

Making bricks

Here is the network for making my brick variations, and I’ll explain a few of the less obvious bits of it:


It’s a little lame, but my brick is a subdivided block with some noise on it:


I didn’t want to wait for ages for every brick to have unique noise, so the “UniqueBrickCopy” node creates 8 unique IDs, which are passed into my Noise Attribute VOP, and used to offset the position for two of the noise nodes I’m using on vertex position, as you can see bottom left here:


So that the repetition isn’t obvious, I randomly flip the Y and Z of the brick, so even if you get the same brick twice in a row, there’s less chance of a visible repeat (that’s what the random_y_180 and random_z_180 nodes at the start of this section are for).

Under those flipping nodes, there are some other nodes for random rotations, scale and transform to give some variation.
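The variation trick above can be mocked up roughly like this (a hedged Python stand-in for logic that actually lives in Houdini nodes; all the names and the seeding scheme are my own):

```python
# Rough Python mock-up of the brick-variation logic from the Houdini
# network (UniqueBrickCopy, random_y_180, random_z_180). Names and the
# hashing scheme are stand-ins, not the real node parameters.
import random

NUM_UNIQUE_BRICKS = 8   # only 8 unique noise bricks, to keep cook times sane

def brick_variation(instance_id):
    """Deterministic per-instance variation: which of the 8 unique
    noise bricks to use, plus random 180-degree Y/Z flips."""
    rng = random.Random(instance_id)   # seeded: same ID -> same brick
    return {
        "noise_brick": instance_id % NUM_UNIQUE_BRICKS,
        "flip_y_180": rng.random() < 0.5,
        "flip_z_180": rng.random() < 0.5,
    }

# The same instance ID always produces the exact same brick,
# which is what makes the pattern tile when IDs are reused:
assert brick_variation(42) == brick_variation(42)
```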


Each position in my larger tiling pattern has a unique ID, so that I can apply the same ID to two different brick placements, and know that I’m going to have the exact same brick (to make sure it tiles when I bake it out).

You can see the unique IDs as the random colours in the first shot of the bricks back up near the top.

You might notice (if you squint) that the top two and bottom two rows have matching random colours, as do the two leftmost columns and the 2nd and 3rd columns from the right.

Placing the bricks in a pattern

There was a fair bit of manual back and forth to get this working, so it’s not very re-usable, but I created two offset grids, copied a brick onto each point of the grid, and played around with brick scale and grid offsets until the pattern worked.


So each grid creates an “orientation” attribute, which is what rotates the bricks for the alternating rows. I merge the points together and sort them along the X and Y axes (so that the vertex numbers go up across rows).

Now, the only interesting bit here is creating the unique instance ID I mentioned before.
Since I’ve sorted the vertices, I set the ID to be the vertex ID, but I want to make sure that the last two columns and rows match with the first two columns and rows.

This is where the two wrangle nodes come in: they just check if the vertex is in the last two columns, and if it is, set the ID to be back at the start of the row.
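The wrangle logic boils down to something like this (a Python sketch; the grid-size parameter is my own name, and the real thing is VEX):

```python
# Sketch of the wrangle that makes instance IDs tile: points in the
# last two columns reuse the IDs from the first two columns of the
# same row, so bricks on opposite edges of the tile are identical.

def tiling_instance_id(column, row, num_columns):
    if column >= num_columns - 2:
        column -= num_columns - 2   # wrap back to the start of the row
    return row * num_columns + column

# In a 10-column grid, column 8 matches column 0, column 9 matches column 1:
assert tiling_instance_id(8, 0, 10) == tiling_instance_id(0, 0, 10)
assert tiling_instance_id(9, 3, 10) == tiling_instance_id(1, 3, 10)
```

The same wrap can be applied to the last two rows to make the pattern tile vertically as well.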

So then we have this (sorry, bit hard to read, but pretend that the point IDs on the right match those on the left):


And yes, in case you are wondering, this is a lot of effort for something that could be easier done in ZBrush.
I’m not in the habit of forcing things down slow procedural paths when there is no benefit in doing so, but in this case: kittens!
(I’ve got to break my own rules sometimes for the sake of fun at home :))

Painter time

Great, all of that ugly #Procedural(tm) stuff out of the way, now on to Substance Painter!


So I’ve brought in the high poly from Houdini, and baked it out onto a mesh, and this is my starting point.
I’m not going to break down everything I’ve done in Substance, but here are the layers:


All of the layers are #Procedural(tm), using the inbuilt masks and generators in Painter, which use the curvature, ambient occlusion and thickness maps that Painter generates from your high poly mesh.

The only layer that had any manual input was the black patches, because I manually picked a bunch of IDs from my Houdini ID texture bake, to get a nice distribution:


The only reason I picked so many manually is that Painter seems to have some issues with edge pixels in a Surface ID map, so I had to try and not pick edge bricks.
Otherwise, I could have picked a lot fewer, and ramped the tolerance up more.

You might notice that the material is a little dark. I still haven’t nailed getting my UE4 lighting setup to match with Substance, so that’s something I need to work on.
Luckily, it’s pretty easy to go back and lighten it up without losing any quality 🙂

Testing in UE4


Pretty happy with that, should look ok with some mesh variation, concrete skirting, etc!
I’ll still need to spend more time balancing brightness, etc.

For giggles, I brought in my wet material shader from this scene:


Not sure if I’ll be having a wet scene or not yet, but it does add some variation, so I might keep it 🙂

Oh, and in case you were wondering how I generated the vertex colour mask for the water puddles… #Procedural(tm)!


Exported out of Houdini, a bunch of noise, etc. You get the idea 🙂

Next up

Think I’ll do some vegetation scattering on the puddle plane in Houdini, bake out the distribution to vertex colours, and use it to drive some material stuff in UE4 (moss/dirt under the plants, etc).

And probably export the plants out as a few different unique models, and their positions to something that UE4 can read.

That’s the current plan, anyway 🙂


Rising damp

Wet ground

Complicated materials are a per-pixel, per-frame cost, so it’s not always easy to justify making them.

For example, my tiling vertex-painted metal materials:
This could easily have just been a uniquely unwrapped 2048 tiled texture set, with the detail painted where I want it. The way I set it up allows for memory savings, and re-usability, but sacrifices performance for that.

The best justification for complex materials is for surfaces that change / react to the environment and/or gameplay, because then you don’t really have a choice but to make an expensive material 🙂

I’ve been planning for a while to make some good examples of more necessary complex materials. I started out on a desert scene with swirling sands, but that didn’t really end up where I wanted it, so I’ve gone with a slightly more standard “wet ground” effect:

I won’t go into detail on the Blueprint setup, it just increases / decreases the water level as the player stands in front of the floating red valve. There is a volume around it, and as you move in and out of the volume, the direction of water flow changes.

The water height value feeds into this material:
It’s messy, but can be summarized roughly:

  • Two layers of materials for the ground texture (I’m using a muddy/rocky material and a grass material), each with:
    • Normals
    • Height
    • Albedo
    • Roughness
    • Ambient Occlusion
  • Vertex colours for blending between the two layers
  • Vertex colour used for keeping some areas dryer than others, to break up the effect
  • A normal map and colour for the water surface (blended in by depth)

I know it’s not an entirely useful image, but here is the material:


All of the more important bits are in material functions, a bit further down. If anyone wants to see the whole network, I’ll take the time to screenshot it properly and stitch it all together 🙂

Here’s a view showing some of the areas broken up with vertex painting (the two material layers, some dry areas, etc):


The two layer blending is mostly identical in setup to the checker plate material I mentioned at the top.
The only difference is the height values, which get blended together; the resulting height is the main input for the water surface (as well as for breaking up the blending, so that tall rocks stick out a bit from the vertex blend).
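The height-biased blend is roughly this idea (a loose Python sketch with made-up parameter names, not the actual material nodes):

```python
# Sketch of a height-biased two-layer blend: the painted vertex mask
# drives the blend, but the layers' height maps bias it so that tall
# features of layer A poke through the paint. Names are my own.

def blend_layers(height_a, height_b, vertex_mask, sharpness=4.0):
    """Blend factor toward layer B (0 = all A, 1 = all B)."""
    # Height difference pushes the blend toward whichever layer is taller,
    # but only where the painted mask is partial (the mask*(1-mask) term).
    bias = (height_b - height_a) * sharpness
    t = vertex_mask + bias * vertex_mask * (1.0 - vertex_mask)
    return min(max(t, 0.0), 1.0)

# With no painting, layer A shows regardless of heights:
assert blend_layers(1.0, 0.0, 0.0) == 0.0
# Mid-paint, a tall layer-A feature pulls the blend back toward A:
assert blend_layers(1.0, 0.0, 0.5) < 0.5
```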

Modo the things until they are Modo’d

I decided to make all of the textures procedurally.
A smarter man probably would have used the Substance tools, or ZBrush, or Houdini. Or photo source for that matter.

I’m not that guy, I’m Modo guy!! 😛


So I used two layers of fur to make the grass, along with a bunch of procedural noise layers.
Nothing too exciting, but here’s the shader network:


The muddy rocky stuff was a rock in a replicator and a whole bunch of “flow bozo” displacement maps, because they are awesome:


So yeah, I’m not going to win any amazing texture art of the year awards.
I could spend a bit more time on them, or use them as a paint-over base, but I’ve got pretty lazy with this side project now so I didn’t even bother fixing seams 🙂

But, at least it gives me pretty accurate height data to play with, which is important for water-ness!

Water Level

The current water level is passed into the material, and is used to threshold the height map, to work out where the water is.
It might help to visualise this in Photoshop. So let’s pretend we have a height map that has a bunch of dents in it, and the dents will fill up first:


If you put a folder above it with a Threshold adjustment layer (and an invert), you can drag the threshold around to see the water level rise (black is no water, white is water):


This is essentially how I’m controlling the water level in the material, but I’m not clamping the values to give a hard edge. I’ve moved this into a function, to clean up the main material graph a little:


As a side note, the first shader of mine I saw running in a game was a threshold effect like this to make oil run down the side of a plane.
It was for Heroes Over Europe, and was on one of the programmers’ machines, and was ousted for a better approach almost immediately. I was very grateful that she got it in game for me, up until that point I’d just been throwing shaders at 3dsMax. It set me on the path for doing quite a bit of shader work for the next few years 🙂
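Back to the function: stripped down to Python, the soft-threshold idea looks something like this (parameter names are mine, and the real thing is a UE4 material function):

```python
# Sketch of the water-level threshold. Instead of a hard
# Photoshop-style threshold, a falloff value softens the edge.

def water_mask(height, water_level, falloff=0.1):
    """0 where dry, 1 where fully under water, soft in between."""
    t = (water_level - height) / falloff
    return min(max(t, 0.0), 1.0)

def water_depth(height, water_level):
    """How far below the water level this pixel sits (no falloff)."""
    return max(water_level - height, 0.0)

assert water_mask(0.8, 0.5) == 0.0         # above water: dry
assert water_mask(0.2, 0.5) == 1.0         # well below: fully wet
assert 0.0 < water_mask(0.45, 0.5) < 1.0   # near the surface: soft edge
```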

You’ll notice I’m putting out two return values here: Mask and Depth. The Depth is very similar to the mask, but does not use the falloff value, so it is essentially “how far is this pixel from the current water level”. I use this Depth value to tint the water with a bias, so that I can have muddy puddles that are clearer where they are shallow.
It’s pretty subtle, so it may be an unnecessary complication, but here’s an example making it a little more obvious:


The water also has a sine wave running over the height, just to give it a little bit of ebb and flow.


Right, so, with the water height determined, I can then use the depth of the water for a fake refraction effect.
This is usually where I’d pull out the BumpOffset node, but it uses height maps, and I had a Normal map handy for the water surface.
I made a simple normal based parallax function, just because I’ve had good results with this for various materials (including the UI) on Ashes Cricket 2009, and various previous attempts at rivers and water effects in other games.

Although I’m only using a single transparent layer, my go-to paper has always been “Rendering Gooey Materials with Multiple Layers” by Chris Oat, from Siggraph 2006, just because it has a really nice clear example for parallax offset.

So here’s my parallax offset function:


Please pretend that “Vector_Reflect” is a “CustomReflectionVector” node, btw.

I rolled my own vector reflection node because I didn’t notice CustomReflectionVector… I think I saw that it had an input of CameraVector, and that threw me off, and I’ve only just realised while writing this blog post…

So the parallax function outputs distorted UVs, and these UVs are used to look up the colour textures for the grass and mud. The water normal is just scrolling in one direction, but that seems to give a good enough distortion effect.
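For the curious, the core of a parallax offset like this can be sketched in a few lines (a simplified 2D version with my own parameter names, not the actual node network from the screenshot):

```python
# Simplified sketch of the parallax-offset idea: push the UVs along
# the tangent-space view direction, scaled by the water depth, so
# deeper water appears more distorted.

def parallax_uv(uv, view_ts, depth, scale=0.05):
    """uv: (u, v); view_ts: normalized tangent-space view vector
    (x, y, z with z pointing out of the surface); depth: water depth."""
    u, v = uv
    vx, vy, vz = view_ts
    # Classic parallax: project the view vector onto the surface plane.
    offset = depth * scale
    return (u + vx / vz * offset, v + vy / vz * offset)

# Looking straight down the surface normal, there is no offset at all:
assert parallax_uv((0.5, 0.5), (0.0, 0.0, 1.0), depth=1.0) == (0.5, 0.5)
```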

Taking it further

So, it was fun to work on for a few days, but there’s plenty of things to do to expand/improve on it visually (including getting a real artist to make the textures :P):

  • Use UE4’s flow maps to make the water flow around objects.
  • Use the back buffer (or last frame) as an input to the shader for refraction. This would be necessary if you wanted to have things sink into the water a little, and be refracted.
  • Get lighting to work above and below water (lighting is done based off the water surface normal, currently). This might be fixed by the previous improvement, if I can render and light the below water layer, then render the water surface using forward rendering and distort the already lit stuff below.
    Multiple lit layers are always a bit of a pain in deferred rendering.
  • It would be really cool if I could have a dynamic texture that I could render height values into, and multiply it on top of the height in the material.
    That way, I could create dynamic ripples, splashes, impact effects, etc!
    Not really sure how I’d go about that in UE4, but it would be neat.

I’m not going to do any of those things, however, because I need to stop getting distracted and get back to my Half Life scene.
Hopefully more on that soon, but this has been a nice side-track 🙂

Flickery lights

It’s been a while since I’ve worked on this scene, so I thought I’d ease back in by playing around with materials again!

When the big machine in the roof of my scene turns on, I wanted to turn on a bunch of lights.
Here it is!

I’m doing this with a mix of actual lights, and an animated dynamic material.

Each piece of the geometry that is lit has vertex colour data on it, and the vertex colour is used to offset the flickering (which I’ll explain a bit later).

Here’s what the vertex colours look like (I’m only using the red channel at the moment, but I might use green and blue for other data later when I want to damage the lights):


And here’s the material:


The stuff outside of the “Flicker” group is just regular Normals, Roughness, Metallic, etc.
I’m not using a separate emissive texture, I’m just masking the parts of the Albedo that are glowy, and using those in the material emissive slot. The mask is stored in the Albedo alpha channel.

Now, for the flickering part…

I’m using a texture to determine the brightness of the flicker, each pixel representing a different light intensity at a different point in time (I stretched the image to make it obvious, but it’s actually a 256 * 1 texture):


The vertex colour that I mentioned before is the offset into the above texture. Each light has a different offset into the flicker pattern, so they all go through the same cycle, but start at different points in the texture.
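The lookup works something like this (a Python sketch; the stand-in flicker pattern, names, and time-to-index mapping are my own, not the real texture or material):

```python
# Sketch of the flicker lookup: a 256-entry brightness table, with the
# per-light vertex colour acting as a phase offset into it.

# Stand-in flicker pattern (the real one is a hand-made 256 * 1 texture):
FLICKER = [abs((i * 37) % 256 - 128) / 128.0 for i in range(256)]

def light_brightness(time, vertex_colour_r, speed=60.0,
                     min_emissive=0.1, max_emissive=2.0):
    # Vertex colour (0..1) becomes a 0..255 offset into the table,
    # so every light runs the same cycle out of phase.
    index = (int(time * speed) + int(vertex_colour_r * 255)) % 256
    return min_emissive + FLICKER[index] * (max_emissive - min_emissive)

# Two lights with different vertex colours flicker out of phase,
# but both stay inside the emissive range:
assert 0.1 <= light_brightness(1.0, 0.0) <= 2.0
assert light_brightness(1.0, 0.0) != light_brightness(1.0, 0.5)
```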

There are parameters for the strength of the flicker, minimum and maximum emissive strength, etc.
These parameters are controlled by a dynamic material instance, so that I can play with their values in the RoofMachine blueprint, like so:


And, finally, I just set up some curves to control these parameters over the “power up” phase, which I just played around with until I was reasonably happy:


And that’s about it!

I’ve also done a little tweaking of the lighting, etc, and although it’s a bit too gloomy at the moment, it hides my nasty wall material that currently looks like cheese 🙂


Fix it in post

This weekend’s UE4 challenge was to start working on my “teleport Dr Freeman to Xen” effect.

I wanted to keep it much in the same vein as the original, so it’s a flashy greeny glowy thing 🙂


I’ll probably need to do more work on it at some point. It’s pretty weighty, sitting at 94 shader instructions and 4 texture samples (including the colour buffer). Having just screenshot the thing, there’s at least a few places with redundant nodes to clean up 🙂

I can probably also remove the colour buffer and a bunch of instructions, I had planned to run distortion over it, but the effect is pretty quick, so I’m not sure it really needs it.

Here’s a screenshot of the Blueprint (I know, the resolution isn’t great):

UE4 teleport post fx

A quick breakdown:

  • “Distance from Centre” group gets the pixel’s UV position relative to the middle of the screen
  • “Ring bounds”: the two dynamic properties that control the width of the ring that expands out from the centre.
  • “Is Pixel In Ring” is more or less a big switch statement saying “if we’re outside the ring, do this… in the ring part, do this… in the centre of the ring, do this…” etc. I don’t use branches, though; I just generate mask values and use those in a big stack of lerps 🙂
  • “Overlay Texture Value”: this is where most of the work happens. I use “ScreenAlignedPixelToPixelUVs” to sample two textures with the correct aspect ratio, then I create two copies of the UV values rotating in different directions (from the centre of the screen). I have a noisy web-like texture that gets masked by the ring part generated in the last group, and a more general noisy texture that looks a bit like frosted glass, which blends in from the centre of the ring. At this point, we’re talking greyscale images for all these things, btw, which leads into…
  • “Mix masks…” in this group, I mix the background (colour buffer) with a black circle that expands out of the centre of the screen. I think I could probably just use alpha to blend to the background instead, since I’m not taking advantage of distorting the background (like I mentioned before). The other thing that happens in this group is I add colour to my textures, using a 256 * 1 lookup texture, so I can move the values between various shades of green before going to white. Pretty much a “Gradient Map” from Photoshop 🙂
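The branchless mask-and-lerp approach from the breakdown above, sketched in Python (the smoothstep edges and all names are my own, not the actual material):

```python
# Sketch of the "Is Pixel In Ring" masking: no branches, just masks
# that partition the screen into outside / ring / centre regions,
# which then feed a stack of lerps.

def smoothstep(e0, e1, x):
    t = min(max((x - e0) / (e1 - e0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def ring_masks(dist_from_centre, ring_inner, ring_outer, edge=0.05):
    """Return (outside, ring, centre) masks that sum to 1."""
    outside = smoothstep(ring_outer - edge, ring_outer + edge, dist_from_centre)
    centre = 1.0 - smoothstep(ring_inner - edge, ring_inner + edge, dist_from_centre)
    ring = 1.0 - outside - centre
    return outside, ring, centre

def lerp(a, b, t):
    return a + (b - a) * t

outside, ring, centre = ring_masks(0.5, ring_inner=0.3, ring_outer=0.7)
assert ring > 0.99                            # a mid-ring pixel
assert abs(outside + ring + centre - 1.0) < 1e-6
```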

And that’s about it! There’s some other bits and pieces in there, but nothing too exciting.
If you’re curious about the textures, I made them in World Machine 🙂 I will probably change them quite a bit over the course of the project.

This was a good fun crash course in my first UE4 material (with dynamic properties), first full-screen effect, post-processing volumes, etc.
I’ll probably do another post-process effect when the good doctor recovers on Xen, but we’ll see!

Previous related updates:

Not a leg to stand on
Bevels and Blueprints
Chamber update #2
Resonance cascade