Posts Tagged ‘Houdini’

Subsurface Scattering spherical harmonics – pt 2

March 22, 2017

 

This is my 2nd blog post on using spherical harmonics for depth based lighting effects in Unreal 4.

The first blog post focused on generating the spherical harmonics data in Houdini, this post focuses on the Unreal 4 side of things.

I’m going to avoid posting much code here, but I will try to provide enough information to be useful if you choose to do similar things.

SH data to base pass

The goal was to look up the depth of the object from each light in my scene, and see if I could do something neat with it.

In UE4 deferred rendering, that means that I need to pass my 16 coefficients from the material editor -> base pass pixel shader -> the lighting pass.

First up, I read the first two SH coefficients out of the red and green vertex colour channels, and the rest out of my UV sets (remembering that I kept the default UV set 0 for actual UVs):

SHBaseMatUVs

Vertex colour complications

You’ll notice a nice little hardcoded multiplier up there… This was one of the annoyances with using vertex colours: I needed to scale the value of the coefficients in Houdini to 0-1, because vertex colours are 0-1.

This is different to the normalization part I mentioned in the last blog post, which was scaling the depth values before encoding them in SH. Here, I’m scaling the actual computed coefficients. I only need to do this with the vertex colours, not the UV data, since UVs aren’t restricted to 0-1.

The 4.6 was just a value that worked, using my amazing scientific approach of “calculate SH values for half a dozen models of 1 000 – 10 000 vertices, find out how high and low the final SH values go, divide through by that number +0.1”. You’d be smarter to use actual math to find the maximum range for coefficients for normalized data sets, though… It’s probably something awesome like 0 –> 1.5 pi.

Material input pins

Anyway, those values just plug into the SH Depth Coeff pins, and we’re done!!

Unreal 4 SH depth material

Ok.
That was a lie.
Those pins don’t exist usually… And neither does this shading model:

SHDepthShadingModel

So, that brings me to…

C++ / shader side note

To work out how to add a shading model, I searched the source code for a different shading model (hair I think), and copied and pasted just about everything, and then went through a process of elimination until things worked.
I took very much the same approach to the shader side of things.

This is why I’m a Tech Artist, and not a programmer… Well, one of many reasons 😉
Seriously though, being able to do this is one of the really nice things about having access to engine source code!

The programming side of this project was a bunch of very simple changes across a wide range of engine source files, so I’m not going to post much of it:

P4Lose

There is an awful lot of this code that really should be data instead. But Epic gave me an awesome engine and lets me mess around with source code, so I’m not going to complain too much 😛

Material pins (continued…)

So I added material inputs for the coefficients, plus some absorption parameters.

Sh coeffs

The SH Coeffs material pins are new ones, so I had to make a bunch of changes to material engine source files to make that happen.
Be careful when doing this: Consistent ordering of variables matters in many of these files. I found that out the easy way: Epic put comments in the code about it 🙂

Each of the SH coeffs material inputs is a vector with 4 components, so I need 4 of these to send my 16 coefficients through to the base pass.

Custom data (absorption)

The absorption pins you might have noticed from my material screenshot are passed as “custom data”.
Some of the existing lighting models (subsurface, etc) pass additional data to the base pass (and also through to lighting, but more on that later).

These “custom data” pins can be renamed for different shading models. So you can use these if you’d rather not go crazy adding new pins, and you’re happy with passing through just two extra float values.
Have a look at MaterialGraph.cpp, and GetCustomDataPinName if that sounds like a fun time 🙂

Base pass to lighting

At this point, I’d modified enough code that I could start reading and using my SH values in the base pass.

A good way to test whether the data was valid was to use the camera vector to look up the SH depth values. I knew things were working when I got similar results to what I was seeing in Houdini when using the same approach:

BasePassDebug

That’s looking at “Base Color” in the buffer visualizations.

I don’t actually want to do anything with the SH data in the base pass, though, so the next step is to pass the SH data through to the lighting pass.

Crowded Gbuffer

You can have a giant parameter party, and read all sorts of fun data in the base pass.
However, if you want to do per-light stuff, at some point you need to write all that data into a handful of full screen buffers that the lighting pass uses. By the time you get to lighting, you don’t have per object data, just those full screen buffers and your lights.

These gbuffers are lovingly named GBufferA, GBufferB, GBuffer… You get the picture.

You can visualize them in the editor by using the various buffer visualizers, or explicitly using the “vis” command, e.g: “vis gbuffera”:

visGbuffers

There are some other buffers being used (velocity, etc), but these are the ones I care about for now.

I need to pass an extra 16 float values through to lighting, so surely I could just add 4 new gbuffers?

Apparently not: the limit for simultaneous render targets is 8 🙂

I started out by creating 2 new render targets, so that covers half of my SH values, but what to do with the other 8 values?

Attempt 1 – Packing it up

To get this working, there were things I could sacrifice from the existing buffers above to store my own data.

For example, I rarely use Specular these days, aside from occasionally setting it to a constant, so I could use that for one of my SH values, and just hard code Specular to 1 in my lighting pass.

With this in mind, I overwrote all the things I didn’t think I cared about for stylized translucent meshes:

  • Static lighting
  • Metallic
  • Specular
  • Distance field anything (I think)

Attempt 2 – Go wide!

This wasn’t really ideal. I wasn’t very happy about losing static lighting.

That was about when I realized that although I couldn’t add any more simultaneous render targets, I could change the format of them!

The standard g-buffers are 8 bits per channel, by default. By going 16 bits per channel, I could pack two SH values into each channel, and store all my SH data in my two new g-buffers without needing to overwrite other buffers!

Well, I actually went with PF_A32B32G32R32F, so 32 bits per channel because I’m greedy.

It’s probably worth passing out in horror at the cost of all this at this point: two 128-bit buffers is something like 250 MB of data. I’m going to talk about this a little later 🙂
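
Just to make the packing idea concrete, here’s the general trick sketched as plain Python (names and helpers are illustrative; the real packing and unpacking happens in the base pass and lighting shaders, and my actual buffers were float format): two 0-1 values get squeezed into one 32-bit channel as a pair of 16-bit fixed-point halves.

def pack_pair(a, b):
    # clamp to 0-1, quantize to 16 bits each, then pack into one 32-bit uint
    ia = int(round(min(max(a, 0.0), 1.0) * 65535.0))
    ib = int(round(min(max(b, 0.0), 1.0) * 65535.0))
    return (ia << 16) | ib

def unpack_pair(packed):
    # reverse of the above: split the halves and rescale back to 0-1
    return ((packed >> 16) & 0xFFFF) / 65535.0, (packed & 0xFFFF) / 65535.0

The same trick at 8 bits per value is what the 16-bit-per-channel option I mention further down would use.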

Debugging, again

I created a few different procedural test assets in Houdini with low complexity as test cases, including one where I deleted all but one polygon as a final step, so that I could very accurately debug the SH values 🙂

On top of that, I had a hard coded matrix in the shaders that I could use to check, component by component, that I was getting what I expected when passing data from the base pass to lighting, with packing/unpacking, etc:

const static float4x4 shDebugValues = 
{
	0.1, 0.2, 0.3, 0.4,
	0.5, 0.6, 0.7, 0.8,
	0.9, 1.0, 1.1, 1.2,
	1.3, 1.4, 1.5, 1.6
};

It seems like an obvious and silly thing to point out, but it saved me some time 🙂

Here are some of my beautiful procedural test assets (one you might recognize from the video at the start of the post):

Houdini procedural test asset (rock thing), testobject3, testobject2, testobject1

“PB-nah”, the lazy guide to not getting the most out of my data

Ok, SH data is going through to the lighting pass now!

This is where a really clever graphics programmer could use it for some physically accurate lighting work, proper translucency, etc.

To be honest, I was pleasantly surprised that anything was working at this stage, so I threw in a very un-pbr scattering, and called it a day! 🙂

float3 SubsurfaceSHDepth( FGBufferData GBuffer, float3 L, float3 V, half3 N )
{
	// Absorption controls come through the shading model's custom data pins
	float AbsorptionDistance 	= GBuffer.CustomData.x;
	float AbsorptionPower 		= lerp(4.0f, 16.0f, GBuffer.CustomData.y);

	// Depth through the object from this pixel towards the light, evaluated from the SH coefficients
	float DepthFromPixelToLight 	= Get4BandSH(GBuffer.SHCoeffs, L);
	float absorptionClampedDepth 	= saturate(1.0f / AbsorptionDistance * DepthFromPixelToLight);

	// Wrapped falloff for faces pointing away from the light
	float SSSWrap 			= 0.3f;
	float frontFaceFalloff 		= pow(saturate(dot(-N, L) + SSSWrap), 2);

	// Thicker (or more absorbent) areas transmit less light
	float Transmittance 		= pow(1 - absorptionClampedDepth, AbsorptionPower);

	Transmittance *= frontFaceFalloff;

	return Transmittance * GBuffer.BaseColor;
}
It’s non-view-dependent scattering, using the SH depth through the model towards the light, then dampened by the absorption distance.
The effect falls off by face angle away from the light, but I put a wrap factor on that because I like the way it looks.
For all the work I’ve put into this project, probably the least of it went into the actual lighting model, so I’m pretty likely to change that code quite a lot 🙂

What I like about this is that the scattering stays fairly consistent around the model from different angles:

GlowyBitFront GlowyBitSide

So as horrible and inaccurate and not PBR as this is, it matches what I see in SSS renders in Modo a little better than what I get from standard UE4 SSS.

The End?

Broken things

  • I can’t rotate my translucent models at the moment 😛
  • Shadows don’t really interact with my model properly

I can hopefully solve both of these things fairly easily (store data in tangent space, look at shadowing in other SSS models in UE4), I just need to find the time.
I could actually rotate the SH data, but apparently that’s hundreds of instructions 🙂

Cost and performance

  • 8 UV channels
  • 2 × 128-bit buffers

Not really ideal from a memory point of view.

The obvious optimization here is to drop down to 3 band spherical harmonics.
The quality probably wouldn’t suffer, and that’s 9 coefficients rather than 16, so I could pack them into one of my 128-bit gbuffers instead of two (with one coefficient that wouldn’t quite fit, which I’d have to find a home for).

That would help kill some UV channels, too.

Also, using 32 bits per channel (so 16 bits per SH coefficient) is probably overkill. I could swap over to a 16-bit-per-channel uint buffer, and pack two coefficients per channel at 8 bits each, and that would halve the memory usage again.

As for performance, presumably evaluating 3-band spherical harmonics would be cheaper than 4-band. Well, especially because then I could swap to using the optimized UE4 functions that already exist for 3-band SH 🙂

Render… Differently?

To get away from needing extra buffers and having a constant overhead, I probably should have tried out the new Forward+ renderer:

https://docs.unrealengine.com/latest/INT/Engine/Performance/ForwardRenderer/

Since you have access to per-object data, presumably passing around SH coefficients would also be less painful.
Rendering is not really my strong point, but my buddy Ben Millwood has been nagging me about Forward+ rendering for years (he’s writing his own renderer http://www.lived3d.com/).

There are other alternatives to deferred, or hybrid deferred approaches (like Doom 2016’s clustered forward, or Wolfgang Engel’s culled visibility buffers) that might have made this easier too.
I very much look forward to the impending not-entirely-deferred future 🙂

Conclusion

I learnt some things about Houdini and UE4, job done!

Not sure if I’ll keep working on this at all, but it might be fun to at least fix the bugs.

 

Subsurface Scattering spherical harmonics – pt 1

March 17, 2017

In this post, I’ll be presenting “SSSSH”, which will be the sound made by any real programmer who happens to accidentally read this…

This has been a side project of mine for the last month or so with a few goals:

  • Play around more with Houdini (I keep paying for it, I should use it more because it’s great)
  • Add more gbuffers to UE4, because that sounds like a useful thing to be able to do and understand.
  • Play around with spherical harmonics (as a black box) to understand the range and limitations of the technique a bit better.
  • Maybe accidentally make something that looks cool.

Spherical harmonics

I won’t go too much into the details on spherical harmonics because:
a) There’s lots of good sites out there explaining them and
b) I haven’t taken the time to understand the math, so I really don’t know how it works, and I’m sort of ok with that for now 😛

But at my basic understanding level, spherical harmonics is a way of representing data using a set of functions that take spherical coordinates as an input, and return a value. Instead of directly storing the data (lighting, depth, whatever), you work out a best fit of these functions to your data, and store the coefficients of the functions.

Here is a very accurate diagram:

DataSphere

You’re welcome!
Feel free to reuse that amazing diagram.

SH is good for data that varies rather smoothly, so it tends to be used for ambient/bounced lighting in a lot of engines.

The function series is infinite, so you can decide how many terms you want to use, which determines how many coefficients you store.

For this blog post, I decided to go with 4-band spherical harmonics, because I’m greedy and irresponsible.
That’s 16 float values (an n-band expansion has n² coefficients, so 3 bands would be 9, and 4 bands is 16).

Houdini SH

Thanks to the great work of Matt Ebb, a great deal of work was already done for me:

http://mattebb.com/weblog/spherical-harmonics-in-vops/

I had to do a bit of fiddling to get things working in Houdini 15, but that was a good thing to do anyway, every bit of learning helps!

What I used from Matt were two nodes for reading and writing SH data given the Theta and Phi (polar and azimuthal) angles:

SHFunctions

Not only that, but I was able to take the evaluate code and adapt it to shader code in UE4, which saved me a bunch of time there too.

It’s not designed to be used that way, so I’m sure that it isn’t amazingly efficient. If I decide to actually keep any of this work, I’ll drop down to 3 band SH and use the provided UE4 functions 🙂

Depth tracing in Houdini

I’m not going to go through every part of the Houdini networks, just the meat of it, but here’s what the main network looks like:

NetworkOverview

So all the stuff on the left is for rendering SH coefficients out to textures (more on that later), the middle section is where the work is done, and the right hand side is a handful of debug mode visualizers, including some from the previously mentioned Matt Ebb post.

Hits and misses

I’m doing this in SOPs (geometry operations), because it’s what I know best in Houdini at the moment, as a Houdini noob 🙂
I should try moving it to SHOPs (materials / per-pixel) at some point, if that is at all possible.

To cheat, if I need more per-pixel like data, I usually just subdivide my meshes like crazy, and then just do geometry processing anyway 😛

The basic functionality is:

  • For each vertex in the source object:
    • Fire a ray in every direction
    • Collect every hit
    • Store the distance to the furthest away primitive that is facing away from the vertex normal (so back face, essentially)

All the hits are stored in an array, along with the Phi and Theta angles I mentioned before, here’s what that intersection network looks like currently:

IntersectAll
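
Here’s roughly the same logic sketched as a Python SOP, mostly to make the back-face test explicit. This is not the real network (that’s all SOPs/VOPs), it keeps only the first hit per ray to stay short, and I’m assuming hou.Geometry.intersect takes (origin, direction, out_position, out_normal, out_uvw) and returns a prim number or -1 — treat that signature as an assumption.

import math
import hou

node = hou.pwd()
geo = node.geometry()

theta_steps, phi_steps = 16, 32  # resolution of the ray fan
geo.addAttrib(hou.attribType.Point, "max_back_dist", 0.0)

for point in geo.points():
    origin = point.position()
    normal = hou.Vector3(point.attribValue("N"))  # assumes point normals exist
    furthest = 0.0
    for ti in range(theta_steps):
        theta = math.pi * (ti + 0.5) / theta_steps
        for pj in range(phi_steps):
            phi = 2.0 * math.pi * pj / phi_steps
            direction = hou.Vector3(math.sin(theta) * math.cos(phi),
                                    math.sin(theta) * math.sin(phi),
                                    math.cos(theta))
            hit_pos, hit_n, hit_uvw = hou.Vector3(), hou.Vector3(), hou.Vector3()
            prim = geo.intersect(origin + direction * 0.001, direction,
                                 hit_pos, hit_n, hit_uvw)
            if prim < 0:
                continue  # miss: handled later as zero length
            # keep the furthest hit whose face points away from this vertex
            if hit_n.dot(normal) < 0.0:
                furthest = max(furthest, (hit_pos - origin).length())
    point.setAttribValue("max_back_dist", furthest)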

I’m also keeping track of the maximum hit length, which I will use later to normalize the depth data. The max length is tracked one level up from the getMaxIntersect network from the previous screenshot:

GenerateHits

This method currently doesn’t work very well with objects with lots of gaps in them, because the gaps in the middle of an object will essentially absorb light when they shouldn’t.
It wouldn’t be hard to fix, I just haven’t taken the time yet.

Normalizing

Before encoding the depth values into SH, I wanted to move them all into the 0-1 range, since there are various other places where having 0-1 values makes my life easier later.

One interesting thing that came up here: when tracing rays out from a point, there are always more rays that miss than hit.

That’s because surfaces are more likely to be convex than concave, so at least half of the rays are pointing out into space:

FurryPlane

Realistically, I don’t really care about spherical data; I probably want to store hemispherical data around the inverse normal.
That might cause data problems in severely concave areas of the mesh, but I don’t think it would be too big a problem.
There are hemispherical basis functions that could be used for that, if I were a bit more math savvy:

A Novel Hemispherical Basis for Accurate and Efficient Rendering

Anyway, having lots of values shooting out to infinity (the max hit length) was skewing all of the SH values, and I was losing a lot of accuracy, so I encoded misses as zero-length data instead.
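
The normalization itself is tiny; a minimal sketch (names illustrative), with misses written as zero rather than max length so they don’t drag the SH fit around:

def normalize_depths(depths, max_hit_length):
    # depths: one entry per ray, None for a miss
    normalized = []
    for d in depths:
        if d is None:
            normalized.append(0.0)  # miss: contributes nothing
        else:
            normalized.append(min(d / max_hit_length, 1.0))
    return normalized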

Debug fun times!

So now, in theory, I have a representation of object thickness for every vertex in my mesh!

One fun way to debug it (in Houdini) was to read the SH values using the camera forward vector, which basically should give me depth from the camera (like a z buffer):

SHDepth
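
The lookup itself is just a dot product between the stored coefficients and the SH basis evaluated in the chosen direction. Here’s a hedged sketch truncated to 2 bands (4 coefficients) so it stays readable; the project actually uses 4 bands / 16 coefficients, via Matt Ebb’s nodes:

import math

def eval_sh_2band(coeffs, d):
    # coeffs: [c0, c1, c2, c3], d: normalized direction (x, y, z)
    x, y, z = d
    return (coeffs[0] * 0.282095 +      # Y(0, 0)
            coeffs[1] * 0.488603 * y +  # Y(1,-1)
            coeffs[2] * 0.488603 * z +  # Y(1, 0)
            coeffs[3] * 0.488603 * x)   # Y(1, 1)

Feed the camera forward vector in as the direction and you get the z-buffer-ish image above.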

And, in a different debug mode that Matt Ebb had in his work, each vertex gets a sphere copied onto it, and the sphere is displaced in every direction by the SH value on the corresponding vertex:

vortigauntBalloons

vortigauntBalloons2

This gives a good visual indicator on how deep the object is in every direction, and was super useful once I got used to what I was looking at 🙂

And, just for fun, here is a shot from a point where I was doing something really wrong:

vortigauntClicker

Exporting the data

My plans for this were always to bake out the SH data into textures, partially just because I was curious what sort of variation I’d get out of it (I had planned to use displacement maps on the mesh in Houdini to vary the height).

SHImages
And yes, that’s 4 images worth of SH data, best imported as HDR.
But hey, I like being a bit over the top with my home projects…

One of my very clever workmates, James Sharpe, had the good suggestion of packing the coeffs into UV data as I was whining to him over lunch about the lack of multiple vertex color set support in UE4.
So I decided to run with UVs, and then move back to image based once I was sure everything was working 🙂

PixelVSVertex

Which worked great, and as you can probably see from the shot above, per-vertex (UVs or otherwise) is perfectly adequate 🙂

Actually, I ended up putting coefficients 1-14 into uvs, and the last two into the red and green vertex color channels, so that I could keep a proper UV set in the first channel that I could use for textures.
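
For what it’s worth, that packing step is simple enough to sketch as a Python SOP. I’m assuming a 16-float point attribute called “sh_coeff” here (the attribute and UV set names are illustrative, not the exact ones I used), and the two values headed for vertex colours also need scaling into 0-1, as covered in part 2:

import hou

node = hou.pwd()
geo = node.geometry()

uv_sets = ["uv%d" % i for i in range(2, 9)]  # uv2..uv8 -> 14 coefficients
for name in uv_sets:
    if geo.findPointAttrib(name) is None:
        geo.addAttrib(hou.attribType.Point, name, (0.0, 0.0, 0.0))
if geo.findPointAttrib("Cd") is None:
    geo.addAttrib(hou.attribType.Point, "Cd", (1.0, 1.0, 1.0))

for point in geo.points():
    coeffs = point.attribValue("sh_coeff")  # tuple of 16 floats
    for i, name in enumerate(uv_sets):
        point.setAttribValue(name, (coeffs[i * 2], coeffs[i * 2 + 1], 0.0))
    # last two coefficients go into the red and green vertex colour channels
    point.setAttribValue("Cd", (coeffs[14], coeffs[15], 0.0))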

And then, all the work…

Next blog post coming soon!

In it, I will discuss all the UE4 work, the things I should have done (or done better) or might do in the future, and a few more test shots and scenes from UE4!

To be continued!!

The devil is in the decals

October 20, 2016

autodecal

Frequently when talking about mesh decals in UE4, I get comments about them being annoying to maintain, because every time you change your meshes you have to rebuild / adjust layers of decals.

Now, personally, I don’t really care that much, because my projects are all pretty small, and fixing up decals in Modo is generally a very quick job.

But it’s come up enough that I figured I’d make a “2 metres short of Minimum Viable Product” example of how you could address this.

Houe4dengine

That’s what I’m calling Houdini Engine + UE4 now, just to continue the tradition of me being annoying.

Right. Houdini stuff.
I made a digital asset:

Network.png

There are two inputs, which will get fed in from UE4 (later).
In the Houdini scene, input #1 is the object I want to generate a decal on, and input #2 is a projection plane.

The stuff on the left is actually all redundant, but what I was planning to do was construct layout patterns in Houdini for different decals on one sheet, and let Houdini just automatically do the UV layout. But procedural UV’ing got super annoying, so I decided not to do that.

Anyway…

Extrude plane, cookie with box:

ExtrudeAndCookie.png

Delete faces that are on the opposite side of the projection (dot product driven delete sop, basically).

Since I couldn’t really get the UVs working the way I wanted, I created a centre point on the projection plane, got its normal, constructed U and V vectors, and then projected those onto the verts of the decal mesh.

I did that all in VEX, because it seemed like a good idea at the time.
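
The idea is just a planar projection; here’s a rough Python restatement of it (the real thing is VEX inside the asset, and the centre/normal/scale values below are illustrative placeholders rather than anything read from the plane input):

import hou

node = hou.pwd()
geo = node.geometry()  # the decal mesh

centre = hou.Vector3(0.0, 0.0, 0.0)  # centre point of the projection plane
normal = hou.Vector3(0.0, 0.0, 1.0)  # plane normal
u_axis = normal.cross(hou.Vector3(0.0, 1.0, 0.0)).normalized()
v_axis = normal.cross(u_axis).normalized()
scale  = 2.0  # world-space size covered by the 0-1 UV square

if geo.findVertexAttrib("uv") is None:
    geo.addAttrib(hou.attribType.Vertex, "uv", (0.0, 0.0, 0.0))

for prim in geo.prims():
    for vertex in prim.vertices():
        rel = vertex.point().position() - centre
        u = rel.dot(u_axis) / scale + 0.5
        v = rel.dot(v_axis) / scale + 0.5
        vertex.setAttribValue("uv", (u, v, 0.0))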

I was fairly annoyed with working on it by this point, so I just exposed the rotation and scale of the decal so you can play with it in Unreal 🙂

AutoDecalParams.png

Back in UE4

With that done, and the thing saved as a Houdini Digital Asset, time to load up a shamefully unfinished UE4 project (there are lots of choices here…).

The workflow is:

  • Load the digital asset into the content browser.
  • Drag a copy into the scene.
  • Using “World Outliner Input”, select a plane for the projection, and an object to put decals on:

AutoDecal_outlinerSelect.png

Bam! New decal mesh, floating over the top of the original object. You can save it out using the Houdini Engine bake stuff, or whatever you want to do.

Conclusion

I didn’t bother taking this too far, because I don’t really intend to use it myself, but if I thought it was going to be useful there are a bunch of things I’d probably do.

I mean, aside from completely re-building it from scratch, because it’s a whole bunch of broken hack right now…

  • Expose a few different projection types
  • Create separate Houdini asset that lets you lay out planes on a decal sheet to define regions for different decals (which I started on)
  • Make it work with multiple planes passed into the one asset

With any luck, Epic will just come along with a similar workflow where you can press a button on a projected decal in editor, and it will do this sort of thing for you 🙂

(In the meantime, I’ll just stick with manually doing it in Modo, thanks very much…)

 

City scanner scene – Breakdown pt2

October 13, 2016

Webs.gif

This is part 2 of the breakdown for my recent scene Half-Life 2 scanner scene (part 1 here).

This time, I’m going to focus on the Houdini web setup.

Although it took me a while to get a very subtle result in the end, it was a fun continuing learning experience, and I’m sure I’ll re-use a bunch of this stuff!

Go go Gadget webs!

I saw a bunch of really great photos of spider webs in tunnels (which you can find yourself by googling “tunnel cobwebs concrete” :)).

I figured it would be a fun time to take my tunnel into Houdini, and generate a bunch of animated hanging webby things, and bring them back into UE4.

This fun time ended up looking like a seahorse:

itsaseahorselol.png

I will break this mess down a bit 🙂

Web starting points

PointsAndRaysGraph.png

I import the geometry for the tunnel and rails, and scatter a bunch of points over it, setting their colour to red.

On the right hand side of the seahorse is a set of nodes for creating hanging webs, which is just some straight down line primitives, with a few attributes like noise and thickness added to them.
I’ll come back to these later:

HangingWebs.png

In the top middle of the seahorse, I have a point vop apply two layers of noise to the colour attribute, and also blend the colour out aggressively below the rails, because I only wanted webs in the top half of the tunnel.

The web source points look like this:

WebPoints.png

From these points, I ray cast out back to the original geometry.

Ray casting straight out of these points would be a little boring, though, so I made another point vop that randomizes the normals a little first:

WebNormals.gif

After this, I have a few nodes that delete most of the points generated from the pipe connections: they have a high vertex density, compared to every other bit of mesh, so when I first ran the thing, I had a thousand webs on the pipe connections.
I also delete really small webs, because they look lame.

We are now at seahorse upper left.

Arcy Strangs.

ArcyStrangs.png

Not sure what I was thinking when naming this network box, but I’m rolling with it.

So anyway, the ray cast created a “dist” attribute for distance from the point to the ray hit, in the direction of the normal.

So my “copy1” node takes a line primitive, copies it onto the ray points, sets the length of the line to the “dist” attribute (my word, stamping is such a useful tool in Houdini).

CopyLines.png

Before the copy, I set the vertex red channel from black to red along the length of the line, just for convenience.

Previously, up the chain, I found the longest of all the ray casts and saved it off in a detail attribute. This is very easy to do by just using Attribute Promote, using Maximum as the Promotion Method.

So, I now define a maximum amount of “droop” I want for the webs, a bit of random droop, and then I use those values to move each point of each web down in Y a bit.

WebDroop.png

I sample that ramp parameter up there using the position along the web length, and then multiply that over the droop, so that each end of the web remains fastened in place.
And I don’t really care if webs intersect with the rails, because that’s just how I roll…
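
As a rough sketch of that droop (plain Python, names illustrative; a parabolic curve stands in for the ramp parameter, and each web gets one random value so the strands don’t all sag identically):

def drooped_offset_y(t, max_droop, random_droop, web_random):
    # t: 0-1 position along the web, web_random: one random 0-1 value per web
    ramp = 4.0 * t * (1.0 - t)  # 0 at both fastened ends, 1 in the middle
    return -(max_droop + random_droop * web_random) * ramp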

Fasten your seatbelts, we are entering seahorse spine.

Cross web connecty things

ConnectingWebStrands.png

For each of the webs in the previous section, I create some webs bridging between them.
Here’s the network for that.

ConnectingStrands.png

I use Connect Adjacent Pieces, using Adjacent Pieces from Points, letting the node connect just about everything up.

I use a carve node to cut the spline up, then randomly sort the primitives.

At this point, I decided that I only want two connecting pieces per named web, and I got lazy so I wrote VEX for this:

string CurrentGroupName = "";

string PickedPieces[];
int PieceCount[];

int MaxPerPiece = 2;
int success = 0;

addprimattrib(geoself(), "toDelete", 0, "int");

for (int i = 0; i < nprimitives(geoself()); i ++)
{
    string CurrentName = primattrib(geoself(), "name", i, success);

    int FindIndex = find(PickedPieces, CurrentName);
    
    if (FindIndex < 0)
    {
        push(PickedPieces, CurrentName);        
        push(PieceCount, 1);
    }
    else
    {  
        int CurrentPieceCount = PieceCount[FindIndex];
        
        if (CurrentPieceCount >= MaxPerPiece)
        {
            setprimattrib(geoself(), "toDelete", i, 1, "set");
        }
        else
        {
            PieceCount[FindIndex] = CurrentPieceCount + 1;
        }
    }
    
    setprimattrib(geoself(), "name", i, CurrentName);
}

So that just creates an attribute on a connecting piece called “toDelete”, and you can probably guess what I do with that…

The rest of the network is the same sort of droop calculations I mentioned before.

One thing I haven’t mentioned up to this point, though, is that each web has a “Primitive ID” attribute. This is used to offset the animation on the webs in UE4, and the ID had to get transferred down the chain of webs to make sure they don’t split apart when one web meets another.

At this point, I add a bunch of extra hanging webs off these arcy webs, and here we are:

AllWebWires.png

Then I dump a polywire in, and we’re pretty much good to go!

Well… Ok. There’s the entire seahorse tail section.

For some reason, Polywire didn’t want to generate UVs laid out along the web length.

I ended up using a foreach node on each web, stacking the web sections up vertically in UV space, using a vertex vop, then welding with a threshold:

LayoutUVs.png

Since I have the position, 0-1, along the current web, I could use that to shift the UV sections up before welding.

With that done on every web, my UVs look like this:

UVsHoriz.png

Which is fine.
When I import the meshes into UE4, I just let the engine pack them.

Seriously, though… These are the sorts of meshes where I really wish I could just bake lighting to vertex colours in UE4 instead of a lightmap.
It would look better, and it would have saved me lots and lots of pain…

And here we are, swing amount in red vertex channel, primitive offset (id) in green:

FinalWebs.png

Web contact meshes

I wanted to stamp some sort of mesh / decal on the wall underneath the hanging meshes.
If you have a look back at the top of the seahorse, you might notice an OUT_WebHits node which contains all the original ray hits.

I’m not going to break this down completely, but I take the scatter points, bring in the tunnel geometry, and use the scatter points to fracture the tunnel.

I take that, copy point colour onto the mesh, and subdivide it:

WallWebsSubd.png

Delete all the non red bits, push the mesh out along normals with some noise, polyreduce, done 🙂

WallWebsFinal.png

I could have done much more interesting things with this, but then life is full of regrets isn’t it?

Back to UE4

So, export all that stuff out, bring it into UE4.

Fun story: the first export I did was accidentally over 1 million vertices, and the mesh still rendered in less than half a millisecond on a GeForce 970.
We are living in the future, people.

CobwebsMaterial.png

Most of this material is setting up the swinging animation for the webs, using World Position Offset.

There are two sets of parameters for everything: one for when the web is “idle”, and one for when it is being affected by the Scanner being near it.

To pass the position of the scanner into the material, I have to set up a Dynamic Material Instance, so this is all handled in the web blueprint (which doesn’t do much else).

It also passes in a neutral wind direction for when the webs are idle, which I set from the forward vector of an arrow component, just to make things easy:

WindDirection.png

So now that I have the scanner position, for each vertex in each web I get the distance between it and the scanner, and use that to lerp between the idle and the “windy” settings.

All of these values are offset by the position id that I put in the green channel, so that not all of the webs are moving at exactly the same time.
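
Written out as plain Python rather than material nodes (purely illustrative names and shapes; the real version is World Position Offset math driven by the Dynamic Material Instance), the per-vertex logic is roughly:

import math

def web_offset(vertex_pos, scanner_pos, time,
               idle_strength, windy_strength, influence_radius,
               swing_amount_red, id_offset_green, wind_dir):
    dist = math.dist(vertex_pos, scanner_pos)
    influence = max(0.0, 1.0 - dist / influence_radius)  # 1 near the scanner
    strength = idle_strength + (windy_strength - idle_strength) * influence
    # phase offset from the green channel so the webs don't all swing in sync
    swing = math.sin(time * 2.0 + id_offset_green * 6.2831) * strength
    return tuple(c * swing * swing_amount_red for c in wind_dir)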

Still to come…

Animation approach from Modo to blueprints, lighting rig for the scanner, all the fun stuff! 🙂

City scanner scene – Breakdown pt1

October 12, 2016

EnvWideShot.png

In this post, I’ll go through the construction of the environment for my recently posted Half Life 2 scanner scene.

The point of this project was really just to do a bit of animation on my scanner, and show it off in a simple environment. I can’t remember the last time I did any animation, but my guess would be when I was studying at the AIE over ten years ago 🙂

So with that in mind, figuring I was going to struggle with the animation side, I wanted to keep the environment dead simple. It was always going to be dark, anyway, since I wanted the scanner to light the scene!

Modelling / texturing the tunnel

I looked up a bunch of photo reference for cool tunnels in Europe, presumably the sort of thing that the resistance in city 17 would have used 🙂

I blocked out basic lighting, camera setup, and created the tunnel out of cubes in UE4.
Once I was happy with the layout, I could then just export the blocked out mesh to FBX to use as a template in Modo:

WIP_ExportBlockout.png

I also took the time to make a really basic animatic.
I changed the path of the scanner quite a bit, and timing, etc, but I still found this to be useful:

Anyway, at this point, the scene blockout is in Modo, and I can start building geometry:

WIP_SceneBlockoutModo.png

The geometry itself is dead simple, so I won’t go into that too much, I just extruded along a spline, then beveled and pushed a few edge loops around 🙂

I always use the sculpt tools to push geometry around a little, just to make things feel a bit more natural. Here specifically I was sinking some of the vertices on the side pathways:

WIP_PushVertsModo.png

Layered vertex painted materials can be expensive, so I wanted to avoid going too far down that path.
In the end, I settled on having two layers: concrete, and moldy damp green stuff:

WIP_WallMaterial.png

The green stuff is vertex paint blended on, and the vertex colours for the mask were done in UE4 rather than in Modo, just because it is quick and easy to see what I’m doing in editor.

Most of the materials in the scene were made in Substance painter.
And I’m lazy, so they are usually a couple of layers with procedural masks, and one or two hand painted masks 🙂

substancepainterconcrete

Water plane

Water.gif

For the purposes of this scene, I could get away with a pretty low tech / low quality water plane. As long as it had some movement, and is reflective, then it would do!

The engine provides flow map samples and functions in the content samples, so I just used those. I’ve written my own ones before (and by that, I mean I copied what they were doing in the Portal 2 Water Flow presentation from siggraph 2010), but the UE4 implementation does exactly what I wanted 🙂

And seriously, if you haven’t looked at that presentation, go do it.
They used Houdini to generate water flow, but I’m lazy and ain’t got time for that! (Not for this scene, at any rate).

I just generated mine in Photoshop, using this page as a guide:

Photoshop generated flow maps

At some point, I’d like to see if I can set up the same workflow in Substance Painter and/or Houdini.

Anyway, the material is a bit messy (sorry):

watermaterial

I’m passing the flowmap texture and some timing parameters into the flowmaps material function, and getting a new normal map out of it.

The only other thing going on here is that I have a mask for the edges of the water, where it is interacting with the walls. I blend in different subsurface colour, normal strength and roughness at the edges.

Fog planes

FogPlanes.png

I’ve got a few overlapping fog planes in the scene, with a simple noisy texture, offset by world position (having a different offset on each makes it feel a little more volumetric).

Much like the water, the fog plane has a subtle flow map on it, to fake a bit of turbulence, and the material uses depth fade on opacity to help it blend with the surrounding geometry:

fog

UE4 4.13 mesh decals

I was going to use a bunch of the new 4.13 features originally, but in the end I think the only one I used was “mesh decals”.

These are decals in the old school sense, not the projected decals that UE4 users have probably come to love. In the back of my mind, I had thought I might turn this into a VR scene at some point, and the cost of projected decals is a somewhat unknown commodity for me at the moment.

The main advantage of mesh decals, vs floating bits of geometry with Masked materials, is that mesh decals support full alpha blending.

In these shots, the water puddle, stain and concrete edge damage are all on part of the same decal sheet:

The decals are all using Diffuse, Normals, Roughness, Metallic and Occlusion (the last three packed together):

DecalsTextures.png

I built the decals one at a time, without much planning, basically guessing at how much texture space I thought I was going to need (I didn’t bother setting a “texels per metre” type of limit for my project, but that probably would have been sensible).

Each time I wanted a new mesh decal, I’d work out in Modo how big I want it first:

ModoDecalMeshes.png

Then I’d copy it into a separate Modo scene just for Decal Layout which I take into Substance Painter.
I just did this so I could keep all the mesh together in one space, to keep it easy for painting:

ModoDecalScene.png

And then here is the scene in Substance:

SubstancePainterDecalScene.png

And here is the scene with and without decals:

meshdecals

What’s great about this, is that mesh decals don’t show up in Shader Complexity, so the tech artists on the project will never know… (I kid, I kid. They will find them in PIX, and will hunt you down and yell at you).

I really like this approach to building wear and tear into materials. The first time I saw this approach was when I was working at Visceral Games in Melbourne, and the engine was very well optimized to handle a pretty huge amount of decals. I didn’t embrace it as much as I should have, back then.

Rails

A few years back, I made a blueprint for pipes that allowed joining sections, etc.
So I knocked together a model in Modo for the connection pieces:

RailBracketModo.png

Edge-weighted sub-d, of course, because I can’t help myself 🙂
I even started sculpting in some heavy rust, but had to have a stern word to myself about not spending too much time on stuff that isn’t even going to be lit…

Textured in Substance Painter:

railbracketsubstance

Same dealio with the pipe segments:

railsubstance

Then I just built the spline in the editor, and set it up like in my old blog post.

Much like I did with the original blockout geometry, I also exported the final pipes back out to Modo so that I could use them to work out where I wanted to put some decals.

The only other thing that was a pain, was that the pipes need lightmaps, but I couldn’t work out a way to generate unique UVs for the final pipe mesh.

In the end, I just used the merge actors function in the editor, so that they all became a single static mesh, and let Unreal generate lightmap UVs.

Webs

Did you notice that there were hanging spider webs in the scene?
No? Good, because I don’t like them much 😛

I probably spent 10-20 hours just messing about with these silly things, but at least I got some fun gifs out of them:

BusySpiders.gif

Next up…

I’ll break down the construction of those web things, might be useful for a scene full of badly animated vines, I suppose…

I’ll also go through all of the silly things I did on the animation / blueprint / lighting side.

Factory – pt 4 – (Trimming the flowers)

April 10, 2016

Part 4 of https://geofflester.wordpress.com/2016/02/07/factory-pt-1/

FlowerPower

Alpha card objects

In most games, you have some objects that have on/off alpha transparency, generally for objects that you wouldn’t model all the detail for (leaves, flowers, etc).

AlphaCard

^ Exactly like that, beautiful isn’t it?
Years of art training not wasted at all…

Also referred to as punch-through / 1-bit / masked materials, btw.
So, you can see that the see-through portion of that polygon is pretty large.

When rendering these types of assets, you are still paying some of the cost for rendering all of those invisible pixels. If you are rendering a lot of these on screen, and they are all overlapping, that can lead to a lot of overdraw, so it’s fairly common to cut around the shape to reduce this, something like this:

CutAround

What does this have to do with the factory?
Aren’t you supposed to be building a factory?

I get distracted easily…

I’m not really planning to have a lot of unique vegetation in my factory scene, but I am planning on generating a bunch of stuff out of Houdini.

When I create LODs, that will be in Houdini too, and the LODs will probably be alpha cards, or a combination of meshes and alpha cards.

When I get around to doing that, I probably don’t want to cut around the alpha manually, because… Well, because that sounds like a lot of work, and automating it sounds like a fun task 🙂

Houdini mesh cutting tool

The basic idea is to get my image plane, use voronoi fracture to split up the plane, delete any polygons that are completely see-through, export to UE4, dance a happy dance, etc.

For the sake of experiment, I want to try a bunch of different levels of accuracy with the cutting, so I can find a good balance between vertex count, and overdraw cost.

Here’s the results of running the tool with various levels of cutting:

FlowerCutouts

Here’s what the network looks like, conveniently just low resolution enough to be of no use… (Don’t worry, I’ll break it down :))

FullNetwork

The first part is the voronoi fracture part:

FracturePart_nodes

I’m subdividing the input mesh (so that I end up with roughly a polygon per pixel), then use an Attribute VOP to copy the alpha values from the texture onto the mesh, then blur it a bunch:

AlphaBlur_plane

I scatter points on that, using the alpha for density, then I join it with another scatter that is just even across the plane. This makes sure that there are enough cuts outside the shape, and I don’t get weird pointy polygons on the outside of the shape.

Here is an example where I’ve deliberately set the even spread points quite low, so you can see the difference in polygon density around the edges of the shape vs inside the shape:

FracturePart_plane2.png

Counting up the alpha

So, earlier, I mentioned that I subdivided up the input mesh and copied the alpha onto it?
I’ll call this the pixelated alpha mesh, and here’s what that looks like:

AlphaPrims

 

Next, I created a sub network that takes the pixelated alpha mesh, pushes it out along its normals (which in this case, is just up), ray casts it back to the voronoi mesh, and then counts up how many “hits” there are on each voronoi polygon.

Then we can just delete any polygon that has any “hits”.

Here is that network:

ProjectAndGetHits

After the ray sop, each point in the pixelated alpha mesh has a “hitprim”, which will be set to the primitive id that it hit in the voronoi mesh.

I’m using an Attribute SOP to write into an integer array detail attribute on the voronoi mesh for each “hitprim” on the pixelated alpha mesh points, and here’s that code:


int success = 0;
int primId = pointattrib(0, "hitprim", @ptnum, success);

int primhits[] = detail(0, "primhits");

if (primId >= 0)
{
    setcomp(primhits, 1, primId);
    setdetailattrib(0, "primhits", primhits, "add");
}

After all that stuff, I dump in a “remesh” node, which cheapens up the mesh a lot.

And back to UE4…

So, with all the above networks packaged into a Digital Asset, I could play with the parameters (the two scatter values), and try out a few different levels of cutting detail, as I showed before:

FlowerCutouts

I’m creating a rather exaggerated setup in UE4 with an excessive amount of overdraw, just for the purposes of this blog post.

For now, I’ve made the alpha cards huuuuuuuuge, and placed them where I used to have the flowers in my scene:

UE4Scene

Then, all I need to do is swap in each different version of my alpha card, and then GPU profile!

Profiling

The camera shot, without any alpha plane cutting optimization, took about 12 ms.

Test1, which is 27 vertices, seemed to be the best optimization. This came in at about 10.2 ms, so a saving of more than 1.5 ms, which is pretty great!

I was actually expecting Test2 to be the cheapest, since it chops quite a bit more off the shape, and at 84 vertices I didn’t think the extra vertex cost would even register on a GTX 970. Turns out I was wrong: Test2 was marginally more expensive!

This just goes to show, never trust someone about optimization unless they’ve profiled something 😛

Test3, at 291 vertices, costs about another 0.3 ms.

Conclusion

Winner.png

Of course, the savings are all quite exaggerated, but in a “real world” scenario I would probably expect to have a lot more instances, all a lot smaller. In which case, going with the lower vertex count mesh seems like it would still make sense (although I will, of course, re-profile when I have proper meshes).

Lots of more fun things to do with this: get it working on arbitrary meshes (mostly working), see if I can use Houdini engine and integrate it into UE4, etc.
Still not sure how much vegetation I’ll have in my factory scene, but I think this will still be useful 🙂


Factory – pt 3 (early optimisation is something something)

March 3, 2016

Part 3 of https://geofflester.wordpress.com/2016/02/07/factory-pt-1/

Optimizing art early, before you have a good sense of where the actual expense of rendering your scene is, can be a pretty bad idea.

So let’s do it!!

Wut

Chill.
I’ll do it #procedurally.
Sort of.

20 gallons of evil per pixel

My ground shader is pretty expensive. It’s blending all sorts of things together, currently, and I still have things to add to it.

I don’t want to optimize the actual material yet, because it’s not done, but it looks like this and invokes shame:

WetGroundMaterial.png

As a side note here, this material network looks a bit like the Utah teapot, which is unintentionally awesome.

Every pixel on this material is calculating water and dirt blending.

But many of those pixels have no water or dirt on them:

NonBlendingAreas.png

So why pay the cost for all of that blending across the whole ground plane?
What can I do about it?

Probably use something like the built in UE4 terrain, you fool

Is probably what you were thinking.
I’m sure that UE4 does some nice optimization for areas of terrain that are using differing numbers of layers, etc.

So you’ve caught me out: The technique I’m going to show off here, I also want to use on the walls of my factory, I just haven’t built that content yet, and I thought the ground plane would be fun to test on 🙂

Back to basics

First up, I want to see exactly how much all of the fancy blending is costing.

So I made a version of the material that doesn’t do the water or the dirt, ran the level and profiled them side by side:

BlendVsNot.png

^ Simple version of my material vs the water and dirt blending one.

GPUProfile

So, you can see above that the material that has no dirt/water blending is 1.6 milliseconds cheaper.

Now, if I can put that material on the areas that don’t need the blending, I can’t expect to get that full 1.6 milliseconds back, but I might get 1 millisecond back.

That might not sound like much, but for a 60 fps game, that’s about 1/16th of the entire scene time.

Every little bit helps; getting that sort of time back by cutting content alone can take many hours 🙂

Splitting the mesh

To put my cheap material onto the non-blending sections, I’ll split the mesh around the areas where the vertex colour masks have a value of 0.

Luckily, the ground plane is subdivided quite highly so that it plays nice with UE4 tessellation and my vertex painting, so I don’t need to do anything fancy with the mesh.

Back to Houdini we go!

PolySplit.png

So, anything that has > 0 sum vertex colour is being lifted up in this shot, just to make it obvious where the mesh split is happening.

Here’s the network:

BlendMeshSplit.png

The new nodes start at “Attribcreate”, etc.

The basic flow is:

  • “Colour value max” is set as max(@Cd.r, @Cd.g), per point, so it will be set to some value if either dirt or water are present.
  • Two new Max and Min attributes per polygon are created by promoting Colour value max from Point –> Polygon, using Min and Max promotion methods (so if one vertex in the polygon has some dirt/water on it, then the max value will be non-zero, etc)
  • The polygons are divided into three groups: polygons that have no vertices with any blending, polygons that have some blending, and polygons where all verts are 100% blending (see the sketch after this list).
  • NOTE: For the purposes of this blog post, all I really care about is if the Polygon has no dirt/water or if it has some, but having the three groups described above will come in handy in a later blog post, you’ll just have to trust me 🙂
  • The two groups of polygons I care about get two different materials applied to them in Houdini.
    When I export them to UE4, they maintain the split, and I can apply my cheaper material.
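
Here’s roughly what that three-way split looks like written as a Python SOP instead of Group nodes (the promoted attribute names “colour_max” and “colour_min” are illustrative; the real network does this with groups):

import hou

node = hou.pwd()
geo = node.geometry()

no_blend   = geo.createPrimGroup("no_blend")    # gets the cheap material
some_blend = geo.createPrimGroup("some_blend")  # gets the full blending material
all_blend  = geo.createPrimGroup("all_blend")   # handy later, trust me

for prim in geo.prims():
    prim_max = prim.attribValue("colour_max")
    prim_min = prim.attribValue("colour_min")
    if prim_max <= 0.0:
        no_blend.add(prim)
    elif prim_min >= 1.0:
        all_blend.add(prim)
    else:
        some_blend.add(prim)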

So, re-exported, here it is:

BothMaterials.png
Looks the same?

Great, mission successful! Or is it…

Checking the numbers

Back to the GPU profiler!

GPUProfileReveal.png

Ok, so the column on the right is with my two materials, the column in the middle is using the expensive material across the whole ground plane.

So my saving was a bit under one millisecond in this case.
For an hour or two of work that I can re-use in lots of places, I’m willing to call that a success 🙂

Getting more back

Before cleaning up my shader, there are a few more areas where I can/might expand this, and some notes on where I expect to get more savings:

  • I’ll have smaller blending areas on my final ground plane (less water and dirt) and also on my walls. So the savings will be higher.
  • I might mask out displacement using vertex colours, so that I’m not paying for displacement across all of my ground plane and walls.
    The walls for flat sections not on the corner of the building and/or more than a few metres from the ground can go without displacement, probably.
  • The centre of the water puddles is all water: I can create a third material that just does the water stuff, and split the mesh an extra time.
    This means that the blending part of the material will be just the edges of the puddles, saving quite a lot more.

So all in all, I expect I can claw back a few more milliseconds in some cases in the final scene.

One final note, the ground plane is now three draw calls instead of one.
And I don’t care.
So there. 🙂


Factory – pt 2 (magical placeholder land)

February 17, 2016

Part 2 of: https://geofflester.wordpress.com/2016/02/07/factory-pt-1/

FlowersInDirt

I had to split this post up, so I want to get this out of the way:
You’re going to see a lot of ugly in the post. #Procedural #Placeholder ugly 🙂

This post is mostly about early pipeline setup in Houdini Python, and UE4 c++.

Placeholder plants

For testing purposes, I made 4 instances of #procedural plants using l-systems:

UniqueFlowers

When I say “made”, I mean ripped from my Shangri-La tribute scene, just heavily modified:

https://geofflester.wordpress.com/2015/09/05/rohan-dalvi-shangri-la-themed-procedural-islands/

Like I mention in that post, if you want to learn lots about Houdini, go buy tutorials from Rohan Dalvi.
He has some free ones you can have a run through, but the floating islands series is just fantastic, so just buy it 😛

These plants I exported as FBX, imported into UE4, and gave them a flat vertex colour material, ’cause I ain’t gonna bother with unwrapping placeholder stuff:

UE4Flowers

The placeholder meshes are 4000 triangles each.
Amusingly, when I first brought them in, I hadn’t bothered checking the density, and they were 80 000 + triangles, and the frame rate was at a horrible 25 fps 😛

Houdini –> UE4

So, the 4 unique plants are in UE4. Yay!

But, I want to place thousands of them. It would be smart to use the in-built vegetation tools in UE4, but my purpose behind this post is to find some nice generic ways to get placement data from Houdini to UE4, something that I’ve been planning to do in my old Half Life scene for ages.
So I’m going to use Instanced Static Meshes 🙂

Generating the placements

For now, I’ve gone with a very simple method of placing vegetation: around the edges of my puddles.
It will do for sake of example. So here’s the puddle and vegetation masks in Houdini (vegetation mask on the left, puddle mask on the right):

PuddleAndVegeMask

A couple of layers of noise, and a fit range applied to vertex colours.

I then just scatter a bunch of points on the mask on the left, and then copy flowers onto them, creating a range of random scales and rotations:

FlowersOnMask.png

The node network for that looks like this:

PuttingPointsOnThePlane.png

Not shown here, off to the left, is all the flower setup stuff.
I’ll leave that alone for now, since I don’t know if I’ll be keeping any of that 🙂

The right hand side is the scattering, which can be summarized as:

  • Read ground plane
  • Subdivide and cache out the super-high-poly plane
  • Move colour into Vertex data (because I use UVs in the next part, although I don’t really have to do it this way)
  • Read the brick texture as a mask (more on that below)
  • Move mask back to Point data
  • Scatter points on the mask
  • Add ID, Rotation and Scale data to each point
  • Flip YZ axis to match UE4 (could probably do this in Houdini prefs instead)
  • Python all the things out (more on that later)

Brick mask

I mentioned quickly that I read the brick mask as a texture in the above section.
I wanted the plants to mostly grow out of cracks, so I multiplied the mask by the inverted height of the bricks, clamped to a range, using a Point VOP:

BrickTextureOnMask.png

And here’s the network, but I won’t explain that node for node, it’s just a bunch of clamps and fits which I eyeballed until it did what I wanted:

HeightTextureVOP.png

Python all the things out, huh?

Python and I have a special relationship.
It’s my favourite language to use when there aren’t other languages available.

Anyway… I’ve gone with dumping my instance data to XML.
More on that decision later.

Now for some horrible hackyness:


node = hou.pwd()
from lxml import etree as ET

geo = node.geometry()

root = ET.Element("ObjectInstances")

for point in geo.points():
    pos         = point.position()
    scale       = hou.Point.attribValue(point, 'Scale')
    rotation    = hou.Point.attribValue(point, 'Rotation')
    scatterID   = "Flower" + repr(hou.Point.attribValue(point, 'ScatterID')+1)

    PosString       = repr(pos[0]) + ", " + repr(pos[1]) + ", " + repr(pos[2])
    RotString       = repr(rotation)
    ScaleString     = repr(scale) + ", " + repr(scale) + ", " + repr(scale)

    ET.SubElement(root, scatterID,
        Location=PosString,
        Rotation=RotString,
        Scale=ScaleString)

# Do the export
tree = ET.ElementTree(root)
tree.write("D:/MyDocuments/Unreal Projects/Warehouse/Content/Scenes/HoudiniVegetationPlacement.xml", pretty_print=True)

NOTE: Not sure if it will post this way, but in Preview the tabbing seems to be screwed up, no matter what I do. Luckily, programming languages have block start and end syntax, so this would never be a prob… Oh. Python. Right.

Also, all hail the ugly hard coded path right at the end there 🙂
(Trust me, I’ll dump that into the interface for the node or something, would I lie to you?)

Very simply, this code exports an XML element for each Point.
I’m being very lazy for now, and only exporting Y rotation. I’ll probably fix that later.

This pumps out an XML file that looks like this:

<ObjectInstances>
  <Flower1 Location="-236.48265075683594, -51.096923828125, -0.755022406578064" Rotation="(0.0, 230.97622680664062, 0.0)" Scale="0.6577988862991333, 0.6577988862991333, 0.6577988862991333"/>
</ObjectInstances>

Reading the XML in UE4

In the spirit of slapping things together, I decided to make a plugin that would read the XML file, and then add all the instances to my InstancedStaticMesh components.

First up, I put 4 StaticMeshActors in the scene, and in place I gave them an InstancedStaticMesh component. I could have done this in a Blueprint, but I try to keep Blueprints to a minimum if I don’t actually need them:

InstancedStaticMesh

As stated, I’m a hack, so the StaticMeshActor needs to be named Flower<1..4>, because the code matches the name to what it finds in the XML.

The magic button

I should really implement my code as either a specialized type of Data Table, or perhaps some sort of new thing called an XMLInstancedStaticMesh, or… something else clever.

Instead, I made a Magic Button(tm):

MagicButton

XML Object Loader. Probably should have put a cat picture on that, in retrospect.

Brief overview of code

I’m not going to post the full code here for a bunch of reasons, including just that it is pretty unexciting, but the basic outline of it is:

  1. Click the button
  2. The plugin gets all InstancedStaticMeshComponents in the scene
  3. Get a list of all of the Parent Actors for those components, and their labels
  4. Process the XML file, and for each Element:
    • Check if the element matches a name found in step 3
    • If the Actor name hasn’t already been visited, clear the instances on the InstancedStaticMesh component, and mark it as visited
    • Get the position, rotation and scale from the XML element, and add a new instance to the InstancedStaticMesh with that data

And that’s it! I had a bit of messing around, with originally doing Euler –> Quaternion conversion in Houdini instead of C++, and also not realizing that the rotations were in radians, but all in all it only took an hour or two to throw together, in the current very hacky form 🙂
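
For the record, the conversion that tripped me up is only a couple of lines; here’s a sketch in Python (remembering to go from degrees to radians first, quaternion in x, y, z, w order), since I was only exporting the Y rotation anyway:

import math

def y_rotation_to_quaternion(y_degrees):
    half = math.radians(y_degrees) * 0.5
    return (0.0, math.sin(half), 0.0, math.cos(half))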

Some useful snippets

The FastXML library in UE4 is great, made life easy:

https://docs.unrealengine.com/latest/INT/API/Runtime/XmlParser/FFastXml/index.html

I just needed to create a new class inheriting from the IFastXmlCallback interface, and implement the Process<x> functions.

I’d create a new instance in ProcessElement, then fill in the actual data in ProcessAttribute.
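
In case it saves someone some digging, here's the rough shape of that. The class name, function bodies and the parse call below are my own sketch (XmlFilePath is a placeholder), not the actual plugin code:

#include "FastXml.h"

class FXmlInstanceLoader : public IFastXmlCallback
{
public:
	virtual bool ProcessXmlDeclaration(const TCHAR* ElementData, int32 XmlFileLineNumber) override { return true; }

	virtual bool ProcessElement(const TCHAR* ElementName, const TCHAR* ElementData, int32 XmlFileLineNumber) override
	{
		// Match ElementName ("Flower1", etc) against the actor labels gathered earlier,
		// clear the component's instances the first time it is seen, then add a fresh
		// instance to fill in as the attributes arrive.
		return true;
	}

	virtual bool ProcessAttribute(const TCHAR* AttributeName, const TCHAR* AttributeValue) override
	{
		// Parse the "Location", "Rotation" and "Scale" strings and update the
		// transform of the instance created in ProcessElement.
		return true;
	}

	virtual bool ProcessClose(const TCHAR* Element) override { return true; }
	virtual bool ProcessComment(const TCHAR* Comment) override { return true; }
};

// Kicking off the parse:
FXmlInstanceLoader Callback;
FText OutErrorMessage;
int32 OutErrorLineNumber = 0;

FFastXml::ParseXmlFile(&Callback, *XmlFilePath, nullptr, GWarn, false, false,
	OutErrorMessage, OutErrorLineNumber);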

Adding an instance to an InstancedStaticMeshComponent is as easy as:


SomeStaticMeshComp->AddInstance(FTransform());

And then, in shortened form, updating the instance data:


// _currentStaticMeshComp and _currentInstanceID are members of my XML callback class,
// tracking which component and instance the current XML element refers to.
FTransform InstanceTransform;
_currentStaticMeshComp->GetInstanceTransform(_currentInstanceID, InstanceTransform);

// ... (Location, RotationQuaternion and Scale come from the parsed XML attributes)

InstanceTransform.SetLocation(Location);
InstanceTransform.SetRotation(RotationQuaternion);
InstanceTransform.SetScale3D(Scale);

_currentStaticMeshComp->UpdateInstanceTransform(_currentInstanceID, InstanceTransform);
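
And since the XML attributes arrive as plain strings, something along these lines turns the Location value back into a vector (again, just a sketch rather than the exact plugin code):

#include "CoreMinimal.h"

// Parse "x, y, z" (as written out by the Houdini Python export) into an FVector.
FVector ParseLocationString(const FString& AttributeValue)
{
	TArray<FString> Parts;
	AttributeValue.ParseIntoArray(Parts, TEXT(","), /*InCullEmpty=*/ true);

	FVector Result = FVector::ZeroVector;
	if (Parts.Num() == 3)
	{
		Result.X = FCString::Atof(*Parts[0]);
		Result.Y = FCString::Atof(*Parts[1]);
		Result.Z = FCString::Atof(*Parts[2]);
	}
	return Result;
}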

One last dirty detail…

That’s about it for the code side of things.

One thing I didn't mention earlier: in Houdini, I'm using the placement of the plants to generate the dirt map mask, so I can blend in details around their roots:

DirtRootsMask.png

So when I export my ground plane, I put the Puddles mask into the blue channel of the vertex colours, and the Dirt mask into the red channel 🙂

Still to come (for vegetation)

So I need to:

  • Make the actual flowers I want
  • Make the roots/dirt/mossy texture that gets blended in under the plants
  • Build more stuff

Why.. O.o

Why not data tables

I’m all about XML.

But a sensible, less code-y way to do this would be to save all your instance data from Houdini into CSV format, bring it into UE4 as a Data Table, then use a Construction Script in a Blueprint to iterate over the rows and add instances to an Instanced Static Mesh.
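
For what it's worth, the Data Table route would just need a row struct along these lines (entirely hypothetical names here, nothing in this project actually defines it):

#include "CoreMinimal.h"
#include "Engine/DataTable.h"
#include "InstancePlacementRow.generated.h"

// Hypothetical row struct for a CSV-imported Data Table of instance placements.
USTRUCT(BlueprintType)
struct FInstancePlacementRow : public FTableRowBase
{
	GENERATED_BODY()

	UPROPERTY(EditAnywhere, BlueprintReadOnly)
	FVector Location = FVector::ZeroVector;

	UPROPERTY(EditAnywhere, BlueprintReadOnly)
	FRotator Rotation = FRotator::ZeroRotator;

	UPROPERTY(EditAnywhere, BlueprintReadOnly)
	FVector Scale = FVector(1.0f, 1.0f, 1.0f);
};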

I like XML as a data format, so I decided it would be more fun to use XML.

Why not Houdini Engine

That’s a good question…

In short:

  • I want to explore similar workflows with Modo replicators at some point, and I should be able to re-use the C++/Python stuff for that
  • Who knows what other DCC tools I’ll want to export instances out of
  • It’s nice to jump into code every now and then. Keeps me honest.
  • I don’t own it currently, and I’ve spent my software budget on Houdini Indie and Modo 901 already 🙂

If you have any questions, feel free to dump them in the comments. I hurried through this one a little, since the project is only at a halfway point and there aren't great results to show off yet!

 

 

Factory – pt 1

February 7, 2016

Most of this blog post won't actually be about a factory, but if I title it this way, it might encourage me to finish something at home for a change 😉

My wife had a great idea that I should re-make some of my older art assets, so I'm going to have a crack at this one, which I made for Heroes Over Europe 8 years ago:

Factory

I was quite happy with this, back in the day. I’d had quite a lot of misses with texturing on that project. The jump from 32*32 texture sheets on a PS2 flight game to 512*512 texture sets was something that took a lot of adjusting to.

I was pretty happy with the amount of detail I managed to squeeze out of a single 512 set for this guy, although I had to do some fairly creative unwrapping to make it happen, so it wasn’t a very optimal asset for rendering!

The plan

I want to make a UE4 scene set at the base of a similar building.
The main technical goal is to learn to use Substance Painter better, and to finally get back to doing some environment art.

Paving the way in Houdini

First up, I wanted to have a go at making a tiling brick material in Substance Painter.
I’ve used it a bit on and off, in game jams, etc, but haven’t had much chance to dig into it.

Now… This is where a sensible artist would jump into a tool like ZBrush, and throw together a tiling high poly mesh.

But, in order to score decently on Technical Director Buzz Word Bingo, I needed to be able to say the word Procedural at least a dozen more times this week, so…

HoudiniBricks

I made bricks #Procedurally in Houdini, huzzah!

I was originally planning to use Substance Designer, which I’ve been playing around with on and off since Splinter Cell: Blacklist, but I didn’t want to take the time to learn it properly right now. The next plan was Modo replicators (which are awesome), but I ran into a few issues with displacement.

Making bricks

Here is the network for making my brick variations, and I’ll explain a few of the less obvious bits of it:

BricksNetwork

It's a little lame, but my brick is a subdivided block with some noise on it:

Brick.jpg

I didn’t want to wait for ages for every brick to have unique noise, so the “UniqueBrickCopy” node creates 8 unique IDs, which are passed into my Noise Attribute VOP, and used to offset the position for two of the noise nodes I’m using on vertex position, as you can see bottom left here:

NoiseVOP.jpg

So that the repetition isn't obvious, I randomly flip the brick 180 degrees in Y and Z, so even if you get the same brick twice in a row, there's less chance of an obvious repeat (that's what the random_y_180 and random_z_180 nodes at the start of this section are for).

Under those flipping nodes, there are some other nodes for random rotations, scale and transform to give some variation.

Randomness

Each position in my larger tiling pattern has an ID, so that I can apply the same ID to two different brick placements and know that I'm going to get the exact same brick (to make sure the pattern tiles when I bake it out).

You can see the unique IDs as the random colours in the first shot of the bricks back up near the top.

You might notice (if you squint) that the top two and bottom two rows have matching random colours, as do the two left-most columns and the 2nd and 3rd columns from the right.

Placing the bricks in a pattern

There was a fair bit of manual back and forth to get this working, so it’s not very re-usable, but I created two offset grids, copied a brick onto each point of the grid, and played around with brick scale and grid offsets until the pattern worked.

BrickPointsNetwork.jpg

So each grid creates an "orientation" attribute, which is what rotates the bricks for the alternating rows. I merge the points together, then sort them along the X and Y axes (so the vertex numbers go up across rows).

Now, the only interesting bit here is creating the unique instance ID I mentioned before.
Since I've sorted the vertices, I set the ID to be the vertex ID, but I want to make sure that the last two columns and the last two rows match up with the first two columns and rows, so the pattern wraps.

This is where the two wrangle nodes come in: they just check if the vertex is in the last two columns, and if it is, set the ID to be back at the start of the row.

So then we have this (sorry, bit hard to read, but pretend that the point IDs on the right match those on the left):

PointIDs.jpg

And yes, in case you are wondering, this is a lot of effort for something that could be done more easily in ZBrush.
I’m not in the habit of forcing things down slow procedural paths when there is no benefit in doing so, but in this case: kittens!
(I’ve got to break my own rules sometimes for the sake of fun at home :))

Painter time

Great, all of that ugly #Procedural(tm) stuff out of the way, now on to Substance Painter!

PainterBase.jpg

So I’ve brought in the high poly from Houdini, and baked it out onto a mesh, and this is my starting point.
I’m not going to break down everything I’ve done in Substance, but here are the layers:

TextureLayers.gif

All of the layers are #Procedural(tm), using the inbuilt masks and generators in Painter, which use the curvature, ambient occlusion and thickness maps that Painter generates from your high poly mesh.

The only layer that had any manual input was the black patches, because I manually picked a bunch of IDs from my Houdini ID texture bake, to get a nice distribution:

IDPicking.jpg

The only reason I picked so many manually is that Painter seems to have some issues with edge pixels in a Surface ID map, so I had to try not to pick edge bricks.
Otherwise, I could have picked a lot fewer, and ramped the tolerance up more.

You might notice that the material is a little dark. I still haven’t nailed getting my UE4 lighting setup to match with Substance, so that’s something I need to work on.
Luckily, it’s pretty easy to go back and lighten it up without losing any quality 🙂

Testing in UE4

UE4Plane.jpg

Pretty happy with that, should look ok with some mesh variation, concrete skirting, etc!
I’ll still need to spend more time balancing brightness, etc.

For giggles, I brought in my wet material shader from this scene:

https://geofflester.wordpress.com/2015/03/22/rising-damp/

UE4PlaneWater.jpg

Not sure if I’ll be having a wet scene or not yet, but it does add some variation, so I might keep it 🙂

Oh, and in case you were wondering how I generated the vertex colour mask for the water puddles… #Procedural(tm)!

HoudiniPuddles.jpg

Exported out of Houdini, a bunch of noise, etc. You get the idea 🙂

Next up

Think I’ll do some vegetation scattering on the puddle plane in Houdini, bake out the distribution to vertex colours, and use it to drive some material stuff in UE4 (moss/dirt under the plants, etc).

And probably export the plants out as a few different unique models, and their positions to something that UE4 can read.

That’s the current plan, anyway 🙂

 

Shopping for masks in Houdini

January 20, 2016

Houdini pun there, don’t worry if you don’t get it, because it’s pretty much the worst…

In my last post, I talked about the masking effects in Shangri-La, Far Cry 4.

I mentioned that it would be interesting to try out generating the rough masks in Houdini, instead of painting them in Modo.

So here’s an example of a mask made in Houdini, being used in Unreal 4:

VortUE4Houdini.gif

Not horrible.
Since it moves along the model pretty evenly, you can see that the hands are pretty late to dissolve, which is a bit weird.

I could paint those out, but then the more I paint, the less value I’m getting out of Houdini for the process.

This is probably a good enough starting point before World Machine, so I’ll talk about the setup.

Masky mask and the function bunch

I've exported the Vortigaunt out of Modo as an Alembic file, and brought it into Houdini.
Everything is pretty much done inside a single geometry node:

MaskGen_all

The interesting bit here is “point_spread_solver”. This is where all the work happens.

Each frame, the solver carries data from one vertex to another, and I just manually stop and bake out the texture when the values stop spreading.

I made the un-calculated points green to illustrate:

VortGreen

A note on "colour_selected_white": I should really do this bit procedurally. I'm always starting the effect from holes in the mesh, so I could pick the edge vertices that way, instead of manually selecting them in the viewport.

The solver

MaskGen_point_spread_solver

Yay. Attribwrangle1. Such naming, wow.

Nodes are fun, right up until they aren’t, so you’ll often see me do large slabs of functionality in VEX. Sorry about that, but life is pain, and all that…

Here’s what the attrib wrangle is doing:

int MinDist = -1;

// Only touch points that haven't been reached yet (distance still zero)
if (@DistanceFromMask == 0)
{
	// neighbours() gives the point numbers connected to this point by an edge
	int NeighborPoints[];
	NeighborPoints = neighbours(0, @ptnum);

	foreach (int NeighborPointNum; NeighborPoints)
	{
		int success             = 0;

		// Read the neighbour's DistanceFromMask from the wrangle's second input
		int NeighborDistance    = pointattrib(
						1, 
						"DistanceFromMask", 
						NeighborPointNum, 
						success);

		// Track the smallest non-zero distance found among the neighbours
		if (NeighborDistance > 0)
		{
			if (MinDist == -1)
			{
				MinDist = NeighborDistance;
			}

			MinDist = min(MinDist, NeighborDistance);
		}
	}
}

// If any neighbour already had a value, this point becomes (smallest neighbour + 1)
if (MinDist > 0)
	@DistanceFromMask = (MinDist + 1);

Not a very nuanced way of spreading out the values.

For each point, assuming the point has a zero “distance” value, I check the neighboring points.
If a neighbor has a non-zero integer “distance” value, then I take the lowest of all the neighbors, add one to it, and that becomes my “distance” value.

This causes the numbers to spread out over the surface, with the lowest value at the source points, highest value at the furthest distance.

Integers –> Colours

So, the vertices now all have integer distance values on them.
Back up in the mask image, the solver promotes the Distance value up to a Detail attribute, getting the Max Distance of all the points.

In the wrangle node under that, I just loop through all the points and divide each point’s Distance by the Max Distance, and use that to set the colour, or I set it as green if there’s no distance value:

if (@DistanceFromMask > 0)
{
    // Normalise: source points (distance 1) go to black, the furthest points approach white
    @Cd = float(@DistanceFromMask - 1) / float(@DistanceFromMaskMax);
}
else
{
    // Points the solver never reached stay flagged in green
    @Cd = {0,1,0};
}

So that produces the gif I showed earlier with the green on it.

Colours –> Textures

Time to jump into SHOPS. See? This is where my awesome title pun comes in.

As simple as it gets, vertex Colour data straight into the surface output:

Material

In my “Out”, I’m using a BakeTexture node to bake the material into a texture, and I end up with this:

vortigaunt_mask_houdini

Conclusion

Bam! Work is done.
Still wouldn’t have been much point in doing this on Shangri-La, because painting masks in Modo is super quick anyway, but it’s fun to jump back into Houdini every now and then and try new things.

Has led to some other interesting thoughts, though.

  • For Shangri-La, we could have done that at runtime in a compute shader, and generated the mask-out effect from wherever you actually shot an arrow into an enemy.
    That would have been cool.
  • You could probably use Houdini Engine to put the network into UE4 itself, so you could paint the vertex colours and generate the masks all inside UE4.
  • You could do the “erosion” part in Houdini as well, even if you just subdivide the model up and do it using points rather than run it in image space (to avoid seams). Might be hard to get a great resolution out of it.
  • You could do an actual pressure simulation, something along the lines of what this Ben Millwood guy did here. He's a buddy of mine, it's a cool approach, and it's better than my hacky min-values thing.