Houdini looping particles

Looping fluid sim

For a while I’d been planning to look into making looping particle systems in Houdini, but hadn’t found a good excuse to jump in. I don’t really do much VFX-related work at the best of times; it’s something I need to do more of in the future 🙂

Anyway, I was recently chatting with Martin Kepplinger, who is working on Clans Of Reign, and he was looking to do a similar thing!

So begins the looping particle journey…

Technique overview

I won’t go into the fluid sim setup, it doesn’t really matter too much what it is.

There are a few conditions that make my approach work:

  • Particles must have a fixed lifetime
  • The first frame chosen for the loop must have at least a full particle lifetime’s worth of simulated frames before it
  • The last frame of the loop must be >= the first frame number + the particle lifetime

I have some ideas about how to get rid of these requirements, but not sure if I’ll get back to that any time soon.

For the example in this post, I am keeping the particle lifetime pretty low (0.8 to 1.0 seconds, using a @life attribute on the source particles, so a maximum lifetime of 24 frames at 24 fps).

The fluid sim I’m using is some lumpy fluid going into a bowl:

Full fluid sim

The simulation is 400 frames long (not all shown here), but that ended up being overkill; I could have got away with a much shorter sim.

Going back to my rules, with particles that live 24 frames I must choose a first frame >= 24 (for this example, I’ll choose 44).
The last frame needs to be after frame 68, so I’m choosing 90.
This makes a loop that is 46 frames long, here it is with no blending:

Looping particle system with no blending

The technique I’m going to use to improve the looping is somewhat like a crossfade.

For this loop from 44 –> 90, I’m modifying the particles in two ways:

  1. Deleting any particles that spawn after frame 66 (66 = 90 - 24, i.e. making sure all particles have died by frame 90)
  2. From frames 66 to 90, copying in all the particles that spawn between frames 20 –> 44 (the 66 –> 90 range shifted back by the 46 frame loop length).

This guarantees that all the particles that are alive on frame 89 match exactly with frame 44.

To illustrate, this gif shows the unedited loop on the left, and next to it on the right is the loop with no new particles spawned after frame 66 (particles go red on 66):

Particles stopped spawning after frame 66

Next up is the loop unedited on the left, and on the right are the new particles from frames 20 – 44 that I’m merging in from frame 66 onward:

Pre loop particles added to end of the unlooped sim

And now, the unedited loop next to the previous red, green and blue particles combined:

Pre spawn and end spawn particles combined

And finally, just the result of the looping by itself without the point colours:

Final looping particles

One thing that might be slightly confusing about the 2nd gif (the one with the green pre-loop particles): I’m always spawning particles in the bowl itself to keep the fluid level up, in case you were wondering what that was about 🙂

Setting up the loop

This is the SOPs network that takes the points of a particle sim, and makes it loop:

Full SOPs network for looping particles

The first part of setting up the looping simulation is picking the start and end frames (first and last) of the loop.

I created a subnetwork (FrameFinder, near the top of the above screenshot) that has parameters for two frame numbers, and displays a preview of the two frames you are selecting, so you can find good candidates for looping:

FrameFinder subnetwork preview

The loop setup I chose for the Unity test at the top of the blog was actually a bit different to the range I chose for the breakdowns in the last section.
For Unity, I wanted the shortest looping segment I could, because I didn’t want it to be super expensive (memory wise), so I chose start and end frames 25 frames apart.

You can see that the frames don’t need to match exactly. The main thing I wanted to avoid was having a huge splash in the bowl at the bottom, or over the edge, because that would be hard to make look good in a short loop.

Node parameters

In the screenshot above, you can see that I have First Frame and Last Frame parameters on my FrameFinder network.

I don’t tend to make my blog posts very tutorial-y, but I thought I’d just take the time to mention that you can put parameters on any node in Houdini.

Example:

  • Drop a subnetwork node
  • Right click and select “Parameters and Channels –> Edit Parameter Interface…”:
    Parameter editing
  • Select a parameter type from the left panel, click the arrow to add the parameter, and set defaults and the parameter name on the right:
    Edit parameter interface dialog
  • Voila! Happy new parameter:
    floaty

You can then right click on the parameter and copy a reference to it, then use the reference in nodes in the subnetwork, etc.
In the edit parameter interface window, you can also create parameters “From Nodes”, which lets you pick an existing parameter of any node in the sub network to “bubble up” to the top, and it will hook up the references.
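
For example, a node inside the subnet can read a firstFrame parameter on the subnetwork with an expression like ch("../firstFrame") (assuming firstFrame is the internal name of the parameter).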

If this is new to you, I’d recommend looking up tutorials on Digital Assets (called “HDAs”, used to be called “OTLs”).

I do this all the time on subnetworks like the FrameFinder, but also to add new fields to a node that contain parts of an expression that would get too big, or that I want to reference from other nodes, etc.

On Wrangles (for example), I find myself adding colour ramp parameters a lot for use within the wrangle code.

FrameFinder

This subnetwork has two outputs: the particles with some Detail Attributes set up, and another output that is a preview of the two frames which I showed before, but here’s what that looks like again:

FrameFinder subnetwork preview

It’s the first time I’ve created multiple outputs from a subnetwork; usually I just dump a “Preview” checkbox as a parameter on the network. I think I like this approach more, though, particularly if I end up turning the whole thing into an HDA.

Here is what the FrameFinder network looks like:

Contents of framefinder subnetwork

In this project, I’m using Detail attributes to pass around a lot of data, and that starts with the attribcreate_firstAndLastFrame node.

This node creates Detail attributes for each of the frames I chose in the subnet parameters (firstFrame and lastFrame):

Create details attributes for first and last loop frame

Right under the attribCreate node, I’m using two timeshift nodes: one that shifts the simulation to the first chosen frame, and one to the last frame, and then I merge them together (for the preview output). I’ve grouped the lastFrame particles so that I can transform them off to the right to show them side by side, and I’m also giving all the particles in both frames random colours, so it’s a little easier to see their shape.

Time ranges and ages

Back in the main network, straight after the frameFinder subnetwork I have another subnetwork called timeRangesAndAges, which is where I set up all the other Detail attributes I need. Here is what is in that subnetwork:

Time Ranges and Ages subnetwork

The nodes in the network box on the right side are used to get the maximum age of any particle in the simulation.
In hindsight, this is rather redundant since I set the max age on the sim myself (you could replace all those nodes with an Attribute Create that makes a maxAge attribute with a value of 24), but I had planned to use it on simulations where particles are killed through interactions, etc 🙂

The first part of that is a solver that works out the max age of any particle in the simulation:

Solver that calculates maximum particle life

For the current frame of particles, it promotes Age from Point to Detail, using Maximum as the promotion method, giving me the maximum age for the current frame.

The solver merges in the previous frame, and then uses an attribute wrangle to get the maximum of the previous frame and current frame value:

@maxAge = max(@maxAge, @opinput1_maxAge);

Right after the solver, I’m timeshifting to the last frame, forcing the solver to run through the entire simulation so that the maxAge attribute now contains the maximum age of any particle in the simulation (spoiler: it’s 24 :P).

I then delete all the points, since all I care about is the detail attribute, and use a Stash node to cache that into the scene. With the points deleted, the node data is < 12 Kb, so the stash is just a convenient way to stop the maxAge recalculating all the time.
If I turn this whole thing into an HDA, I’ll have to rethink that (put “calculate max particle age” behind a button or something).

There are two more wrangle nodes in timeRangesAndAges.

One of them is a Point wrangle that converts the particle age from seconds into number of frames:

@frameAge = floor(@age/@TimeInc);
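
(For example, at 24 fps @TimeInc is 1/24 of a second, so a particle that is 0.5 seconds old gets a frameAge of 12.)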

And the next is a Detail wrangle that sets up the rest of the detail attributes I use:

// Copy max age from input 1
int maxAge = i@opinput1_maxAge;

// A bunch of helper detail variables referenced later
int loopLength = i@lastFrame - i@firstFrame;
int loopedFrame = (int)(@Frame-1) % (loopLength+1);

i@remappedFrame = i@firstFrame + loopedFrame;

int distanceFromSwap = loopedFrame - loopLength;

i@blendToFrame = max(1, i@remappedFrame - loopLength);
i@numberOfFramesIntoBlend = max(0, maxAge + distanceFromSwap);

When I hit play in the viewport, I want to see just the looping segment of the simulation over and over, so that complicates all this a little.

With that in mind, there are 3 important attributes set up here.

@remappedFrame

If my start frame is 20, for example, and the end frame is 60, I want to see the 20-60 frame segment over and over, hence the wrapping loopedFrame temporary variable.

So if my viewport time slider is on frame 15, I really want to see frame 34, and the value of remappedFrame will be 34. It will always be a number between firstFrame and lastFrame.

@blendToFrame

This takes the remappedFrame, and shifts it back to before the start of the loop.
I only use this value when we hit the last 24 frames of the loop, but I’m clamping it at one just so the timeshift I use later doesn’t freak out.

This attribute will be used in the 2nd part of the technique: combining in pre-loop particles.

@numberOfFramesIntoBlend

When we are getting into the last 24 frames of the loop, this value increases from 0 to 24.
It’s used in the 1st part of the technique to stop spawning particles that have an age less than this value.
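
To put some numbers on those three attributes: with firstFrame 44, lastFrame 90 and a max age of 24, viewport frame 30 gives a loopedFrame of 29, so remappedFrame is 73, blendToFrame is 27, and numberOfFramesIntoBlend is 7 (the blend starts after remapped frame 66, and 73 is 7 frames past that).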

Timeshifts and recombining particles

Back to the main network:

Full SOPs network for looping particles

After the timeRangesAndAges node, the network splits: on the left side, I’m timeshifting the simulation using the remappedFrame detail attribute as the frame using this expression:

detail("../timeRangesAndAges/", "remappedFrame", 0)

On the right side I’m time shifting the simulation using the blendToFrame attribute as the frame using this expression:

detail("../timeRangesAndAges/attribwrangle_calcTimeRanges", "blendToFrame", 0)

I’ve colour coded the nodes in the network with the same colours I’ve shown in the gifs in the technique section.

Since I’ve timeshifted the simulation, the detail attributes get time-shifted too.
But I don’t really want that, so I’m using an Attribute Transfer to copy the two detail attributes I care about (remappedFrame and numberOfFramesIntoBlend) back onto the remapped sims.

After the attribute transfers, on both sides I’m creating a new point group called inBlendingFrames.

Group expression node for particles in blending range

detail(0, "numberOfFramesIntoBlend", 0) > 0

I probably didn’t need a point group for this, considering every particle is either in or out of this group on a given frame; it just made life easier with the Split node I use on the left.

On the left side, I do a split using inBlendingFrames.
When we’re not in the blending range, we don’t have to do anything to the particles, so that’s the blue colour node.

For both the red and green node sections, I start by deleting anything not in the inBlendingFrames group.

For the green particles (the pre-loop particles that we’re merging in), we’ve already got the right frame, due to the timeshift up the top.
If we’re on frame 2 of the blend (for example), we will still have particles that were spawned 24 frames ago, but we really only want particles that should spawn after the blend starts.
I use an attribute wrangle to clean the older particles up, using the frameAge attribute:

if (@frameAge > i@numberOfFramesIntoBlend)
{
	removepoint(0, @ptnum);
}

Here’s what that looks like for a frame about halfway through the blend.

Pre loop particles with older particles removed

For the red nodes section (where we take the original loop, and delete any particles that start spawning after the blend), I use an attribute wrangle to clean the new particles up:

if (@frameAge < i@numberOfFramesIntoBlend)
{
	removepoint(0, @ptnum);
}

Particle loop end with new particles deleted

So, I merge the red, blue and green particles all together, and we end up with the result I showed in the technique section!

Pre spawn and end spawn particles combined

Here again uncolourised:

Final looping particles

Unity, Alembic and all that jazz

This post is already crazy long, so I’m just going to gloss over the Houdini –> Unity stuff.
If anyone is really interested in those details, I might do another post.

So now that I have a looping particle system, I can use a regular Particle Fluid Surface with default settings, and a polyreduce node to keep the complexity down:

A frame of the looped fluid sim remeshed

I exported the range of frames as an Alembic file, and imported it into Unity with the Alembic plugin.

I threw together a really quick MonoBehaviour to play the Alembic stream:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UTJ.Alembic;

[RequireComponent(typeof(AlembicStreamPlayer))]
public class PlayAlembic : MonoBehaviour
{
	public float playSpeed = 0.02f;

	AlembicStreamPlayer sPlayer;

	// Use this for initialization
	void Start ()
	{
		sPlayer = GetComponent<AlembicStreamPlayer>();
	}

	// Update is called once per frame
	void Update () 
	{
		sPlayer.currentTime += playSpeed;
		sPlayer.currentTime = sPlayer.currentTime % 1.0f;
	}
}
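
One thing worth noting: the hard coded wrap at 1.0 assumes the exported Alembic loop is about a second long, so for a different loop length you’d want to wrap by the actual stream duration instead.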

As a last little thing, I packaged the network up a lot neater, and dumped it in a loop that ran the process on 16 different 46-frame segments of the original simulation.
The idea being, why try to find good first and last frames when you can just go for a coffee, and come back and have 16 to choose from!

The loops with big splashes definitely don’t work very well (they look like they aren’t looping, because lots of particles die on the same frame), but there are some fun examples in here:

16 different looping segments

Chopped Squabs – Pt 4

 

Last post was about smoke, particles and weird looking ball things.
This post is about the audio for the video.

I wanted to make a sort of windy / underwater muffled sound, and managed to get somewhat close to what I wanted, just using Houdini CHOPs!

Pulse Noise

I decided not to create the audio in the same hip file, since it was already getting a bit busy, and because the data I want to use is already cached out to disk (geometry, pyro sim, etc).

The new hip file just has a small geometry network and a chops network.
Here’s the geometry network:

Pulse amount network

I wanted the audio to peak at times where the pulse was peaking on the tuber bulbs, so the first step was to import the bulbs geometry:

Tuber ball geometry

Next I’m promoting the pulse amount from points to detail, using “sum” as the promotion method (this adds up all the pulse amounts for all points in the bulbs every frame).
I don’t care about the geometry any more, because the sum is a detail attribute, so I delete everything except for a single point.

I had a bit of a hard time working out how to bring the values of a detail attribute into CHOPs as a channel. I think it should be simple to do with a Channel CHOP, but I didn’t succeed at the magic guessing game of syntax for making that work.

Anyway, since importing point positions is easy, I just used an attribute wrangle to copy the pulse sum into the position of my single point:

@P = detail(@opinput1, "pulse");

Audio synthesis fun

I had some idea of what I wanted, and how to make it work, from experiments in other software (Supercollider, PureData, etc).

I found that creating an interesting wind sound could be achieved through feeding noise into a lowpass filter.

I also tried this out in Unity, grabbing the first C# code I could find for filters:
https://stackoverflow.com/questions/8079526/lowpass-and-high-pass-filter-in-c-sharp

Here is a Unity MonoBehaviour I built from that code above:

using System;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class PlayerAudio : MonoBehaviour
{
    AudioSource _requiredAudioSource;
    public int _samplerate = 44100;

    private const float _resonance      = 1.0f;
    private float _phase                = 0.0f;
    private float _rampInAmount         = 0.0f;
    private float _frequency            = 800.0f;
    private System.Random _rand1        = new System.Random();
    private float[] _inputHistory       = new float[2];
    private float[] _outputHistory      = new float[3];

    void Start ()
    {
        _requiredAudioSource = GetComponent<AudioSource>();
        AudioClip testClip = AudioClip.Create("Wind", _samplerate * 2, 1, _samplerate, true, OnAudioRead);
        _requiredAudioSource.clip = testClip;
        _requiredAudioSource.loop = true;
        _requiredAudioSource.Play();
    }

    void Update ()
    {
        _rampInAmount = Mathf.Min(_rampInAmount + (Time.deltaTime/2.0f), 1.0f);
    }

    void OnAudioRead(float[] data)
    {
        float c, a1, a2, a3, b1, b2;

        for (int i = 0; i < data.Length; i++)
        {
            // Create a random amplitude value
            double currentRand  = _rand1.NextDouble();
            float amplitudeRand = (float)(currentRand * 2.0f - 1.0f) * _rampInAmount;
            amplitudeRand /= 2.0f;

            // Phase over a few seconds, phase goes from 0 - 2PI, so wrap the value
            float randRange = Mathf.Lerp(-1.0f, 1.0f, (float)currentRand);
            _phase += 1.0f / _samplerate;
            _phase += randRange * 200.0f / _samplerate;
            _phase %= (Mathf.PI * 2.0f);

            float interpolator = (Mathf.Sin(_phase) + 1.0f) / 2.0f;
            _frequency = Mathf.Lerp(100, 200, interpolator);

            // Low pass filter
            c = 1.0f / (float)Math.Tan(Math.PI * _frequency / _samplerate);
            a1 = 1.0f / (1.0f + _resonance * c + c * c);
            a2 = 2f * a1;
            a3 = a1;
            b1 = 2.0f * (1.0f - c * c) * a1;
            b2 = (1.0f - _resonance * c + c * c) * a1;

            float newOutput = a1 * amplitudeRand + a2 * this._inputHistory[0] + a3 * this._inputHistory[1] - b1 * this._outputHistory[0] - b2 * this._outputHistory[1];

            this._inputHistory[1] = this._inputHistory[0];
            this._inputHistory[0] = amplitudeRand;

            this._outputHistory[2] = this._outputHistory[1];
            this._outputHistory[1] = this._outputHistory[0];
            this._outputHistory[0] = newOutput;

            data[i] = newOutput;
        }
    }
}

It’s pretty gross, doesn’t work in webgl, and is probably very expensive.
But if you’re a Unity user it might be fun to throw in a project to check it out 🙂
I take no responsibility for exploded speakers / headphones…

With that running, you can mess with the stereo pan and pitch on the Audio Source, for countless hours of entertainment.

Back to Chops

When testing out Audio in a hip file, I could only get it to play if I use the “Scrub” tab in the audio panel.
You need to point it to a chopnet, and make sure that the chopnet is exporting at least one channel:

Audio panel settings

You should also make sure that the values you output are clamped between -1 and 1, otherwise you’ll get audio popping nastiness.
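
A Limit CHOP set to clamp between -1 and 1 near the end of the network is an easy way to handle that.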

The CHOP network I’m using to generate the audio looks like this:

Audio generation chopnet

What I’d set up in Unity was a channel of noise that is run through a High Pass filter with a varying Cutoff amount.
I tried to do exactly the same thing in Houdini: I created a Noise CHOP and a Pass Filter CHOP, and sampled the value of the noise CHOP in the Cutoff field using an expression:

chop("chan1")

I was hoping that the whole waveform would be modified so I could visualize the results in the Motion FX View, but what I found instead is that the waveform was modified by the current frame’s noise value in Motion FX.
With my limited knowledge of CHOPs, it’s hard to describe what I mean by that, so here’s a gif of scrubbing frames:

Scrubbing through frames with high pass

It’s likely that it would still have given me the result I wanted, but having the waveform jump around like that, and not being able to properly visualize it, was pretty annoying.

So, instead of creating noise to vary the high pass cutoff, I instead created a bunch of noise channels, and gave each of them their own high pass cutoff, then I blend those together (more on that part later).

In my Pass Filter, I created two new parameters (using Edit Parameter Interface) that I reference from a few different nodes:

Custom params on high pass filter

Through trial and error, I found that low cutoff values from 0.3 to 0.6 gave me what I wanted, so I use an expression to filter each of the channels with a cutoff in that range, based on the channel ID:

ch("baseCutoff") + (fit($C/(ch("passBands")-1), 0, 1, 0.2, 0.5))

The baseCutoff is sort of redundant; it could just be built into the “fit” range, but I was previously using it in other nodes too, and never got around to removing it 🙂

I’m using the “passBands” parameter in that expression, but I also use it in the Channel creation in the noise CHOP:

Noise channel creation from pulse bands param

It probably would have been more sensible just to hard code the number of channels here, and then just count how many channels I have further downstream, but this works fine 🙂

In the Transform tab of the noise, I’m using $C in the X component of the Translate, to offset the noise patterns, so each channel is a bit different. In hindsight, using $C in the “seed” gives a better result.

So now I have some channels of filtered noise!
I’ll keep passBands at 5 for the rest of this post, to keep it easy to visualize:

Filter noise, 5 bands

Combining noise and pulse

The top right part of the CHOP network is importing the pulse value that I set up earlier.

Geometry import to chops

I’m importing a single point, and the pulse value was copied into Position, as mentioned earlier.
I’m deleting the Y and Z channels, since they are the same as X, normalizing the data using Limit, moving the data into the 0-1 range, and then lagging and filtering to soften out the data a bit.

Here is what the pulse looks like before and after the smoothing:

Pulse attribute channel

So this pulse channel is tx0, and I merge that in with all the noise channels (chan1-x).

I want to use the value of this pulse channel to blend in the noise channels I previously created.
So, for example, if I have 4 bands: for pulse values between 0 and 0.25 I want to use band 1 noise, between 0.25 and 0.5 I’ll use band 2, between 0.5 and 0.75 band 3, etc.

I didn’t want a hard step between bands, so I’m overlapping them, and in the end I found that having quite a wide blend worked well (I’m always getting a little of all noise channels, with one channel dominant).

This all happens in the vopchop_combineNoiseAndPulse:

vop: Combine Noise And Pulse

The left side of the network is working out the band midpoint ((1 / number of channels) * 0.5 * current channel number).

The middle section gets the current pulse value, and finds how far away that value is from the band midpoint for each channel.

If I just output the value of that fit node, and reduce the overlap, hopefully you can see what this is doing a little more obviously:

Motion view of channel blending

As the red line increases, it goes through the midpoints of each noise band, and the corresponding noise channel increases in intensity.

Since I’ve lowered the overlap for this image, there are actually gaps between bands, so there are certain values of the pulse that will result in no noise at all, but that’s just to make the image clearer.
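
To make the weighting a little more concrete, here is a rough wrangle-style VEX sketch of what that fit is doing for a single noise channel (the names here are mine, not the actual VOP parameters, and the linear falloff shape is just an assumption):

// Illustrative only: bandMidpoint comes from the left side of the network,
// overlap controls how far a band bleeds into its neighbours.
float pulse        = chf("pulse");
float bandMidpoint = chf("bandMidpoint");
float overlap      = chf("overlap");

// The further the pulse is from this band's midpoint, the quieter the band gets.
float bandWeight = fit(abs(pulse - bandMidpoint), 0.0, overlap, 1.0, 0.0);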

The rest of the network is multiplying the pulse value with the noise channel value, and I end up with this:

Multiplied noise and pulse value

With that done, I delete the pulse channel, and I add all the noise channels together.

The last thing I’m doing is outputting the noise in stereo, and for that I’m just arbitrarily panning the sound between two channels using noise.

I create two new noise channels for the stereo output, and then use a vop to flip the second channel:

Noise channel flip

So I end up with this:

Stereo balance waveform

I multiply this back on the original noise:

StereoNoise

There are a few other little adjustments in there, like making sure the amplitude is between -1 and 1, etc.

Also, I ramp the audio in and out over the first and last 20 frames:

RampInAndOut

And that’s it!

Hopefully you’ve enjoyed this series breaking down the squab video and my adventures in CHOPs. I can’t imagine I’ll use it for audio again, but it was definitely a fun experiment.

Today is Houdini 17 release day, so I can’t wait to dig in, and play around with all the new interesting features, perhaps there will be some vellum sims for future blog posts 🙂

Subsurface Scattering spherical harmonics – pt 3

Welcome to part 3 of this exciting series on how to beat a dead horse.

By the time I got to the end of the work for the last post, I was just about ready to put this project to bed (and by that, I mean P4 obliterate…).

There was just one thing I wanted to fix: The fact that I couldn’t rotate my models!
If I rotate the object, the lighting rotates with it.

Spaaaaaaace

To fix the rotation issue, in the UE4 lighting pass I need to transform the light vector into the same space that I’m storing the SH data in (object space, for example).

RotateSpace

To do that, I need to pass through at least two of those object orientation vectors to the lighting pass (for example, the forward and right vectors of the object).

So, that’s another 6 floats (if I don’t compress them) that I need to pass through, and if you remember from last time, I’d pushed the limits of MRTs with my 16 spherical harmonics coefficients, so I don’t have any space left!

This forced me to do one of the other changes I talked about: Use 3 band Spherical Harmonics for my depth values instead of 4 band.
That reduces the coefficients from 16 to 9, and gives me room for my vectors.

<Insert montage of programming and swearing here>

3bandSH

So yay, now I have 3 band SH, and room for sending more things through to lighting.

Quality didn’t really change much, either, and it helped me drop down to 5 UV channels, which became very important a little later…

Going off on a tangent

I figured that since I was solving the problem for object orientation, maybe I could also do something for deforming objects too?
For an object where the depth from one side to the other doesn’t change much when it’s deforming, it should be ok to have baked SH data.

The most obvious way to handle that was to calculate and store the SH depth in Tangent space, similar to how Normal maps are usually stored for games.

I wanted to use the same tangent space that UE4 uses, and although Houdini 15 didn’t have anything native for generating that, there is a plugin!

https://github.com/teared/mikktspace-for-houdini

With that compiled and installed, I could plonk down a Compute Tangents node, and now I have Tangents and Binormals stored on each vertex, yay!

At this point, I create a matrix from the Tangent, Binormal and Normal, and store the transpose of that matrix.
Multiplying a vector against it will give me that vector in Tangent space. I got super lazy, and did this in a vertex wrangle:

matrix3 @worldToTangentSpaceMatrix;
vector UE4Tang;
vector UE4Binormal;
vector UE4Normal;

// Tangent U and V are in houdini coords
UE4Tang         = swizzle(v@tangentu, 0,2,1);
UE4Binormal     = swizzle(v@tangentv, 0,2,1);
UE4Normal       = swizzle(@N, 0,2,1);

@worldToTangentSpaceMatrix = transpose(set(UE4Tang, UE4Binormal, UE4Normal));

The swizzle stuff is just swapping Y and Z (coordinate systems are different between UE4 and Houdini).

Viewing the Tangent space data

To make debugging easier, at this point I made a fun little debug node that displays Tangents, Binormals and Normals the same as the model viewer in UE4.

It runs per vertex, and creates new coloured line primitives:

TangentFace

Haven’t bothered cleaning it up much, but hopefully you get the idea:

TangentPrimsVOP.png

And the vectorToPrim subnet:

VectorToPrimsVOP.png

So, add a point, add some length along the input vector and add another point, create a prim, create two verts from the points, set the colour.
I love how easy it is to do this sort of thing in Houdini 🙂

The next step was to modify the existing depth baking code.

For each vertex in the model, I was sending rays out through the model, and storing the depth when they hit the other side.
That mostly stays the same, except that when storing the rays in the SH coefficients, I need to convert them to tangent space first!

HitsToSH.png
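
In wrangle terms, the change is basically just one extra multiply before the depth gets accumulated (this is only a sketch; rayDir is a made-up name for the world space ray direction being stored, and the actual SH accumulation lives in the VOP above):

// Move the world space ray direction into tangent space using the matrix stored earlier,
// then use that direction when accumulating the depth into the SH coefficients.
matrix3 @worldToTangentSpaceMatrix;
vector tangentRayDir = normalize(rayDir) * @worldToTangentSpaceMatrix;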

Getting animated

Since most of the point of a Tangent space approach was to show a deforming object not looking horrible, I needed an animated model.

I was going to do a bunch of animation in Modo for this, but I realized that transferring all my Houdini custom data to Modo, and then out to fbx might not be such a great idea.

Time for amazing Houdini animation learningz!!
Here’s a beautiful test that any animator would be proud of, rigged in Houdini and dumped out to UE4:

StupidTube.gif

So, I spent some time re-rigging the Vortigaunt in Houdini, and doing some more fairly horrible animation that you can see at the top of this post.

RiggedVort.png

Although the results aren’t great, I found this weirdly soothing.
Perhaps because it gave me a break from trying to debug shaders.

At some point in the future, I would like to do a bit more animation/rigging/skinning.
Then I can have all the animators at work laugh at my crappy art, in addition to all the other artists…

Data out

Hurrah, per-vertex Tangent space Spherical Harmonic depth data now stored on my animated model!

This was about the part where I realized I couldn’t find a way to get the Tangents and Binormals from the Houdini mesh into Unreal…

When exporting with my custom data, what ends up in the fbx is something like this:

   UserDataArray:  {
    UserDataType: "Float"
    UserDataName: "tangentu_x"
    UserData: *37416 {...

When I import that into UE4, it doesn’t know what that custom data is supposed to be.

If I export a mesh out of Modo, though, UE4 imports the Tangents and Binormals fine.
So I jumped over into Modo, and exported out a model with Tangents and Binormals, and had a look at the fbx.
This showed me I needed something more like this:

LayerElementTangent: 0 {
 Version: 102
 Name: "Texture"  
 MappingInformationType: "ByPolygonVertex"
 ReferenceInformationType: "Direct"
 Tangents: *112248 {...

This is probably about when I should have set the project on fire, and found something better to do with my time, but…

C# to the rescue!!

I wrote an incredibly silly little WPF program that reads in an fbx, and changes the tangentu and tangentv user data into the correct layer elements.

Why WPF you ask?
Seriously, what’s with all the questions? What is this, the Spanish inquisition?
Real answer: Almost any time I’ve written any bit of code for myself in the past 7 years, it’s always a WPF program.
80% of them end up looking like this:

AmazingUI

The code is horrible, I won’t paste it all, but I build a list of all the vectors then pass them through to a function that re-assembles the text and spits it out:
        public string CreateLayerElementBlock(List<Vector3D> pVectors, string pTypeName)
        {
            string newBlock = "";

            int numVectors  = pVectors.Count;
            int numFloats   = pVectors.Count * 3;

            newBlock += "\t\tLayerElement" + pTypeName + ": 0 {\n";
            newBlock += "\t\t\tVersion: 102\n";
            newBlock += "\t\t\tName: \"Texture\"\n";
            newBlock += "\t\t\tMappingInformationType: \"ByPolygonVertex\"\n";
            newBlock += "\t\t\tReferenceInformationType: \"Direct\"\n";
            newBlock += "\t\t\t" + pTypeName + "s: *" + numFloats + " {\n";
            newBlock += "\t\t\t\ta: ";
	...

Gross. Vomit. That’s an afternoon of my life I’ll never get back.
But hey, it worked, so moving on…

UE4 changes

There weren’t many big changes on the UE4 side, mostly just the switch over to 3 band SH.

One really fun thing bit me in the arse, though.
I’d been testing everything out on my static mesh version of the model.
When I imported the rigged model, I needed to change the material to support it:

UseWithSkeletal

And then the material failed to compile (and UE4 kept crashing)…
So, apparently, skinned meshes use a bunch of the UV coordinate slots for… Stuff!
I needed to switch back to my old approach of storing 6 coefficients in TexCoord1, 2 and 3, and the remaining three SH coeffs in vertex colour RGB:

RiggedMatChanges.png

Cropped this down to exclude all the messy stuff I left in for texture based SH data, but those three Appends on the right feed into the material pins I added for SH data in the previous posts.
And yeah, there’s some redundancy in the math at the bottom too, but if you don’t tell anyone, I won’t.

Shader changes

Now to pass the Tangent and Binormal through to the lighting pass.

I ended up compressing these, using Octahedron normal vector encoding, just so I could save a few floats.
The functions to do this ship with UE4, and they allow me to pass 2 floats per vector, rather than x,y,z, and the artifacts are not too bad.
Here’s some more information on how it works:

OctahedronEncoding.png

So now the Tangent and Binormal data is going through to the lighting pass, and I transform the light to tangent space before looking up the SH data:

 float3x3 TangentToWorld =
 {
  GBuffer.WorldTangent,
  GBuffer.WorldBinormal,
  cross(GBuffer.WorldTangent, GBuffer.WorldBinormal),
 };

 float3 TangentL = mul(L, transpose(TangentToWorld));

 float DepthFromPixelToLight  = saturate(GetSH(SHCoeffs, TangentL));

Probably could do that transposing in BasePassPixelShader I guess, and save paying for it on every pixel for every light, but then there’s a lot of things I probably could do. Treat my fellow human beings nicer, drink less beer, not stress myself out with silly home programming projects like this…

Conclusion

If I were to ever do this for real, on an actual game, I’d probably build the SH generation into the import process, or perhaps when doing stuff like baking lighting or generating distance fields in UE4.

If you happened to have a bunch of gbuffer bandwidth (i.e. you had to add gbuffers for something else), and you have a lot of semi-translucent things, and engineering time to burn, and no better ideas, I suppose there could be a use for it.
Maybe.

Subsurface Scattering spherical harmonics – pt 2

 

This is my 2nd blog post on using spherical harmonics for depth based lighting effects in Unreal 4.

The first blog post focused on generating the spherical harmonics data in Houdini, this post focuses on the Unreal 4 side of things.

I’m going to avoid posting much code here, but I will try to provide enough information to be useful if you choose to do similar things.

SH data to base pass

The goal was to look up the depth of the object from each light in my scene, and see if I could do something neat with it.

In UE4 deferred rendering, that means that I need to pass my 16 coefficients from the material editor –> base pass pixel shader –> the lighting pass.

First up, I read the first two SH coefficients out of the red and green vertex colour channels, and the rest out of my UV sets (remembering that I kept the default UV set 0 for actual UVs):

SHBaseMatUVs

Vertex colour complications

You’ll notice a nice little hardcoded multiplier up there… This was one of the annoyances with using vertex colours: I needed to scale the value of the coefficients in Houdini to 0-1, because vertex colours are 0-1.

This is different to the normalization part I mentioned in the last blog post, which was scaling the depth values before encoding them in SH. Here, I’m scaling the actual computed coefficients. I only need to do this with the vertex colours, not the UV data, since UVs aren’t restricted to 0-1.

The 4.6 was just a value that worked, using my amazing scientific approach of “calculate SH values for half a dozen models of 1 000 – 10 000 vertices, find out how high and low the final sh values go, divide through by that number +0.1”. You’d be smarter to use actual math to find the maximum range for coefficients for normalized data sets, though… It’s probably something awesome like 0 –> 1.5 pi.

Material input pins

Anyway, those values just plug into the SH Depth Coeff pins, and we’re done!!

Unreal 4 SH depth material

Ok.
That was a lie.
Those pins don’t exist usually… And neither does this shading model:

SHDepthShadingModel

So, that brings me to…

C++ / shader side note

To work out how to add a shading model, I searched the source code for a different shading model (hair I think), and copied and pasted just about everything, and then went through a process of elimination until things worked.
I took very much the same approach to the shader side of things.

This is why I’m a Tech Artist, and not a programmer… Well, one of many reasons 😉
Seriously though, being able to do this is one of the really nice things about having access to engine source code!

The programming side of this project was a bunch of very simple changes across a wide range of engine source files, so I’m not going to post much of it:

P4Lose

There is an awful lot of this code that really should be data instead. But Epic gave me an awesome engine and lets me mess around with source code, so I’m not going to complain too much 😛

Material pins (continued…)

So I added material inputs for the coefficients, plus some absorption parameters.

Sh coeffs

The SH Coeffs material pins are new ones, so I had to make a bunch of changes to material engine source files to make that happen.
Be careful when doing this: Consistent ordering of variables matters in many of these files. I found that out the easy way: Epic put comments in the code about it 🙂

Each of the SH coeffs material inputs is a vector with 4 components, so I need 4 of these to send my 16 coefficients through to the base pass.

Custom data (absorption)

The absorption pins you might have noticed from my material screenshot are passed as “custom data”.
Some of the existing lighting models (subsurface, etc) pass additional data to the base pass (and also through to lighting, but more on that later).

These “custom data” pins can be renamed for different shading models. So you can use these if you’d rather not go crazy adding new pins, and you’re happy with passing through just two extra float values.
Have a look at MaterialGraph.cpp, and GetCustomDataPinName if that sounds like a fun time 🙂

Base pass to lighting

At this point, I’d modified enough code that I could start reading and using my SH values in the base pass.

A good method for testing if the data was valid was using the camera vector to look up the SH depth values. I knew things were working when I got similar results to what I was seeing in Houdini when using the same approach:

BasePassDebug

That’s looking at “Base Color” in the buffer visualizations.

I don’t actually want to do anything with the SH data in the base pass, though, so the next step is to pass the SH data through to the lighting pass.

Crowded Gbuffer

You can have a giant parameter party, and read all sorts of fun data in the base pass.
However, if you want to do per-light stuff, at some point you need to write all that data into a handful of full screen buffers that the lighting pass uses. By the time you get to lighting, you don’t have per object data, just those full screen buffers and your lights.

These gbuffers are lovingly named GBufferA, GBufferB, GBuffer… You get the picture.

You can visualize them in the editor by using the various buffer visualizers, or explicitly using the “vis” command, e.g: “vis gbuffera”:

visGbuffers

There are some other buffers being used (velocity, etc), but these are the ones I care about for now.

I need to pass an extra 16 float values through to lighting, so surely I could just add 4 new gbuffers?

Apparently not, the limit for simultaneous render targets is 8 🙂

I started out by creating 2 new render targets, so that covers half of my SH values, but what to do with the other 8 values?

Attempt 1 – Packing it up

To get this working, there were things that I could sacrifice from the above existing buffers to store my own data.

For example, I rarely use Specular these days, aside from occasionally setting it to a constant, so I could use that for one of my SH values, and just hard code Specular to 1 in my lighting pass.

With this in mind, I overwrote all the things I didn’t think I cared about for stylized translucent meshes:

  • Static lighting
  • Metallic
  • Specular
  • Distance field anything (I think)

Attempt 2 – Go wide!

This wasn’t really ideal. I wasn’t very happy about losing static lighting.

That was about when I realized that although I couldn’t add any more simultaneous render targets, I could change the format of them!

The standard g-buffers are 8 bits per channel, by default. By going 16 bit per channel, I could pack two SH values into each channel, and store all my SH data in my two new g-buffers without the need for overwriting other buffers!

Well, I actually went with PF_A32B32G32R32F, so 32 bits per channel because I’m greedy.

It’s probably worth passing out in horror at the cost of all this at this point: 2 * 128-bit buffers is something like 250 MB of data. I’m going to talk about this a little later 🙂

Debugging, again

I created a few different procedural test assets in Houdini with low complexity as test cases, including one where I deleted all but one polygon as a final step, so that I could very accurately debug the SH values 🙂

On top of that, I had a hard coded matrix in the shaders that I could use to check, component by component, that I was getting what I expected when passing data from the base pass to lighting, with packing/unpacking, etc:

const static float4x4 shDebugValues = 
{
	0.1, 0.2, 0.3, 0.4,
	0.5, 0.6, 0.7, 0.8,
	0.9, 1.0, 1.1, 1.2,
	1.3, 1.4, 1.5, 1.6
};

It seems like an obvious and silly thing to point out, but it saved me some time 🙂

Here are some of my beautiful procedural test assets (one you might recognize from the video at the start of the post):

Houdini procedural test asset (rock thing)
testobject3
testobject2
testobject1

“PB-nah”, the lazy guide to not getting the most out of my data

Ok, SH data is going through to the lighting pass now!

This is where a really clever graphics programmer could use it for some physically accurate lighting work, proper translucency, etc.

To be honest, I was pleasantly surprised that anything was working at this stage, so I threw in a very un-pbr scattering, and called it a day! 🙂

float3 SubsurfaceSHDepth( FGBufferData GBuffer, float3 L, float3 V, half3 N )
{
	float AbsorptionDistance 	= GBuffer.CustomData.x;
	float AbsorptionPower 		= lerp(4.0f, 16.0f, GBuffer.CustomData.y);

	float DepthFromPixelToLight 	= Get4BandSH(GBuffer.SHCoeffs, L);
	float absorptionClampedDepth 	= saturate(1.0f / AbsorptionDistance * DepthFromPixelToLight);
	float SSSWrap 			= 0.3f;
	float frontFaceFalloff 		= pow(saturate(dot(-N, L) + SSSWrap), 2);

	float Transmittance 		= pow(1 - absorptionClampedDepth, AbsorptionPower);

	Transmittance *= frontFaceFalloff;

	return Transmittance * GBuffer.BaseColor;
}

It’s non view dependent scattering, using the SH depth through the model towards the light, then dampened by the absorption distance.
The effect falls off by face angle away from the light, but I put a wrap factor on that because I like the way it looks.
For all the work I’ve put into this project, probably the least of it went into the actual lighting model, so I’m pretty likely to change that code quite a lot 🙂

What I like about this is that the scattering stays fairly consistent around the model from different angles:

GlowyBitFront
GlowyBitSide

So as horrible and inaccurate and not PBR as this is, it matches what I see in SSS renders in Modo a little better than what I get from standard UE4 SSS.

The End?

Broken things

  • I can’t rotate my translucent models at the moment 😛
  • Shadows don’t really interact with my model properly

I can hopefully solve both of these things fairly easily (store data in tangent space, look at shadowing in other SSS models in UE4); I just need to find the time.
I could actually rotate the SH data, but apparently that’s hundreds of instructions 🙂

Cost and performance

  • 8 uv channels
  • 2 * 128 bit buffers

Not really ideal from a memory point of view.

The obvious optimization here is to drop down to 3 band spherical harmonics.
The quality probably wouldn’t suffer, and that’s 9 coefficients rather than 16, so I could pack them into one of my 128 bit gbuffers instead of two (with one spare coefficient left over that I’d have to figure out).

That would help kill some UV channels, too.

Also, using 32 bits per channel (so 16 bits per SH coefficient) is probably overkill. I could swap over to a 16 bit per channel uint buffer, and pack two coefficients per channel at 8 bits each, and that would halve the memory usage again.

As for performance, presumably evaluating 3 band spherical harmonics would be cheaper than 4 band. Well, especially because then I could swap to using the optimized UE4 functions that already exist for 3 band SH 🙂

Render… Differently?

To get away from needing extra buffers and having a constant overhead, I probably should have tried out the new Forward+ renderer:

https://docs.unrealengine.com/latest/INT/Engine/Performance/ForwardRenderer/

Since you have access to per object data, presumably passing around SH coefficients would also be less painful.
Rendering is not really my strong point, but my buddy Ben Millwood has been nagging me about Forward+ rendering for years (he’s writing his own renderer http://www.lived3d.com/).

There are other alternatives to deferred, or hybrid deferred approaches (like Doom 2016’s clustered forward, or Wolfgang Engel’s culled visibility buffers) that might have made this easier too.
I very much look forward to the impending not-entirely-deferred future 🙂

Conclusion

I learnt some things about Houdini and UE4, job done!

Not sure if I’ll keep working on this at all, but it might be fun to at least fix the bugs.

 

Factory – pt 2 (magical placeholder land)

Part 2 of: https://geofflester.wordpress.com/2016/02/07/factory-pt-1/

FlowersInDirt

I had to split this post up, so I want to get this out of the way:
You’re going to see a lot of ugly in the post. #Procedural #Placeholder ugly 🙂

This post is mostly about early pipeline setup in Houdini Python, and UE4 C++.

Placeholder plants

For testing purposes, I made 4 instances of #procedural plants using L-systems:

UniqueFlowers

When I say “made”, I mean ripped from my Shangri-La tribute scene, just heavily modified:

https://geofflester.wordpress.com/2015/09/05/rohan-dalvi-shangri-la-themed-procedural-islands/

Like I mention in that post, if you want to learn lots about Houdini, go buy tutorials from Rohan Dalvi.
He has some free ones you can have a run through, but the floating islands series is just fantastic, so just buy it 😛

I exported these plants as FBX, imported them into UE4, and gave them a flat vertex colour material, ’cause I ain’t gonna bother with unwrapping placeholder stuff:

UE4Flowers

The placeholder meshes are 4000 triangles each.
Amusingly, when I first brought them in, I hadn’t bothered checking the density, and they were 80 000 + triangles, and the frame rate was at a horrible 25 fps 😛

Houdini –> UE4

So, the 4 unique plants are in UE4. Yay!

But, I want to place thousands of them. It would be smart to use the in-built vegetation tools in UE4, but my purpose behind this post is to find some nice generic ways to get placement data from Houdini to UE4, something that I’ve been planning to do in my old Half Life scene for ages.
So I’m going to use Instanced Static Meshes 🙂

Generating the placements

For now, I’ve gone with a very simple method of placing vegetation: around the edges of my puddles.
It will do for the sake of example. So here’s the puddle and vegetation masks in Houdini (vegetation mask on the left, puddle mask on the right):

PuddleAndVegeMask

A couple of layers of noise, and a fit range applied to vertex colours.

I then just scatter a bunch of points on the mask on the left, and then copy flowers onto them, creating a range of random scales and rotations:

FlowersOnMask.png

The node network for that looks like this:

PuttingPointsOnThePlane.png

Not shown here, off to the left, is all the flower setup stuff.
I’ll leave that alone for now, since I don’t know if I’ll be keeping any of that 🙂

The right hand side is the scattering, which can be summarized as:

  • Read ground plane
  • Subdivide and cache out the super high poly plane
  • Move colour into Vertex data (because I use UVs in the next part, although I don’t really have to do it this way)
  • Read the brick texture as a mask (more on that below)
  • Move mask back to Point data
  • Scatter points on the mask
  • Add ID, Rotation and Scale data to each point
  • Flip YZ axis to match UE4 (could probably do this in Houdini prefs instead)
  • Python all the things out (more on that later)

Brick mask

I mentioned quickly that I read the brick mask as a texture in the above section.
I wanted the plants to mostly grow out of cracks, so I multiplied the mask by the inverted height of the bricks, clamped to a range, using a Point VOP:

BrickTextureOnMask.png

And here’s the network, but I won’t explain that node for node, it’s just a bunch of clamps and fits which I eyeballed until it did what I wanted:

HeightTextureVOP.png
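
As a very rough wrangle-style equivalent of that VOP (the texture path, the uv attribute, the mask attribute and the fit range here are all placeholders, not the real values):

// Sample the brick height texture at this point's UVs, invert it so the cracks
// between bricks get high values, remap to taste, and multiply it onto the mask.
vector brickSample = colormap("brick_height.exr", v@uv.x, v@uv.y);
f@mask *= fit(1.0 - brickSample.x, 0.4, 0.9, 0.0, 1.0);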

Python all the things out, huh?

Python and I have a special relationship.
It’s my favourite language to use when there aren’t other languages available.

Anyway… I’ve gone with dumping my instance data to XML.
More on that decision later.

Now for some horrible hackiness:


node = hou.pwd()
from lxml import etree as ET

geo = node.geometry()

root = ET.Element("ObjectInstances")

for point in geo.points():
    pos         = point.position()
    scale       = hou.Point.attribValue(point, 'Scale')
    rotation    = hou.Point.attribValue(point, 'Rotation')
    scatterID   = "Flower" + repr(hou.Point.attribValue(point, 'ScatterID')+1)

    PosString       = repr(pos[0]) + ", " + repr(pos[1]) + ", " + repr(pos[2])
    RotString       = repr(rotation)
    ScaleString     = repr(scale) + ", " + repr(scale) + ", " + repr(scale)

    ET.SubElement(root, scatterID,
                  Location=PosString,
                  Rotation=RotString,
                  Scale=ScaleString)

# Do the export
tree = ET.ElementTree(root)
tree.write("D:/MyDocuments/Unreal Projects/Warehouse/Content/Scenes/HoudiniVegetationPlacement.xml", pretty_print=True)

NOTE: Not sure if it will post this way, but in Preview the tabbing seems to be screwed up, no matter what I do. Luckily, programming languages have block start and end syntax, so this would never be a prob… Oh. Python. Right.

Also, all hail the ugly hard coded path right at the end there 🙂
(Trust me, I’ll dump that into the interface for the node or something, would I lie to you?)

Very simply, this code exports an XML element for each Point.
I’m being very lazy for now, and only exporting Y rotation. I’ll probably fix that later.

This pumps out an XML file that looks like this:

<ObjectInstances>
  <Flower1 Location="-236.48265075683594, -51.096923828125, -0.755022406578064" Rotation="(0.0, 230.97622680664062, 0.0)" Scale="0.6577988862991333, 0.6577988862991333, 0.6577988862991333"/>

</ObjectInstances>

Reading the XML in UE4

In the spirit of slapping things together, I decided to make a plugin that would read the XML file, and then add all the instances to my InstancedStaticMesh components.

First up, I put 4 StaticMeshActors in the scene, and in place I gave them an InstancedStaticMesh component. I could have done this in a Blueprint, but I try to keep Blueprints to a minimum if I don’t actually need them:

InstancedStaticMesh

As stated, I’m a hack, so the StaticMeshActor needs to be named Flower<1..4>, because the code matches the name to what it finds in the XML.

The magic button

I should really implement my code as either a specialized type of Data Table, or perhaps some sort of new thing called an XMLInstancedStaticMesh, or… Something else clever.

Instead, I made a Magic Button(tm):

MagicButton

XML Object Loader. Probably should have put a cat picture on that, in retrospect.

Brief overview of code

I’m not going to post the full code here for a bunch of reasons, including just that it is pretty unexciting, but the basic outline of it is:

  1. Click the button
  2. The plugin gets all InstancedStaticMeshComponents in the scene
  3. Get a list of all of the Parent Actors for those components, and their labels
  4. Process the XML file, and for each Element:
    • Check if the element matches a name found in step 3
    • If the Actor name hasn’t already been visited, clear the instances on the InstancedStaticMesh component, and mark it as visited
    • Get the position, rotation and scale from the XML element, and add a new instance to the InstancedStaticMesh with that data

And that’s it! I had a bit of messing around, with originally doing Euler –> Quaternion conversion in Houdini instead of C++, and also not realizing that the rotations were in radians, but all in all it only took an hour or two to throw together, in the current very hacky form 🙂

Some useful snippets

The FastXML library in UE4 is great, made life easy:

https://docs.unrealengine.com/latest/INT/API/Runtime/XmlParser/FFastXml/index.html

I just needed to create a new class inheriting from the IFastXmlCallback interface, and implement the Process<x> functions.

I’d create a new instance in ProcessElement, then fill in the actual data in ProcessAttribute.

Adding an instance to an InstancedStaticMeshComponent is as easy as:


SomeStaticMeshComp->AddInstance(FTransform());

And then, in shortened form, updating the instance data:


FTransform InstanceTransform;
_currentStaticMeshComp->GetInstanceTransform(_currentInstanceID, InstanceTransform);

// ...

InstanceTransform.SetLocation(Location);
InstanceTransform.SetRotation(RotationQuaternion);
InstanceTransform.SetScale3D(Scale);

_currentStaticMeshComp->UpdateInstanceTransform(_currentInstanceID, InstanceTransform);

One last dirty detail…

That’s about it for the code side of things.

One thing I didn’t mention earlier: In Houdini, I’m using the placement of the plants to generate the dirt map mask, so I can blend in details around their roots:

DirtRootsMask.png

So when I export out my ground plane, I am putting the Puddles mask into the blue channel of the vertex colours, and the Dirt mask into the red channel of the vertex colours 🙂

Still to come (for vegetation)

So I need to:

  • Make the actual flowers I want
  • Make the roots/dirt/mossy texture that gets blended in under the plants
  • Build more stuff

Why.. O.o

Why not data tables

I’m all about XML.

But a sensible, less code-y way to do this would be to save all your instance data from Houdini into CSV format, bring it in to UE4 as a data table, then use a Construction Script in a blueprint to iterate over the data and add instances to an Instanced Static Mesh.

I like XML as a data format, so I decided it would be more fun to use XML.

Why not Houdini Engine

That’s a good question…

In short:

  • I want to explore similar workflows with Modo replicators at some point, and I should be able to re-use the c++/Python stuff for that
  • Who knows what other DCC tools I’ll want to export instances out of
  • It’s nice to jump into code every now and then. Keeps me honest.
  • I don’t own it currently, and I’ve spent my software budget on Houdini Indie and Modo 901 already 🙂

If you have any questions, feel free to dump them in the comments; I hurried through this one a little, since it’s at a halfway point without great results to show off yet!

Random fields

Just a quick update on the random functionality I implemented for the Vector Fields in the last post:


https://geofflester.wordpress.com/2014/10/27/swirly-vector-fields-of-doom/

The random data I was generating previously overwrote the values imported for the vector field.
I’ve refactored it so that “Random” is now a “Modifier” component, which means it can be blended with existing vector field data (e.g. flow data baked out of Maya, or other Modifier components I’ve implemented).

I’ve also set up a really basic blend between two random data seeds, to get a simple animated random vector field (rather than the previous approach).
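
Something along these lines (a minimal sketch of the idea, not the actual Modifier code; FRandomStream and the ping-pong weight are just my assumptions about one way to do it):


#include "Math/RandomStream.h"

// Blend between two seeded random vectors over time to get an animated "random" field value
FVector SampleAnimatedRandom(int32 CellIndex, float TimeSeconds, float CycleLength)
{
    // Two deterministic streams per cell, so the field is stable between frames
    FRandomStream StreamA(CellIndex * 2 + 0);
    FRandomStream StreamB(CellIndex * 2 + 1);

    const FVector A = StreamA.GetUnitVector();
    const FVector B = StreamB.GetUnitVector();

    // Ping-pong the blend weight so the result loops smoothly
    const float Phase = FMath::Fmod(TimeSeconds, CycleLength) / CycleLength;
    const float Weight = FMath::Abs(Phase * 2.0f - 1.0f);

    return FMath::Lerp(A, B, Weight).GetSafeNormal();
}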

Less words, more video!!

Towards the end of the video, I turn on the original Motor component, which starts blending with the randomness.

Probably worth mentioning that all the components can be controlled from blueprints by default (because UE4 is awesome), so it would probably only take a few hours of a non-programmer’s time to:

  • Make a gun that shoots grenades that generate vortices in vector fields
  • A button that randomises particle movement in rooms
  • Fans that blow particles around, turning on and off with player interaction
  • Players that generate swirling particles around them as they move
  • Etc.

Oh, also, I bumped the number of particles up to 48 000, for giggles.
I can actually push it up to about 100 000 before it struggles, which is pretty neat for a 4-year-old graphics card (560 Ti)! 🙂

Swirly Vector Fields of doom

C++ and I are friends again, but to be honest UE4 makes that pretty easy.

I’ve been playing around with the Vector Fields in UE4, which are essentially a 3d grid of forces that can be applied per frame to particles.

You can’t really author them in UE4, so I wrote some plugin code that lets me randomly generate them (please excuse the crappy VFX):
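
Under the hood the data is conceptually just a flat array of vectors indexed by (x, y, z). A toy version of the random fill and lookup might look like this (a sketch only, completely unrelated to the engine’s actual vector field classes, and all the names are made up):


#include "Math/RandomStream.h"

// Toy stand-in for a vector field: Resolution^3 forces stored in a flat array
struct FToyVectorField
{
    int32 Resolution = 16;
    TArray<FVector> Forces;

    void FillRandom(int32 Seed, float Strength)
    {
        FRandomStream Stream(Seed);
        Forces.SetNum(Resolution * Resolution * Resolution);
        for (FVector& Force : Forces)
        {
            Force = Stream.GetUnitVector() * Strength;
        }
    }

    // Nearest-cell lookup from normalized [0,1] coordinates
    const FVector& Sample(const FVector& UVW) const
    {
        const int32 X = FMath::Clamp(FMath::FloorToInt(UVW.X * Resolution), 0, Resolution - 1);
        const int32 Y = FMath::Clamp(FMath::FloorToInt(UVW.Y * Resolution), 0, Resolution - 1);
        const int32 Z = FMath::Clamp(FMath::FloorToInt(UVW.Z * Resolution), 0, Resolution - 1);
        return Forces[X + Y * Resolution + Z * Resolution * Resolution];
    }
};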

Next up, I wanted to be able to change data in Vector Fields on the fly.
I have plans for a whole bunch of different “Modifiers”, including spherical impulses (explosions, etc).

For now, I have a basic motor / vortex type thing going on that can be turned on and off through blueprints, blended in and out over the top of the random data, etc:
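
The core of a motor/vortex modifier like that can be as simple as a cross product against the vortex axis; here’s a rough sketch (made-up helper name, not the actual plugin code):


// Returns a force swirling around VortexAxis, centred on VortexCenter (hypothetical helper)
FVector ComputeVortexForce(const FVector& CellWorldPos, const FVector& VortexCenter,
                           const FVector& VortexAxis, float Strength)
{
    const FVector ToCell = CellWorldPos - VortexCenter;

    // Tangential direction around the axis; falls off with distance from the centre
    const FVector Tangent = FVector::CrossProduct(VortexAxis, ToCell).GetSafeNormal();
    const float Falloff = 1.0f / (1.0f + ToCell.SizeSquared());

    return Tangent * Strength * Falloff;
}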

Places I might take this (if I can escape Civilization: Beyond Earth :)):

  • Sample the vector field data in the Material system, so that I can have particle effects and material flow effects tied together somewhat (water on a surface follows the direction of water particles flowing near it, explosions cause materials and particles to react together, etc)
  • Run a really simple inflow/outflow pressure simulation through the grid to replace the random initialisation I have now. Kinda like what my rather clever buddy Ben Millwood did recently for 2d water flood:
    Ben Millwood’s amazing flow map tool of glory
  • Move the Modifier functionality to GPU (animated vector fields in UE4 already implement some of this).
    I’m still getting pretty decent frame rates in debug builds of the editor, so I’m not sure how much I care about it, except to tick the “yay, compute shaders in UE4” box
  • Rather than just grabbing the vector at the position in world space, interpolate between vectors using the wavelet turbulence technique to add high-detail flow (Wavelet turbulence). This is currently well beyond my understanding, so it would take me months, but you never know 😉 At the very least, it would be a cool thing to do when sampling the Vector Field data in a material
  • Honestly, they could probably have a bunch of engineers work on this tech for years and years, and we wouldn’t see an end to the cool things that could be done with it.

Things I would also like to see, but probably wouldn’t attempt myself:

  • The data structures don’t lend themselves to having a grid across a whole world (octrees instead of a flat array, maybe).
  • Vector Fields don’t currently work with anything but GPU particles (I think, unless I’m missing something). Would be nice to be able to use them with ribbon particles (for smoke, bullet trails, etc), make them work with dynamic objects, cloth, hair, confetti…
  • Extending them to have additional arbitrary channels added to them (pressure, temperature, etc) could be neat.
  • I could go on, but I’ll leave it there for now!

If nothing else, this has been a relatively painless exercise in implementing custom SceneObjects and Components in UE4 🙂

Contact!

As a little side distraction, I started helping my wife out with an application for her photography business here in Toronto:

http://photographybyangelamcconnell.com/

She has been using the Contact Sheet script in Photoshop.
Unfortunately, sometimes there seem to be gaps between the images, even when the Spacing parameters are set correctly.

Not a problem, I thought. I’ll just jump into the JavaScript and fix it… 12 000 lines! Egads, kill it with fire!!!!

I have a general rule about programming at home these days: if I can’t open and debug something immediately in Visual Studio(tm), and it’s more than a few pages of code, I take a long, hard think about it.
And generally go play Civilization 5 instead…

Anyway… I decided to knock together something very simple in C# for Ange, and here it is:

Warning: May contain photos of myself displaying extreme business pose! 🙂

It doesn’t have even 1/10th of the functionality of the Contact Sheet script, but it suits her purposes, and I can probably add anything she needs easily enough, so I’m pretty happy with it for the roughly 10 hours I’ve spent on it.

You can have whatever row/column layout you like, randomise or move the images around, and save the contact sheet pages out to .tif files.

I’m generally not in the mood for working on this sort of thing when I’m at home, but this time around it was quite fun!

Next blog post, back to Unreal 🙂
(And maybe Substance Painter. I’ve been having fun with Substance Painter…)

UE4 – The search for the rainbow paddle pop

“It is absolutely necessary, for the peace and safety of mankind, that some of earth’s dark, dead corners and unplumbed depths be left alone”
That quote is from Lovecraft, and was written with programmers in mind, I am convinced.

I have been playing around with UE4 for a few months now, and this weekend was my first proper delve into the code.

There are a few spots in my Half-Life scene where I want running water, and I am toying with the idea of making all my materials dynamically wet-able.
I’m probably going to try out render-to-texture and compute shader work, which is almost certainly going to take me forever (even though I’ve played around with both in my own projects).

Instead of diving into all that, I thought I’d start off small by adding a new Component that randomly sets the vertex colours of a mesh.
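
I won’t post the component here, but the guts of overriding a static mesh component’s vertex colours at runtime look roughly like this. This is a sketch based on the general mesh-paint-style override pattern; the helper name is made up, and the exact API (e.g. how RenderData is accessed, or GetStaticMesh() vs the older StaticMesh member) varies between engine versions:


#include "Components/StaticMeshComponent.h"
#include "StaticMeshResources.h"
#include "RenderingThread.h"

// Hypothetical helper: override LOD0 vertex colours on a component with random values.
// Assumes there is no existing vertex colour override on the component.
void ApplyRandomVertexColours(UStaticMeshComponent* Component)
{
    UStaticMesh* Mesh = Component->GetStaticMesh();
    if (!Mesh || !Mesh->RenderData)
    {
        return;
    }

    const int32 NumVerts = Mesh->RenderData->LODResources[0].GetNumVertices();

    // One random colour per vertex
    TArray<FColor> Colours;
    Colours.Reserve(NumVerts);
    for (int32 Index = 0; Index < NumVerts; ++Index)
    {
        Colours.Add(FColor::MakeRandomColor());
    }

    // Make sure the component has per-LOD override data, then fill LOD0
    Component->SetLODDataCount(1, Mesh->GetNumLODs());
    FStaticMeshComponentLODInfo& LODInfo = Component->LODData[0];
    LODInfo.OverrideVertexColors = new FColorVertexBuffer;
    LODInfo.OverrideVertexColors->InitFromColorArray(Colours);

    BeginInitResource(LODInfo.OverrideVertexColors);
    Component->MarkRenderStateDirty();
}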

By pure accident, this effect looks like a rainbow paddle pop. Makes me hungry.

Here it is: