Houdini looping particles

Looping fluid sim

For a while I’d been planning to look into making looping particle systems in Houdini, but hadn’t found a good excuse to jump in. I don’t really do much VFX related work at the best of times, something I need to do more of in the future 🙂

Anyway, I was recently chatting with Martin Kepplinger, who is working on Clans Of Reign, and he was looking to do a similar thing!

So begins the looping particle journey…

Technique overview

I won’t go into the fluid sim setup; it doesn’t really matter too much what it is.

There are a few conditions that make my approach work:

  • Particles must have a fixed lifetime
  • The first chosen frame of the simulation must have a lead-up number of frames >= the particle lifetime
  • The last frame of the loop must be >= the first frame number + the particle lifetime

I have some ideas about how to get rid of these requirements, but not sure if I’ll get back to that any time soon.

For the example in this post, I am keeping particle lifetime pretty low (0.8 to 1.0 seconds, using a @life attribute on the source particles, so a maximum lifetime of 24 frames).

The fluid sim I’m using is some lumpy fluid going into a bowl:

Full fluid sim

The simulation is 400 frames long (not all shown here), but that ended up being overkill; I could have got away with a much shorter sim.

Going back to my rules, with particles that live 24 frames I must choose a first frame >= 24 (for this example, I’ll choose 44).
The last frame needs to be after frame 68, so I’m choosing 90.
This makes a loop that is 46 frames long, here it is with no blending:

Looping particle system with no blending
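A quick Python sketch of those three conditions, using the numbers from this example (purely illustrative, not part of the Houdini setup):

```python
def check_loop(first_frame, last_frame, lifetime):
    """Validate the looping conditions and return the loop length in frames.

    The first frame needs a lead-up of at least one particle lifetime,
    and the loop itself must span at least one lifetime.
    """
    assert first_frame >= lifetime, "lead-up must be >= particle lifetime"
    assert last_frame >= first_frame + lifetime, "loop must span a full lifetime"
    return last_frame - first_frame

# 24 frame lifetime, loop from frame 44 to 90 -> a 46 frame loop
print(check_loop(44, 90, 24))  # 46
```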

The technique I’m going to use to improve the looping is somewhat like a crossfade.

For this loop from 44 –> 90, I’m modifying the particles in two ways:

  1. Deleting any particles that spawn after frame 66 (i.e. making sure all particles have died before frame 90)
  2. From frames 66 to 90, copying in all the particles that spawn between frames 20 –> 44.

This guarantees that all the particles that are alive on frame 89 match exactly with frame 44.

To illustrate, this gif shows the unedited loop on the left, and next to it on the right is the loop with no new particles spawned after frame 66 (particles go red on 66):

Particles stopped spawning after frame 66

Next up is the loop unedited on the left, and on the right are the new particles from frames 20 – 44 that I’m merging in from frame 66 onward:

Pre loop particles added to end of the unlooped sim

And now, the unedited loop next to the previous red, green and blue particles combined:

Pre spawn and end spawn particles combined

And finally, just the result of the looping by itself without the point colours:

Final looping particles

One thing that might be slightly confusing about the 2nd gif with the green pre-loop particles: I’m always spawning particles in the bowl itself to keep the fluid level up, in case you were wondering what that was about 🙂

Setting up the loop

This is the SOPs network that takes the points of a particle sim, and makes it loop:

Full SOPs network for looping particles

The first part of setting up the looping simulation is picking the start and end frames (first and last) of the loop.

I created a subnetwork (FrameFinder, near the top of the above screenshot) that has parameters for two frame numbers, and displays a preview of the two frames you are selecting, so you can find good candidates for looping:

FrameFinder subnetwork preview

The loop setup I chose for the Unity test at the top of the blog was actually a bit different to the range I chose for the breakdowns in the last section.
For Unity, I wanted the shortest looping segment I could, because I didn’t want it to be super expensive (memory wise), so I chose start and end frames 25 frames apart.

You can see that the frames don’t need to match exactly. The main thing I wanted to avoid was having a huge splash in the bowl at the bottom, or over the edge, because that would be hard to make look good in a short loop.

Node parameters

In the screenshot above, you can see that I have First Frame and Last Frame parameters on my FrameFinder network.

I don’t tend to make my blog posts very tutorial-y, but I thought I’d just take the time to mention that you can put parameters on any node in Houdini.

Example:

  • Drop a subnetwork node
  • Right click and select “Parameters and Channels –> Edit Parameter Interface…”:
    Parameter editing
  • Select a parameter type from the left panel, click the arrow to add the parameter, and set defaults and the parameter name on the right:
    Edit parameter interface dialog
  • Voila! Happy new parameter:
    floaty

You can then right click on the parameter and copy a reference to it, then use the reference in nodes in the subnetwork, etc.
In the edit parameter interface window, you can also create parameters “From Nodes”, which lets you pick an existing parameter of any node in the sub network to “bubble up” to the top, and it will hook up the references.

If this is new to you, I’d recommend looking up tutorials on Digital Assets (called “HDAs”, used to be called “OTLs”).

I do this all the time on subnetworks like the FrameFinder, but also to add new fields to a node that contain parts of an expression that would get too big, or that I want to reference from other nodes, etc.

On Wrangles (for example), I find myself adding colour ramp parameters a lot for use within the wrangle code.

FrameFinder

This subnetwork has two outputs: the particles with some Detail Attributes set up, and another output that is a preview of the two frames which I showed before, but here’s what that looks like again:

FrameFinder subnetwork preview

It’s the first time I’ve created multiple outputs from a subnetwork; usually I just dump a “Preview” checkbox as a parameter on the network, but I think I like this more, particularly if I end up turning the whole thing into an HDA.

Here is what the FrameFinder network looks like:

Contents of framefinder subnetwork

In this project, I’m using Detail attributes to pass around a lot of data, and that starts with the attribcreate_firstAndLastFrame node.

This node creates Detail attributes for each of the frames I chose in the subnet parameters (firstFrame and lastFrame):

Create details attributes for first and last loop frame

Right under the attribCreate node, I’m using two timeshift nodes: one that shifts the simulation to the first chosen frame, and one to the last frame, and then I merge them together (for the preview output). I’ve grouped the lastFrame particles so that I can transform them off to the right to show the two frames side by side, and I’m also giving all the particles in both frames random colours, so it’s a little easier to see their shape.

Time ranges and ages

Back in the main network, straight after the frameFinder subnetwork I have another subnetwork called timeRangesAndAges, which is where I set up all the other Detail attributes I need. Here is what is in that subnetwork:

Time Ranges and Ages subnetwork

The nodes in the network box on the right side are used to get the maximum age of any particle in the simulation.
In hindsight, this is rather redundant since I set the max age on the sim myself (you could replace all those nodes with an Attribute Create that makes a maxAge attribute with a value of 24), but I had planned to use it on simulations where particles are killed through interactions, etc 🙂

The first part of that is a solver that works out the max age of any particle in the simulation:

Solver that calculates maximum particle life

For the current frame of particles, it promotes Age from Point to Detail, using Maximum as the promotion method, giving me the maximum age for the current frame.

The solver merges in the previous frame, and then uses an attribute wrangle to get the maximum of the previous frame and current frame value:

@maxAge = max(@maxAge, @opinput1_maxAge);

Right after the solver, I’m timeshifting to the last frame, forcing the solver to run through the entire simulation so that the maxAge attribute now contains the maximum age of any particle in the simulation (spoiler: it’s 24 :P).

I then delete all the points, since all I care about is the detail attribute, and use a Stash node to cache that into the scene. With the points deleted, the node data is under 12 KB, so the stash is just a convenient way to stop maxAge recalculating all the time.
If I turn this whole thing into an HDA, I’ll have to rethink that (put “calculate max particle age” behind a button or something).

There are two more wrangle nodes in time ranges and ages.

One of them is a Point wrangle that converts the particle age from seconds into number of frames:

@frameAge = floor(@age/@TimeInc);

And the next is a Detail wrangle that sets up the rest of the detail attributes I use:

// Copy max age from input 1
int maxAge = i@opinput1_maxAge;

// A bunch of helper detail variables referenced later
int loopLength = i@lastFrame - i@firstFrame;
int loopedFrame = (int)(@Frame-1) % (loopLength+1);

i@remappedFrame = i@firstFrame + loopedFrame;

int distanceFromSwap = loopedFrame - loopLength;

i@blendToFrame = max(1, i@remappedFrame - loopLength);
i@numberOfFramesIntoBlend = max(0, maxAge + distanceFromSwap);

When I hit play in the viewport, I want to see just the looping segment of the simulation over and over, so that complicates all this a little.

With that in mind, there are 3 important attributes set up here.

@remappedFrame

If my start frame is 20, for example, and the end frame is 60, I want to see the 20-60 frame segment over and over, hence the wrapping loopedFrame temporary variable.

So if my viewport time slider is on 15, I really want to see frame 34, so the value of remappedFrame will be 34. It will always be a number between firstFrame and lastFrame.

@blendToFrame

This takes the remappedFrame, and shifts it back to before the start of the loop.
I only use this value when we hit the last 24 frames of the loop, but I’m clamping it at one just so the timeshift I use later doesn’t freak out.

This attribute will be used in the 2nd part of the technique: combining in pre-loop particles.

@numberOfFramesIntoBlend

When we are getting into the last 24 frames of the loop, this value increases from 0 to 24.
It’s used in the 1st part of the technique to stop spawning particles that have an age less than this value.
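To sanity check the arithmetic, here’s the detail wrangle mirrored as a plain Python sketch (same attribute names, using this post’s values: firstFrame 44, lastFrame 90, maxAge 24):

```python
def loop_attrs(frame, first_frame, last_frame, max_age):
    """Mirror of the detail wrangle: wrap the timeline into the loop range."""
    loop_length = last_frame - first_frame
    looped_frame = (frame - 1) % (loop_length + 1)

    remapped_frame = first_frame + looped_frame
    distance_from_swap = looped_frame - loop_length

    blend_to_frame = max(1, remapped_frame - loop_length)
    frames_into_blend = max(0, max_age + distance_from_swap)
    return remapped_frame, blend_to_frame, frames_into_blend

# Frame 1 maps to the loop start, with no blending yet
print(loop_attrs(1, 44, 90, 24))   # (44, 1, 0)
# The last frame of the loop is fully blended back to the pre-loop frames
print(loop_attrs(47, 44, 90, 24))  # (90, 44, 24)
```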

Timeshifts and recombining particles

Back to the main network:

Full SOPs network for looping particles

After the timeRangesAndAges node, the network splits: on the left side, I’m timeshifting the simulation using the remappedFrame detail attribute as the frame, with this expression:

detail("../timeRangesAndAges/", "remappedFrame", 0)

On the right side I’m timeshifting the simulation using the blendToFrame attribute as the frame, with this expression:

detail("../timeRangesAndAges/attribwrangle_calcTimeRanges", "blendToFrame", 0)

I’ve colour coded the nodes in the network with the same colours I’ve shown in the gifs in the technique section.

Since I’ve timeshifted the simulation, the detail attributes get time-shifted too.
But I don’t really want that, so I’m using Attribute Transfer to copy the two detail attributes I care about (remappedFrame and numberOfFramesIntoBlend) back onto the remapped sims.

After the attribute transfers, on both sides I’m creating a new point group called inBlendingFrames.

Group expression node for particles in blending range

detail(0, "numberOfFramesIntoBlend", 0) > 0

I probably didn’t need a point group for this, considering that on any given frame every particle is either in or out of this group together; it just made life easier with the Split node I use on the left.

On the left side, I do a split using inBlendingFrames.
When we’re not in the blending range, we don’t have to do anything to the particles, so that’s the blue colour node.

For both the red and green node sections, I start by deleting anything not in the inBlendingFrames group.

For the green particles (the pre-loop particles that we’re merging in), we’ve already got the right frame, due to the timeshift up the top.
If we’re on blending frame 2 of the blend (for example), we will still have particles that were spawned 24 frames ago, but we really only want particles that should spawn after the blend starts.
I use an attribute wrangle to clean the older particles up, using the frameAge attribute:

if (@frameAge > i@numberOfFramesIntoBlend)
{
	removepoint(0, @ptnum);
}

Here’s what that looks like for a frame about halfway through the blend.

Pre loop particles with older particles removed

For the red nodes section (where we take the original loop, and delete any particles that start spawning after the blend), I use an attribute wrangle to clean the new particles up:

if (@frameAge < i@numberOfFramesIntoBlend)
{
	removepoint(0, @ptnum);
}

Particle loop end with new particles deleted

So, I merge the red, blue and green particles all together, and we end up with the result I showed in the technique section!

Pre spawn and end spawn particles combined

Here it is again, uncolourised:

Final looping particles

Unity, Alembic and all that jazz

This post is already crazy long, so I’m just going to gloss over the Houdini –> Unity stuff.
If anyone is really interested in those details, I might do another post.

So now that I have a looping particle system, I can use a regular Particle Fluid Surface with default settings, and a polyreduce node to keep the complexity down:

A frame of the looped fluid sim remeshed

I exported the range of frames as an Alembic file, and imported it into Unity with the Alembic plugin.

I threw together a really quick MonoBehaviour to play the Alembic stream:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UTJ.Alembic;

[RequireComponent(typeof(AlembicStreamPlayer))]
public class PlayAlembic : MonoBehaviour
{
	public float playSpeed = 0.02f;

	AlembicStreamPlayer sPlayer;

	void Start ()
	{
		sPlayer = GetComponent<AlembicStreamPlayer>();
	}

	// Advance the stream every frame, wrapping at 1 second (the loop length)
	void Update () 
	{
		sPlayer.currentTime += playSpeed;
		sPlayer.currentTime = sPlayer.currentTime % 1.0f;
	}
}

As a last little thing, I packaged the network up a lot neater, and dumped it in a loop that ran the process on 16 different 46 frame range segments of the original simulation.
The idea being, why try to find good first and last frames when you can just go for a coffee, and come back and have 16 to choose from!
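This isn’t the actual batching network, but the range-picking logic amounts to something like this Python sketch (the function name and even spacing are my own):

```python
def candidate_segments(sim_length, segment_length, lifetime, count):
    """Evenly space `count` (first, last) frame pairs across the simulation,
    keeping the lead-up >= lifetime and the last frame inside the sim."""
    first_min = lifetime                      # earliest valid first frame
    span = (sim_length - segment_length) - first_min
    step = span // (count - 1)
    return [(first_min + i * step, first_min + i * step + segment_length)
            for i in range(count)]

# 16 segments of 46 frames from a 400 frame sim, with a 24 frame lifetime
segments = candidate_segments(400, 46, 24, 16)
print(segments[0], segments[-1])  # (24, 70) (354, 400)
```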

The loops with big splashes definitely don’t work very well (they look like they aren’t looping, because lots of particles die on the same frame), but there are some fun examples in here:

16 different looping segments

Chopped Squabs – Pt 4

Last post was about smoke, particles and weird looking ball things.
This post is about the audio for the video.

I wanted to make a sort of windy / underwater muffled sound, and managed to get somewhat close to what I wanted, just using Houdini CHOPs!

Pulse Noise

I decided not to create the audio in the same hip file, since it was already getting a bit busy, and because the data I want to use is already cached out to disk (geometry, pyro sim, etc).

The new hip file just has a small geometry network and a chops network.
Here’s the geometry network:

Pulse amount network

I wanted the audio to peak at times where the pulse was peaking on the tuber bulbs, so the first step was to import the bulbs geometry:

Tuber ball geometry

Next I’m promoting the pulse amount from points to detail, using “sum” as the promotion method (this adds up the pulse amounts of all points in the bulbs every frame).
I don’t care about the geometry any more, since the sum is a detail attribute, so I delete everything except a single point.

I had a bit of a hard time working out how to bring the values of a detail attribute into CHOPs as a channel. I think it should be simple to do with a Channel CHOP, but I didn’t succeed at the magic guessing game of syntax for making that work.

Anyway, since importing point positions is easy, I just used an attribute wrangle to copy the pulse sum into the position of my single point:

@P = detail(1, "pulse", 0); // input 1 is the second wrangle input

Audio synthesis fun

I had some idea of what I wanted, and how to make it work, from experiments in other software (Supercollider, PureData, etc).

I found that creating an interesting wind sound could be achieved through feeding noise into a lowpass filter.

I also tried this out in Unity, grabbing the first C# code I could find for filters:
https://stackoverflow.com/questions/8079526/lowpass-and-high-pass-filter-in-c-sharp

Here is a Unity MonoBehaviour I built from that code above:

using System;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class PlayerAudio : MonoBehaviour
{
    AudioSource _requiredAudioSource;
    public int _samplerate = 44100;

    private const float _resonance      = 1.0f;
    private float _phase                = 0.0f;
    private float _rampInAmount         = 0.0f;
    private float _frequency            = 800.0f;
    private System.Random _rand1        = new System.Random();
    private float[] _inputHistory       = new float[2];
    private float[] _outputHistory      = new float[3];

    void Start ()
    {
        _requiredAudioSource = GetComponent<AudioSource>();
        AudioClip testClip = AudioClip.Create("Wind", _samplerate * 2, 1, _samplerate, true, OnAudioRead);
        _requiredAudioSource.clip = testClip;
        _requiredAudioSource.loop = true;
        _requiredAudioSource.Play();
    }

    void Update ()
    {
        _rampInAmount = Mathf.Min(_rampInAmount + (Time.deltaTime/2.0f), 1.0f);
    }

    void OnAudioRead(float[] data)
    {
        float c, a1, a2, a3, b1, b2;

        for (int i = 0; i < data.Length; i++)
        {
            // Create a random amplitude value
            double currentRand  = _rand1.NextDouble();
            float amplitudeRand = (float)(currentRand * 2.0f - 1.0f) * _rampInAmount;
            amplitudeRand /= 2.0f;

            // Phase over a few seconds, phase goes from 0 - 2PI, so wrap the value
            float randRange = Mathf.Lerp(-1.0f, 1.0f, (float)currentRand);
            _phase += 1.0f / _samplerate;
            _phase += randRange * 200.0f / _samplerate;
            _phase %= (Mathf.PI * 2.0f);

            float interpolator = (Mathf.Sin(_phase) + 1.0f) / 2.0f;
            _frequency = Mathf.Lerp(100, 200, interpolator);

            // Low pass filter
            c = 1.0f / (float)Math.Tan(Math.PI * _frequency / _samplerate);
            a1 = 1.0f / (1.0f + _resonance * c + c * c);
            a2 = 2f * a1;
            a3 = a1;
            b1 = 2.0f * (1.0f - c * c) * a1;
            b2 = (1.0f - _resonance * c + c * c) * a1;

            float newOutput = a1 * amplitudeRand + a2 * this._inputHistory[0] + a3 * this._inputHistory[1] - b1 * this._outputHistory[0] - b2 * this._outputHistory[1];

            this._inputHistory[1] = this._inputHistory[0];
            this._inputHistory[0] = amplitudeRand;

            this._outputHistory[2] = this._outputHistory[1];
            this._outputHistory[1] = this._outputHistory[0];
            this._outputHistory[0] = newOutput;

            data[i] = newOutput;
        }
    }
}

It’s pretty gross, doesn’t work in WebGL, and is probably very expensive.
But if you’re a Unity user it might be fun to throw in a project to check it out 🙂
I take no responsibility for exploded speakers / headphones…

With that running, you can mess with the stereo pan and pitch on the Audio Source, for countless hours of entertainment.

Back to Chops

When testing out audio in a hip file, I could only get it to play using the “Scrub” tab in the audio panel.
You need to point it to a chopnet, and make sure that the chopnet is exporting at least one channel:

Audio panel settings

You should also make sure that the values you output are clamped between -1 and 1, otherwise you’ll get audio popping nastiness.

The CHOP network I’m using to generate the audio looks like this:

Audio generation chopnet

What I’d set up in Unity was a channel of noise run through a High Pass filter with a varying Cutoff amount.
I tried to do exactly the same thing in Houdini: I created a Noise CHOP and a Pass Filter CHOP, and in the Cutoff field I sample the value of the noise chop using an expression:

chop("chan1")

I was hoping that the whole waveform would be modified so I could visualize the results in Motion FX View, but what I found instead is that the waveform was modified by the current frame’s noise value in Motion FX.
With my limited knowledge of CHOPs, it’s hard to describe what I mean by that, so here’s a gif of scrubbing frames:

Scrubbing through frames with high pass

It’s likely that it would still have given me the result I wanted, but having the wave form jump around like that, and not being able to properly visualize it was pretty annoying.

So, instead of creating noise to vary the high pass cutoff, I created a bunch of noise channels and gave each of them their own high pass cutoff, then I blend those together (more on that part later).

In my Pass Filter, I created two new parameters (using Edit Parameter Interface) that I reference from a few different nodes:

Custom params on high pass filter

Through trial and error, I found that low cutoff values from 0.3 to 0.6 gave me what I wanted, so I use an expression to filter each of the channels with a cutoff in that range, based on the channel ID:

ch("baseCutoff") + (fit($C/(ch("passBands")-1), 0, 1, 0.2, 0.5))

The baseCutoff is sort of redundant; it could just be built into the “fit” range, but I was previously using it in other nodes too, and never got around to removing it 🙂
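To make that expression concrete, here’s a Python version of it (fit() re-implemented to match Houdini’s clamp-and-remap behaviour; the baseCutoff value of 0.1 is my assumption, chosen so the cutoffs land in the 0.3 to 0.6 range mentioned above):

```python
def fit(value, old_min, old_max, new_min, new_max):
    """Houdini-style fit(): clamp into the old range, then remap to the new."""
    t = (min(max(value, old_min), old_max) - old_min) / (old_max - old_min)
    return new_min + t * (new_max - new_min)

def channel_cutoff(channel, pass_bands, base_cutoff=0.1):
    # ch("baseCutoff") + fit($C / (ch("passBands") - 1), 0, 1, 0.2, 0.5)
    return base_cutoff + fit(channel / (pass_bands - 1), 0, 1, 0.2, 0.5)

print([round(channel_cutoff(c, 5), 3) for c in range(5)])
# [0.3, 0.375, 0.45, 0.525, 0.6]
```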

I’m using the “passBands” parameter in that expression, but I also use it in the Channel creation in the noise CHOP:

Noise channel creation from pulse bands param

It probably would have been more sensible just to hard code the number of channels here, and then count how many channels I have further downstream, but this works fine 🙂

In the Transform tab of the noise, I’m using $C in the X component of the Translate, to offset the noise patterns, so each channel is a bit different. In hindsight, using $C in the “seed” gives a better result.

So now I have some channels of filtered noise!
I’ll keep passBands at 5 for the rest of this post, to keep it easy to visualize:

Filter noise, 5 bands

Combining noise and pulse

The top right part of the CHOP network is importing the pulse value that I set up earlier.

Geometry import to chops

I’m importing a single point, and the pulse value was copied into Position, as mentioned earlier.
I’m deleting the Y and Z channels, since they are the same as X, normalizing the data using Limit, moving the data into the 0-1 range, and then lagging and filtering to soften out the data a bit.

Here is what the pulse looks like before and after the smoothing:

Pulse attribute channel
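The Lag / Filter smoothing is roughly a one-pole lowpass; this little Python sketch shows the idea (not the exact CHOP implementation):

```python
def smooth(samples, amount=0.5):
    """One-pole lag: each output eases toward the input by `amount` per sample."""
    out, current = [], samples[0]
    for s in samples:
        current += (s - current) * amount
        out.append(current)
    return out

# A step input gets softened into an exponential ease-in
print(smooth([0.0, 1.0, 1.0, 1.0]))  # [0.0, 0.5, 0.75, 0.875]
```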

So this pulse channel is tx0, and I merge that in with all the noise channels (chan1-x).

I want to use the value of this pulse channel to blend in the noise channels I previously created.
So, for example, if I have 4 bands: between pulse values 0 and 0.25 I want to use band 1 noise, between 0.25 and 0.5 band 2, between 0.5 and 0.75 band 3, and so on.

I didn’t want a hard step between bands, so I’m overlapping them, and in the end I found that having quite a wide blend worked well (I’m always getting a little of all noise channels, with one channel dominant).

This all happens in the vopchop_combineNoiseAndPulse:

vop: Combine Noise And Pulse

The left side of the network is working out the band midpoint ((1 / number of channels) * 0.5 * current channel number).

The middle section gets the current pulse value, and finds how far away that value is from the band midpoint for each channel.

If I just output the value of that fit node, and reduce the overlap, hopefully you can see what this is doing a little more obviously:

Motion view of channel blending

As the red line increases, it goes through the midpoints of each noise band, and the corresponding noise channel increases in intensity.

Since I’ve lowered the overlap for this image, there are actually gaps between bands, so there are certain values of the pulse that will result in no noise at all, but that’s just to make the image clearer.
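Here’s my own reconstruction of that band-weighting in Python (the midpoint spacing and overlap parameter are assumptions, not the exact VOP network):

```python
def band_weights(pulse, num_bands, overlap):
    """Weight each noise band by how close `pulse` sits to the band midpoint.
    Weight is 1 at the midpoint and falls to 0 at `overlap` distance away."""
    width = 1.0 / num_bands
    weights = []
    for band in range(num_bands):
        midpoint = width * (band + 0.5)
        distance = abs(pulse - midpoint)
        weights.append(max(0.0, 1.0 - distance / overlap))
    return weights

# With a narrow overlap there are gaps: a pulse of 0.5 sits between bands 2 and 3
print([round(w, 2) for w in band_weights(0.5, 4, 0.15)])  # [0.0, 0.17, 0.17, 0.0]
```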

The rest of the network is multiplying the pulse value with the noise channel value, and I end up with this:

Multiplied noise and pulse value

With that done, I delete the pulse channel, and I add all the noise channels together.

The last thing I’m doing is outputting the noise in stereo, and for that I’m just arbitrarily panning the sound between two channels using noise.

I create two new noise channels for the stereo panning, and then use a VOP to flip the second channel:

Noise channel flip

So I end up with this:

Stereo balance waveform

I multiply this back on the original noise:

StereoNoise

There are a few other little adjustments in there, like making sure the amplitude is between -1 and 1, etc.

Also, I ramp in and out the audio over the first and last 20 frames:

RampInAndOut

And that’s it!

Hopefully you’ve enjoyed this series breaking down the squab video and my adventures in CHOPs. I can’t imagine I’ll use it for audio again, but it was definitely a fun experiment.

Today is Houdini 17 release day, so I can’t wait to dig in, and play around with all the new interesting features, perhaps there will be some vellum sims for future blog posts 🙂

Crumbling tiger, hidden canyon

In the Shangri-La world of Far Cry 4, lots of things crumble and dissolve into powder.

For example, my buddy Tiges here:

TigesAnim.gif

Or, as Max Scoville hilariously put it, the tiger turns into cocaine… NSFW I guess… That gave me a huge chuckle when I first saw it.

I was not responsible for the lovely powder effects in the game, we had a crack team (see what I did there) of Tricia Penman, Craig Alguire and John Lee for all that fancy stuff.
The VFX were such a huge part of the visuals of Shangri-La, and the team did an incredible job.

What I needed to work out was a decent enough way of getting the tiger body to dissolve away.
Nothing too fancy, since it happens very quickly for the most part.

Prototype

I threw together a quick prototype in Unity3D using some of the included library content from Modo:

unitydude.gif

I’m just using a painted greyscale mask as the alpha, then thresholding through it (like using the Threshold adjustment layer in Photoshop, basically).

There’s a falloff around the edge of the alpha, and I’m applying a scrolling tiled firey texture in that area.

I won’t go into it too much, as it’s a technique as old as time itself, and there are lots of great tutorials out there on how to set it up in Unity / UE4, etc.
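For completeness, the per-pixel logic of that threshold technique can be sketched like this (names are mine; a real shader would do the same per fragment):

```python
def dissolve(mask_value, threshold, edge_width):
    """Alpha-threshold dissolve for one pixel of the greyscale mask:
    returns (visible, burn), where burn is how strongly the scrolling
    firey texture should show at this pixel."""
    if mask_value < threshold:
        return False, 0.0
    # 1.0 right at the cut edge, fading to 0.0 at edge_width above the threshold
    burn = max(0.0, 1.0 - (mask_value - threshold) / edge_width)
    return True, burn

print(dissolve(0.2, 0.5, 0.1))   # cut away
print(dissolve(0.55, 0.5, 0.1))  # visible, inside the burning edge
print(dissolve(0.9, 0.5, 0.1))   # visible, no burn
```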

As it turns out, there was already some poster burning tech that I could use, and it worked almost exactly the same way, so I didn’t need to do the shader work in the end.

You mentioned canyons?

I actually used World Machine to create the detail in the maps.
In the end, I needed to make about 30 greyscale maps for the dissolving effects on various assets.

Workflow

I’ll use my Vortigaunt fellow as an example, since I’ve been slack at using him for anything else or finishing him (typical!).

First up, for most of the assets, I painted a very rough greyscale mask in Modo:

VortigauntRoughMask

Then I take that into World Machine, use it as a height map, and run erosion on it:

WMVort

I then take the flow map out of World Machine and back into Photoshop, overlay it on top of the original rough greyscale mask, and add a bit of noise.
With a quick setup in UE4, I have something like this:

Vortigone

Sure, it doesn’t look amazing, but for ten minutes’ work it is what it is 🙂

You could spend more time painting the mask on some of them (which I did for the more important ones), but in the end you only see it for a few dozen frames, so many of them I left exactly as they were.

Better-er, more automated, etc

Now that I have the Houdini bug, I would probably generate the rough mask in Houdini rather than painting it.

I.e:

  • Set the colour for the vertices I want the fade to start at
  • Use a solver to spread the values out from these vertices each frame (or do it in texture space, maybe).
  • Give the spread some variation based off the roughness and normals of the surface (maybe).
  • Maybe do the “erosion” stuff in Houdini as well, since it doesn’t really need to be erosion, just some arbitrary added stringy detail.

Again, though, not worth spending too much time on it for such a simple effect.
A better thing to explore would be trying to fill the interior of the objects with some sort of volumetric effect, or some such ๐Ÿ™‚
(Which is usually where I’d go talk to a graphics programmer)

Other Examples

I ended up doing this for almost all of the characters, with the exception of a few specific ones (SPOILERS), like the giant chicken that you fight.
That one, and a few others, were handled by Nils Meyer and Steve Fabok, from memory.

So aside from those, and my mate Tiges up there, here’s a few other examples.

Bell Chains

BellAnim

Hard to see, but the chain links fade out one by one, starting from the bottom.

This was tricky, because the particular material we were using didn’t support 2 UV channels, and the chain links are all mapped to the same texture space (which makes total sense).

Luckily, the material *did* support changing UV tiling for the Mask vs the other textures.

So we could stack all of the UV shells of the links on top of each other in UV space, like so:

ChainUVs

So then the mask fades from 0 –> 1 in V.
In the material, if we had 15 links, then we need to tile V 15 times for Diffuse, Normal, Roughness, etc., leaving the mask texture tiled once.
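The packing that the Maxscript did boils down to something like this (a hypothetical Python sketch, not the actual script):

```python
def pack_link_uvs(uvs, link_index, num_links):
    """Squash one link's UV shell (normally 0-1 in V) into its own
    1/num_links slice of V, stacked from the bottom up."""
    slice_height = 1.0 / num_links
    return [(u, v * slice_height + link_index * slice_height) for u, v in uvs]

# Link 3 of 15 lands in the V range 0.2 to ~0.267
packed = pack_link_uvs([(0.0, 0.0), (1.0, 1.0)], 3, 15)
print(packed)
```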

Edwin Chan was working on the assets for these, and I could have just made him manually set that up in Max, but it would have been a bit of a pain, and I’d already asked him to do all sorts of annoying setup on the prayer wheels…

There were 3-4 different bell chain setups, and each of those had multiple LODs for each platform, so I wrote a Maxscript that would pack all the UVs into the correct range.

Quite a lot of work for such a quick effect, but originally the timing was a lot slower, so at that point it was worth it 🙂

Bow gems

BowAnim

Although I never really got this as on-concept as I would have liked, I’m pretty happy with how these turned out.

Amusingly, the emissive material didn’t support the alpha thresholding effect.

So there are two layers of mesh: the glowy one and the non-glowy one.
It’s actually the non-glowy layer that fades out!
The glowy stuff is always there, slightly smaller, hidden below the surface.

Dodgy, but got the job done 😛