Chopped Squabs – Pt 4

 

Last post was about smoke, particles and weird looking ball things.
This post is about the audio for the video.

I wanted to make a sort of windy / underwater muffled sound, and managed to get somewhat close to what I wanted, just using Houdini CHOPs!

Pulse Noise

I decided not to create the audio in the same hip file, since it was already getting a bit busy, and because the data I want to use is already cached out to disk (geometry, pyro sim, etc).

The new hip file just has a small geometry network and a chops network.
Here’s the geometry network:

Pulse amount network

I wanted the audio to peak at times where the pulse was peaking on the tuber bulbs, so the first step was to import the bulbs geometry:

Tuber ball geometry

Next I’m promoting the pulse amount from points to detail, using “sum” as the promotion method (this adds up all the pulse amounts for all points in the bulbs every frame).
I don’t care about the geometry any more, because the sum is a detail attribute, so I delete everything except for a single point.
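
For reference, the same summation could also be done with a single detail-mode wrangle; this is just a sketch of the idea rather than what’s in my file:

// Detail wrangle sketch: add up the point attribute "pulse" into one detail value
float total = 0;
for (int pt = 0; pt < npoints(0); pt++)
    total += point(0, "pulse", pt);
f@pulse = total;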

I had a bit of a hard time working out how to bring the values of a detail attribute into CHOPs as a channel. I think it should be simple to do with a Channel CHOP, but I didn’t succeed at the magic guessing game of syntax for making that work.

Anyway, since importing point positions is easy, I just used an attribute wrangle to copy the pulse sum into the position of my single point:

@P = detail(@OpInput1, "pulse");

Audio synthesis fun

I had some idea of what I wanted, and how to make it work, from experiments in other software (Supercollider, PureData, etc).

I found that an interesting wind sound could be achieved by feeding noise into a lowpass filter and varying the cutoff over time.

I also tried this out in Unity, grabbing the first C# code I could find for filters:
https://stackoverflow.com/questions/8079526/lowpass-and-high-pass-filter-in-c-sharp

Here is a Unity MonoBehaviour I built from that code above:

using System;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class PlayerAudio : MonoBehaviour
{
    AudioSource _requiredAudioSource;
    public int _samplerate = 44100;

    private const float _resonance      = 1.0f;
    private float _phase                = 0.0f;
    private float _rampInAmount         = 0.0f;
    private float _frequency            = 800.0f;
    private System.Random _rand1        = new System.Random();
    private float[] _inputHistory       = new float[2];
    private float[] _outputHistory      = new float[3];

    void Start ()
    {
        _requiredAudioSource = GetComponent<AudioSource>();
        AudioClip testClip = AudioClip.Create("Wind", _samplerate * 2, 1, _samplerate, true, OnAudioRead);
        _requiredAudioSource.clip = testClip;
        _requiredAudioSource.loop = true;
        _requiredAudioSource.Play();
    }

    void Update ()
    {
        _rampInAmount = Mathf.Min(_rampInAmount + (Time.deltaTime/2.0f), 1.0f);
    }

    void OnAudioRead(float[] data)
    {
        float c, a1, a2, a3, b1, b2;

        for (int i = 0; i < data.Length; i++)
        {
            // Create a random amplitude value
            double currentRand  = _rand1.NextDouble();
            float amplitudeRand = (float)(currentRand * 2.0f - 1.0f) * _rampInAmount;
            amplitudeRand /= 2.0f;

            // Phase over a few seconds, phase goes from 0 - 2PI, so wrap the value
            float randRange = Mathf.Lerp(-1.0f, 1.0f, (float)currentRand);
            _phase += 1.0f / _samplerate;
            _phase += randRange * 200.0f / _samplerate;
            _phase %= (Mathf.PI * 2.0f);

            float interpolator = (Mathf.Sin(_phase) + 1.0f) / 2.0f;
            _frequency = Mathf.Lerp(100, 200, interpolator);

            // Low pass filter
            c = 1.0f / (float)Math.Tan(Math.PI * _frequency / _samplerate);
            a1 = 1.0f / (1.0f + _resonance * c + c * c);
            a2 = 2f * a1;
            a3 = a1;
            b1 = 2.0f * (1.0f - c * c) * a1;
            b2 = (1.0f - _resonance * c + c * c) * a1;

            float newOutput = a1 * amplitudeRand + a2 * this._inputHistory[0] + a3 * this._inputHistory[1] - b1 * this._outputHistory[0] - b2 * this._outputHistory[1];

            this._inputHistory[1] = this._inputHistory[0];
            this._inputHistory[0] = amplitudeRand;

            this._outputHistory[2] = this._outputHistory[1];
            this._outputHistory[1] = this._outputHistory[0];
            this._outputHistory[0] = newOutput;

            data[i] = newOutput;
        }
    }
}

It’s pretty gross, doesn’t work in WebGL, and is probably very expensive.
But if you’re a Unity user it might be fun to throw it into a project to check it out 🙂
I take no responsibility for exploded speakers / headphones…

With that running, you can mess with the stereo pan and pitch on the Audio Source, for countless hours of entertainment.

Back to CHOPs

When testing out audio in a hip file, I could only get it to play by using the “Scrub” tab in the Audio Panel.
You need to point it to a chopnet, and make sure that the chopnet is exporting at least one channel:

Audio panel settings

You should also make sure that the values you output are clamped between -1 and 1, otherwise you’ll get audio popping nastiness.

The CHOP network I’m using to generate the audio looks like this:

Audio generation chopnet

What I’d set up in Unity was a channel of noise run through a filter with a varying cutoff amount.
I tried to do the same thing in Houdini: I created a Noise CHOP and a Pass Filter CHOP, and in the Cutoff field sampled the value of the noise CHOP using an expression:

chop("chan1")

I was hoping that the whole waveform would be modified so I could visualize the results in the Motion FX View, but what I found instead is that the whole waveform gets modified by the current frame’s noise value.
With my limited knowledge of CHOPs, it’s hard to describe what I mean by that, so here’s a gif of scrubbing frames:

Scrubbing through frames with high pass

It’s likely that it would still have given me the result I wanted, but having the waveform jump around like that, and not being able to visualize it properly, was pretty annoying.

So, instead of creating noise to vary the high pass cutoff, I created a bunch of noise channels, gave each of them their own high pass cutoff, and then blended those together (more on that part later).

In my Pass Filter, I created two new parameters (using Edit Parameter Interface) that I reference from a few different nodes:

Custom params on high pass filter

Through trial and error, I found that low cutoff values from 0.3 to 0.6 gave me what I wanted, so I use an expression to filter each of the channels with a cutoff in that range, based on the channel ID:

ch("baseCutoff") + (fit($C/(ch("passBands")-1), 0, 1, 0.2, 0.5))
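
For example, with passBands at 5 and a baseCutoff of 0.1 (an assumption that lines up with the 0.3 to 0.6 range above), channel $C = 0 evaluates to 0.1 + fit(0, 0, 1, 0.2, 0.5) = 0.3, and channel $C = 4 evaluates to 0.1 + 0.5 = 0.6, with the other channels spread evenly in between.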

The baseCutoff is sort of redundant, since it could just be built into the “fit” range, but I was previously using it in other nodes too, and never got around to removing it 🙂

I’m using the “passBands” parameter in that expression, but I also use it in the Channel creation in the noise CHOP:

Noise channel creation from pulse bands param

It probably would have been more sensible to hard-code the number of channels here and then count how many channels I have further downstream, but this works fine 🙂

In the Transform tab of the noise, I’m using $C in the X component of the Translate, to offset the noise patterns, so each channel is a bit different. In hindsight, using $C in the “seed” gives a better result.

So now I have some channels of filtered noise!
I’ll keep passBands at 5 for the rest of this post, to keep it easy to visualize:

Filter noise, 5 bands

Combining noise and pulse

The top right part of the CHOP network is importing the pulse value that I set up earlier.

Geometry import to chops

I’m importing a single point, and the pulse value was copied into Position, as mentioned earlier.
I’m deleting the Y and Z channels (since they are the same as X), normalizing the data with a Limit CHOP, moving it into the 0-1 range, and then lagging and filtering to soften the data a bit.

Here is what the pulse looks like before and after the smoothing:

Pulse attribute channel

So this pulse channel is tx0, and I merge that in with all the noise channels (chan1-x).

I want to use the value of this pulse channel to blend in the noise channels I previously created.
So, for example, if I have 4 bands: between 0 and 0.25 pulse value I want to use band 1 noise, between 0.25 and 0.5 I’ll use band 2, 0.5 and 0.75 use band 3, etc.

I didn’t want a hard step between bands, so I’m overlapping them, and in the end I found that having quite a wide blend worked well (I’m always getting a little of all noise channels, with one channel dominant).

This all happens in the vopchop_combineNoiseAndPulse:

vop: Combine Noise And Pulse

The left side of the network is working out the band midpoint ((1 / number of channels) * 0.5 * current channel number).

The middle section gets the current pulse value, and finds how far away that value is from the band midpoint for each channel.
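
The actual VOP is just a handful of standard nodes, but the rough per-channel idea, written out as VEX (the midpoint and overlap maths here is a simplification rather than a copy of the VOP), is something like this:

// Weight for one noise channel: full weight at that band's midpoint,
// fading out as the pulse value moves away from it
float band_weight(float pulse; int band, numBands; float overlap)
{
    float mid  = (band + 0.5) / float(numBands);   // centre of this band in the 0-1 range
    float dist = abs(pulse - mid);                 // how far the pulse is from that centre
    return fit(dist, 0.0, 0.5 / numBands + overlap, 1.0, 0.0);
}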

If I just output the value of that fit node, and reduce the overlap, hopefully you can see a little more clearly what this is doing:

Motion view of channel blending

As the red line increases, it goes through the midpoints of each noise band, and the corresponding noise channel increases in intensity.

Since I’ve lowered the overlap for this image, there are actually gaps between bands, so there are certain values of the pulse that will result in no noise at all, but that’s just to make the image clearer.

The rest of the network is multiplying the pulse value with the noise channel value, and I end up with this:

Multiplied noise and pulse value

With that done, I delete the pulse channel, and I add all the noise channels together.

The last thing I’m doing is outputting the noise in stereo, and for that I’m just arbitrarily panning the sound between two channels using noise.

I create two new noise channels for the stereo balance, and then use a vop to flip the second channel:

Noise channel flip

So I end up with this:

Stereo balance waveform

I multiply this back on the original noise:

StereoNoise
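
Boiled down, the panning is doing something like this (the attribute names are made up, it’s just the gist):

// Pan the summed wind noise between left and right using a slow noise channel (0-1)
float pan   = f@panNoise;                 // slowly varying noise driving the balance
float left  = f@noiseSum * pan;           // weight the summed noise to the left...
float right = f@noiseSum * (1.0 - pan);   // ...and the remainder to the right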

There are a few other little adjustments in there, like making sure the amplitude is between -1 and 1, etc.

Also, I ramp the audio in and out over the first and last 20 frames:

RampInAndOut
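
In wrangle terms, the envelope is just two fits taken together, something like this (with the 240-frame length standing in for the actual shot length):

// 20-frame fade in and fade out, assuming a 240 frame shot
float fadeIn  = fit(@Frame, 1, 21, 0, 1);
float fadeOut = fit(@Frame, 220, 240, 1, 0);
f@envelope = min(fadeIn, fadeOut);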

And that’s it!

Hopefully you’ve enjoyed this series breaking down the squab video and my adventures in CHOPs. I can’t imagine I’ll use it for audio again, but it was definitely a fun experiment.

Today is Houdini 17 release day, so I can’t wait to dig in and play around with all the interesting new features. Perhaps there will be some vellum sims for future blog posts 🙂


Chopped Squabs – Pt 3

 

Last post, I talked about the tubers, their animation and creation of the pulse attributes.
This post will be about the bulbs on the end of the tubers, the pyro and particles setup.

Great balls of pyro

Since you don’t really get a close look at the tuber bulbs (the lighting and pyro cover them quite a bit), all of the bulbs have the same geometry.

Here is what the network looks like:

Bulb geometry network

This is a rather silly thing to do procedurally; I should have just sculpted something for it. Because problem solving in Houdini is fun, I tend to do everything in my home projects procedurally, but there are definitely times where that a) is overkill and b) ends up with a worse result.

Anyway, I’m starting out with a sphere:

  • Jittering the points a bit
  • Extruding the faces (to inset them)
  • Shrinking the extruded faces using the Primitive SOP on the group created by the extrude
  • Extruding again
  • Subdividing

That gives me this:

Animation of bulb geometry from primitives

I’m then boolean-ing a torus on one side, which will be where the tuber connects to the bulb.
I didn’t really need to Boolean that, since just merging the two objects and then converting to VDB and back gives me pretty much the same result, which is this:

Bulb vdb geo

Then I remesh it, and apply some noise in an Attribute VOP using Displace Along Normal nodes.

Before the Attribute VOP, I’ve set up a “pores” attribute, which is set to 1 on the extruded inside faces, and 0 everywhere else.
The VOP adds some breakup to the pores attribute, which is eventually used for emissive glow:

Attribute VOP for displacement and emissive

There was a very scientific process here of “add noise nodes until you get sick of looking at it”.

Here is the result:

Bulb with displacement

Looks ok if you don’t put much light on it and you cover it with smoke 🙂

Spores / smoke

I wanted the smoke to look like a cloud of spores, sort of like a fungus, with some larger spores created as particles that glow a bit.

The source of the smoke is geometry that has a non-zero “pulse” attribute (that I set up in earlier posts), but I also add a little bit of density from the pores on the tuber bulbs all the time.

So on any given frame I’m converting the following geometry to a smoke source:

Pores and tuber geo for smoke source

The resulting smoke source looks like this:

Smoke source animation

I also copy some metaballs onto points scattered on this geometry, which I’m using in a Magnet Force to push the smoke around:

Magnet force metaballs

I’m feeding that metaball geometry into the magnet force in a SOP Geometry node:

DOP network for particles and pyro

I’m not going to break down the dynamics network much; it is mostly set up with shelf tools and minimal changes.

With the forces and collisions with tuber geometry, the smoke simulation looks like this:

Smoke viewport flipbook

Particles

The geometry for the tubers and bulbs is also used to create spawn points for emissive particles:

Particle spawn network

Kind of messy, but on the right I’m capping the tuber sections that I showed before, remeshing them (by converting to VDB, smoothing, then converting back to polygons), scattering some points, and adding velocity and some identifying attributes.

On the left side, I’m scattering points on the bulbs when the density (pulse value) is high.

In both cases, I’m setting particle velocity to a constant value * @N, so they move away from the surface, and faster from the bulbs than the tubers.
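
Something along these lines, where the speeds and the identifying attribute name are placeholders rather than my actual values:

// Push particles out along the normal, faster if they came from a bulb
float speed = (i@fromBulb == 1) ? 2.0 : 0.5;   // "fromBulb" is a made-up name for the identifying attribute
v@v = @N * speed;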

There are quite a lot of them, but in the final render the emissive falls off pretty quickly, so you don’t see them all that much:

Animation of generated particles

That’s it for this post!

Still to come: Breaking down creating audio in CHOPs.

 

 

Chopped Squabs – Pt 2

 

Last post, I talked about the motion of the squab, and the points created for the tubers sprouting from its back.

This post will be about creating the tuber geometry and motion.

A series of tubes

Here’s the tuber network:

Houdini network for tuber creation

The Object Merge at the top is bringing in the tuber start points, which I talked a bit about last post.

I’m pushing those points along their normals by some random amount, to find the end points.
I could have done this with a Peak SOP, but wrangles are far more fun!

float length = fit(rand(@ptnum), 0, 1, 1.3, 2.2);

@P += @N*length;

Sometimes I just find it a lot quicker to do things in VEX, especially when I want to do value randomization within a range like this.

The points chopnet, near the top, is straight out of the “Introduction To CHOPs” by Ari Danesh. I highly recommend watching that series for more detailed information on CHOPs.

The chopnet takes the end point positions, adds noise, uses spring and lag CHOPs to delay the motion of the end points so they feel like they are dragging behind a bit:

The CHOPnet for tuber end points

After that, I create a polygon between each end and start point using an Add sop:

Add sop to create tuber lines

I’m then using Refine to add a bunch of in-between points.

In the middle right of the network, there are two network boxes that are used to add more lag to the centre of each wire, and also to create a pulse that animates down the wires.

When the pulse gets to the bulb at the end of the tuber, it emits a burst of particles and smoke, but more on that in a later post.

Here is the chopnet that handles the pulse, the lag on the centre of the wires, and a slightly modified version of the pulse value that is transferred onto the bulbs:

Chopnet that creates wire centre movement and pulse attributes

The wire lag is pretty simple, I’m just adding some more lag to the movement (which I later mask out towards both ends of the wire so it just affects the centre).

The pulse source is a little more interesting.
Before this network, I’ve already created the pulse attribute, initialized to 0. I’ve also created a “class” attribute using connectivity, giving each wire primitive its own id.

When I import the geometry, I’m using the class attribute in the field “organize by Attribute”:

Class used to create a channel per wire

This creates a channel for each wire.

I also have a Wave CHOP with the type set to “ramp”, and in this CHOP I’m referencing the number of channels created by the geometry import, and then using that channel id to vary the period and phase.

Here are the wave settings:

Pulse source chop Wave waveform settings

And here is where I set up the Channels to reference the Geometry CHOP:

Channel settings for Wave Chop

To visualize how this is creating periodic pulses, it’s easier if I add a 0-1 clamp on the channels and remove a bunch of channels:

Wave CHOP clamped 0-1, most channels removed

So hopefully this shows that each wire gets a pulse value that is usually 0, but at some point in the animation might slowly ramp up to 1, and then immediately drop to 0.
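
If it helps, a rough wrangle equivalent of what the Wave CHOP produces per wire would be something like this (the numbers are stand-ins; the real settings live in the Wave CHOP shown above):

// Per-wire pulse: a slow sawtooth that sits at 0 for most of its cycle,
// ramps up to 1 near the end, then wraps straight back to 0
float period = 240 + rand(i@class) * 120;        // cycle length in frames, different per wire
float phase  = rand(i@class + 137) * period;     // random start offset per wire
float ramp   = frac((@Frame + phase) / period);  // 0-1 sawtooth over the cycle
f@pulse = fit(ramp, 0.8, 1.0, 0.0, 1.0);         // 0 until the last stretch, then ramp to 1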

To demonstrate what we have so far, I’ve put a polywire on the wires to thicken them out, and coloured the pulse red so it’s easier to see:

NoodleFight

It’s also sped up because I’m dropping most of the frames out of the gif, but you get the idea 🙂

The “Pulse Source Bulbs” section of the chopnet is a copy of the pulse, but with some lag applied (so the pulse lasts longer), and multiplied up a bit.

Tuber geometry

The remaining part of the network is for creating the tuber geometry; here is that section zoomed in from the screenshot earlier in this post:

Tuber geometry creation network section

I’m creating the geometry around a time shifted version of the wires (back to the first frame), and then using a lattice to deform the geometry each frame to the animated wires.

Tuber cross-section

By triangulating a bunch of scattered points, dividing them with “Compute Dual” enabled, converting them to NURBS and closing them, I get a bunch of circle-ish shapes packed together.
There are probably better ways to do this, but it created a cross section I liked once I’d deleted the outside shapes that were distorted:

Splines created for gemetry of tuber cross section

To kill those exterior shapes, I used a Delete SOP with an expression that removes any primitive whose centre is more than a certain distance from the centre of the original circle:

length($CEX, $CEY, $CEZ) > (ch("../circle4/radx") * 0.6)

This cross-section is then run up the wire with a Sweep:

Geometry for a single tuber

The Sweep SOP is in a foreach, and there are a few different attributes that I’m varying for each of the wires, such as the number of twists and the scale of the cross-section.

The twisting took a little bit of working out, but the Sweep SOP will orient the cross section towards the Up attribute of each point.
I already have an Up attribute, created using “Add Edge Force” with the old Point SOP; it points along the length of the wire.

The normals are pointing out from the wire already, so I rotate the normals around the up vector along the length of the wire:

Rotated wire normals

The Sweep SOP expects the Up vector to be pointing out from the wire, so I swap the Normal and Up in a wrangle before the Sweep.
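
As a sketch, the rotate-and-swap boils down to something like this (the twist count here is a stand-in for the per-wire attribute):

// Twist the normal around the wire direction, more the further along the wire we are
float twists = 3.0;
float angle  = radians(360.0) * twists * @rootToTip;
matrix3 rot  = ident();
rotate(rot, angle, normalize(v@up));
vector twistedN = normalize(v@N * rot);

// The Sweep wants up pointing out from the wire, so swap the two before sweeping
v@N  = normalize(v@up);
v@up = twistedN;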

So, now I have the resting tuber geometry:

Tubers geometry undeformed

To skin these straight tubers to the bent wires, I use a Lattice SOP:

Tuber lattice geo

I Lattice each wire separately, because sometimes they get quite close to each other.
Last thing I do with the tubers is push the normals out a bit as the pulse runs through.

I already have an attribute that increases from 0 – 1 along each tuber, called “rootToTip”.
On the wrangle, I added a ramp parameter that describes the width of the tuber along the length (so the tuber starts and ends flared out a bit).

The ramp value for the tuber shape is fit into a range I like through trial and error, and I add the pulse amount to it, then use that to push the points out along their normals.

This is the wrangle with the ramp parameter for the shape:

Tuber width wrangle

@tuberProfile = chramp("profile", @rootToTip, 0);

float tuberWidth = fit(@tuberProfile, 0, 1, 0.0, .04);
float pulseBulge = ((@pulse)*0.1*(1-@rootToTip));

@P = @P + (@N * (tuberWidth + pulseBulge));

This gives me the pulse bulge for the tubers:

Tuber pulse bulge

That does it for the tubers!

In future posts, I’ll cover the stalk bulbs, particles, pyro, rendering and audio.

 

Gears of Washroom – Pt 6

Last post was all about materials, this time around I’ll be talking rendering settings and lighting.

Rendering choices

Being one of my first renders in Houdini, I made lots of mistakes and probably made lots of poor decisions in the final render.

I experimented a bit with Renderman in Houdini, but after taking quite some time to get it set up properly, enabling all the not-so-obvious settings for subdivision, etc, I decided this probably wasn’t the project for it.

I ended up using Mantra Physically Based Rendering, and chose to render at 1080p, 48 fps. Well… I actually rendered at 60 fps, but realized that I didn’t like the timing very much when I’d finished, and 48 fps looked better.
This is something I should have caught a lot earlier πŸ™‚

Scene Lighting

I wanted two light sources: an area light in the roof, and the explosion itself.
Both of these I just dumped in from the shelf tools.

The explosion lighting is generated from a Volume Light, which I pointed at my Pyro sim.

I was having quite a lot of flickering from the volume light, though.
I suspect that was happening when the volume got too close to the walls, and because it was probably a bit too rough.

To solve this, I messed around with the volume a bit before using it as a light source:

VolumeLight

So I import the sim, drop the resolution, blur it, then fade it out near the wall with a wrangle:

VolumeTrim

For the sake of the gif, I split the wall fading and density drop into separate steps, but I’m doing both those things at once in the Wrangle:

@density -= 0.12;
float scale = fit(@P.x, -75, -40, 0, .2);
@density *= scale;

So between an X value of -75 (just in front of the wall) and -40, I’m fading the volume density up from 0 to 0.2.

After that, I had no issues with flickering, and the volume lighting looked the way I wanted it!

VolumeLightFrame.png

Render time!

I think that’s it all covered!

Some stats, in case you’re interested:

  • Explosion fluid and pyro took 3 hours to sim
  • Close up bubbling fluid took about 1 hour to sim
  • Miscellaneous other RBD sims, caches, etc, about 2 hours
  • 184 GB of simulation and cache data for the scene
  • Frame render times between 10 and 25 minutes each.
  • Full animation took about 154 hours to render.
    Plus probably another 40-50 hours of mistakes.
  • 12 GB of rendered frames

My PC is an i7-5930K with an NVIDIA GeForce 970.

Hopefully I’ve covered everything that people might be interested in, but if there’s anything I’ve glossed over, feel free to ask questions in the comments πŸ™‚

Gears of Washroom – Pt 5

Last post I went through all the setup for the bubble sim, now for lighting, rendering, materials, fun stuff!

Scene materials

I talked about the texture creation in the first post, but there are also quite a lot of materials in the scene that are just procedural Houdini PBR materials.

Materials.png

Most of these are not very exciting, they are either straight out of the material palette, or they are only modified a little from those samples.

The top four are a little more interesting, though (purplePaint, whiteWalls, wood and floorTiles), because they have some material effects that are driven from the simulation data in the scene.

If you squint, you might notice that the walls and wood shelf get wet after the grenades explode, and there are scorch marks left on the walls as well.

Here is a shot with the smoke turned off, to make these effects obvious:

WetAndScorched.png

Scorch setup

To create the scorch marks in a material, I first needed some volume data to feed it.
I could read the current temperature of the simulation, but that dissipates over a few frames, so the scorch marks would also disappear.

The solution I came up with was to generate a new low resolution volume that keeps track of the maximum value of temperature per voxel, over the life of the simulation.

PyroMaxTemp

To start out with, I import the temperature field from the full Pyro sim, here is a visualization of that from about 2/3rds the way through the sim:

FullSimSmoke

I only need the back half of that, and I’m happy for it to be low resolution, so I resample and blur it:

SimplifiedSmoke

Great! That’s one frame of temperature data, but I want the maximum temperature that we’ve had in each voxel so far.

The easiest way I could think of doing this was using a solver, and merging the current frame volume with the volume from the previous frame, using a volume merge set to “Maximum”:

VolumeMaxSolver
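
The Volume Merge does the work here, but in wrangle terms it boils down to a per-voxel max against the previous frame (this assumes the accumulated volume from the last frame is wired into the second input):

// Volume wrangle running over the temperature field inside the solver
@temperature = max(@temperature, volumesample(1, "temperature", @P));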

And the result I get from this:

SimplifiedSmokeMax

So that’s the accumulated max temperature of the volume from the current frame, and all the frames before it!

Scorch in material

Back in the whiteWalls material, I need to read in this volume data, and use it to create the scorch mark.

Here is an overview of the white walls material:

whiteWallsMaterial.png

Both the wetness and scorch effects are only modifying two parameters: Roughness and Base Colour. Both effects darken the base colour of the material, but the scorch makes the material more rough and the wetness less rough.

For example, the material has a roughness of 0.55 when not modified, 0.92 when scorched and 0.043 when fully wet.

The burnScorch subnet over on the left exposes a few different outputs, these are all just different types of noises that get blended together. I probably could have just output one value, and kept the Scorch network box in the above screenshot a lot simpler.

Anyway, diving in to the burnScorch subnet:

BurnScorchSubnet.png

One thing I should mention straight up: You’ll notice that the filename for the volume sample is exposed as a subnet input. I was getting errors if I didn’t do that, not entirely sure why!

The position attribute in the Material context is not in world space, so you’ll notice I’m doing a Transform on it, which transforms from “Current” to “World”.
If you don’t do that, and just use the volume sample straight up, you’ll have noise that crawls across the scene as the camera moves.
I found that out the hard way, 10 hours of rendering later.
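
For anyone doing this in a snippet rather than a VOP, the equivalent is a one-liner:

// Bring the shading position into world space before sampling the volume
vector worldP = ptransform("space:current", "space:world", P);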

Anyway, I’m sampling the maximum temperature volume that I saved out previously, and fitting it into a few different value ranges, then feeding those values into the Position (and in one case the Frequency) of some turbulence noise nodes.

The frequency one is interesting, because it was totally a mistake, but it gave me a cool swirly pattern:

SwirlyNoise.png

When combined with all the other noise, I really liked the swirls, so it was a happy accident 🙂

That’s really it for the scorch marks! Just messing about with different noise combinations until I liked the look.

I made it work for the white walls first, then copied it in to the purple walls and wood materials.

Wetness setup

Similar concept to what I did for the temperature, I wanted to work out which surfaces had come in contact with water, and save out that information for use in the material.

WetnessSetup

On the left side, I import the scene geometry, and scatter points on it (density didn’t matter to me too much, because I’m breaking up the data with noise in the material anyway):

WetnessPoints

The points are coloured black.

On the right side, I import the fluid, and colour the points white:

WetnessPointsSim

Then I transfer the colour from the fluid points onto the scatter points, and that gives me the points in the current frame that are wet!

As before, I’m using a solver to get the wetness from the previous frame, and max it with the current frame.

WrangleWetness

In this case, I’m doing it just on the red channel, because it means wetness from the current frame is white, and from the previous accumulated frames is red.
It just makes it nice to visualize:

WetnessSolver

I delete all the points that are black, and then cache out the remaining points, ready to use in the material!

Wetness in material

I showed the high level material with the wetness before, here is the internals of the subnet_wetness:

subnet_wetness.png

So I’m opening the wetness point file, finding all points around the current shading point (which has been transformed into world space, like before).
For all wetness points that are within a radius of 7 centimetres, I get the distance between the wetness point and the current shading point, and use that to weight the red channel of the colour of that point.
I average this for all the points that were in the search radius.

In the loop, you’ll notice I’m adding up a count variable, but I worked out later that I could have used Point Cloud Num Found instead of doing my own count. Oh well πŸ™‚
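
Written out as VEX, the lookup is roughly this (the file name, search radius and max point count are placeholders, not the real values):

// Average a distance-weighted wetness from nearby points in the cached point cloud
vector worldP = ptransform("space:current", "space:world", P);
float  radius = 7.0;   // 7cm, assuming the scene is modelled in centimetres
int    handle = pcopen("wetness_points.bgeo", "P", worldP, radius, 50);

float wetness = 0.0;
int   found   = 0;
while (pciterate(handle))
{
    vector ptPos, ptCd;
    pcimport(handle, "P", ptPos);
    pcimport(handle, "Cd", ptCd);
    float weight = 1.0 - fit(distance(worldP, ptPos), 0.0, radius, 0.0, 1.0);
    wetness += ptCd.x * weight;   // the red channel holds the accumulated wetness
    found++;
}
pcclose(handle);
if (found > 0)
    wetness /= found;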

I take the sampled wetness, and feed it into a noise node, and then I’m basically done!

If you want an idea of what the point sampled wetness looks like before feeding it through noise, here is what it looks like if I bypass the noise and feed it straight into baseColour for the white walls (white is wet, black is dry):

WetnessPointSample.png

Next up is Mantra rendering setup and lighting, which should be a rather short post to wrap up with 🙂

Gears of Washroom – Pt 4

Last post was all about Pyro and FLIP preparation for the explosion.
I’m going to go off on a bit of a tangent, and talk about the bubbling fluid sim at the start of the animation!

16.5 to the rescue

Houdini 16.5 came out just as I was starting to build this scene, and they addedΒ air incompressibility to FLIP.

I’d been doing some experiments trying to make bubbling mud in a separate scene:

FlippedOut

As you can probably tell, I didn’t have a great deal of luck 😉

Fluid spawning setup

BubbleFluid

The network starts with importing the glass sections of the grenades.

There are three outputs: The geometry to spawn fluid from, a volume used for collision and volumes for fluid sinks.

The fluid spawn and collision is pretty straightforward, I’m using the Peak SOP to shrink the glass down, then the Fluid Source SOP to output an SDF for the collision:

CollisionVolume.png

^ The collision volume is pretty 🙂

The sink volumes are a little more interesting.
The Copy SOP is copying onto some points I’ve spawned.
The points are scattered on the bottom part of the glass:

CopyTemplate

The geometry that is getting copied onto those points is a sphere that scales up and down over a certain frame range.

In the Copy SOP I’m stamping the copy id for each point, and the sphere is scaled using an attribute wrangle:

float lifeSpan = 220;
float maxScale = 0.9;
float minScale = 0.4;
float randParticle = @copyID+(rand(@copyID)*.3);
float bubbleLife = 0.06;

float zeroToOneTime = (@Frame%lifeSpan)/lifeSpan;
float distanceFromTimePoint = abs(zeroToOneTime - randParticle);
float dftpZeroOne = fit(distanceFromTimePoint, 0, bubbleLife, 0, 1);

float scale = cos(dftpZeroOne * (PI/2));
if (scale > 0) scale += minScale;
@P *= scale * maxScale;

This ended up being far more complicated than it needed to be; I was randomly adding to it until I got something I liked 🙂

I could strip most of the lifespan stuff out entirely: I was originally using that so that each point could spawn spheres multiple times, but that ended up being too aggressive.

Anyway, for each point, there is a range of frames within the lifespan where the sphere is scaled up and down.

With a low lifespan, this is what I’m getting:

Bubbles

The spheres get converted to a sink volume, which is used to allow fluid to escape the sim.
Where the fluid escapes, bubbles are created!

This is another case where I used the shelf tools for sink volumes in another scene, had a look at what it produced, then recreated it here.
I really recommend doing that sort of thing, it can really help with iteration time when your scenes get complicated!

Fluid sim and trim

BubbleFlip

The flip solver is pretty straightforward!

The collision volumes are imported as source volumes, and passed in to velocity and post solve sourcing (again, I worked this setup out with shelf tools).

Air incompressibility is just a checkbox option:

BubbleFlipAir

That’s it for the sim, on to the surfacing network:

fluidCloseSurface

I had some issues when solving on lower settings, where I’d get a lot of fluid leaking out of the container.
To solve this, I’m importing the glass into this network, then converting to a VDB:

TrimVDB

Right under that, I use an attribute wrangle to copy “surface” from the VDB to the points, and I use that surface value to delete any outside points.
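
The wrangle is along these lines (a sketch, with the glass VDB assumed to be on the second input):

// Sample the glass SDF at each fluid point and drop anything outside the glass
f@surface = volumesample(1, "surface", @P);
if (f@surface > 0.0)   // positive SDF values are outside the surface
    removepoint(0, @ptnum);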

Here’s a mockup example (since my final sim only had very minor leaking), where I’ve manually moved a bunch of points outside the glass, and I’m colouring them with the surface value:

VDBCull

Now time to surface the fluid.

Usually I would just use the Particle Fluid Surface SOP to do this, but I tried a number of approaches with it, and I was always either losing the internal bubbles, or not getting the desired results, so I built the surface with some of the nodes that the Particle Fluid Surface SOP uses internally.

First of all, VDB from Particles, here is a cutaway of that:

VDBFromParticles

The surface is pretty rough, though!
I smooth it out with a VDB Smooth SDF:

VDBSmooth1

I didn’t want to smooth out the interior any further, but the exterior is still too rough.
Similar to how I was deleting points before, I use the glass to create another VDB, and I use that as the mask for a second VDB smooth:

VDBSmooth2

And that result, I was happy with!

You might have noticed by now that I’ve only been simulating one grenade in the fluid bubble sim.
I figured that since I was using heavy depth of field, I could get away with using the same fluid sim for all of the grenades, and just copy it onto each grenade:

CopyStampSim.png

To make the duplication less obvious, I simulated the fluid a bit over 60 frames more than I needed, and I used a time shift on the 2nd and 3rd grenade to offset the simulation, so the same bubbles aren’t coming up at exactly the same time.

The copy node stamps the GrenadeID, which the timeshift node uses:

TimeShiftStamp

And now we have bubbling grenade water sims!

RemeshedOffsetSims.png

(Re-meshed in this shot, because the render mesh is 1 million polygons :))

Still to come in this series of posts: Mantra rendering setup, lighting, materials, fun with scorching and wetting walls!

Gears of Washroom – Pt 3

Last post I talked about some of the RBD prep for the grenade explosions, now time to talk about Pyro and Flip prep!

Every Sim in its right place

All of the simulations (RBD, Pyro and FLIP) live in the same DOP network:

AutoDop_fluid

The network itself isn’t terribly exciting; it was mostly built by testing out shelf tools in other files, then reconstructing them to try and understand them a bit better. I might dive into it a little later in this blog series.

Now, on to the networks that feed this beast!

Explosion fuel and force

FuelAndForce
(Click on image to expand)

In this network, I’m importing the grenade geometry and using it as a basis for generating fuel for the Pyro simulation, and also forces that affect the FLIP and RBD sims.
Sorry about the sloppy naming and layout of this one, it’s not as readable as it should be.

Splodey time!

Early in the grenades network, I created a primitive attribute called “splodeyTime”, set about 10 frames apart for each grenade.
This attribute drives the timing of all of the explosion-related effects.

For example, I use it to set the active time for the RBD pieces in an attribute wrangle:

if (@Frame > (@splodeyTime-2)) i@active = 1;

You can also see it in the first loop in my Explosion Fuel and Force network:

FuelAndForce_foreach1

Here, I’m doing some similar promotion tricks like in the last post:

  • Iterating over pieces (in this case each piece is a grenade base)
  • Promoting splodeyTime up to a detail attribute.
  • Creating a single new point in the centre of the current grenade, using the Add SOP:
    AddPoint
  • Deleting all geometry except for that point.
  • Also delete the point, if:
    • Current frame number < (splodeyTime - 2)
    • Current frame number > (splodeyTime + 2)

So at the end of the foreach, each grenade will be represented by a single point that will only exist for 4 frames around the splodeyTime of that grenade.
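
In wrangle terms, that keep/delete test is just:

// Keep the point only for a couple of frames either side of this grenade's splodeyTime
float t = detail(0, "splodeyTime");
if (@Frame < t - 2 || @Frame > t + 2)
    removepoint(0, @ptnum);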

Great metaballs of fire!

The first thing I do with those points, is generate a cigar of metaballs:

MetaballCigar
I’m using this rather dubious looking thing to break constraints on the RBD (in the Remove Broken SOP solver in the DOP network diagram at the top of the post).

There are two more branches that come off the metaball cigar.

MetaballAndPoints

The left branch just inflates each of the metaballs; it is fed into a Magnet Force, and is only used on the RBD objects, to blow the grenade fragments apart.

The right branch is very similar, except after a slightly more aggressive inflation, I’m generating points from the volume of the metaballs, and adding a velocity attribute to them.

The points that you can see in the image are used to drive forces in the water that explodes out of the grenades, in the FLIP fluid.

One thing I wanted to do here was to have fluid that is near the wall shoot out pretty straight, but then add a bit more randomness when the fluid is near the chain end of the grenade.

Not the best way to visualize it, but these are the force vectors for that effect:

FluidForceVectors

To start out, I set the velocity attribute to {10,0,0} in an attribute wrangle.

Then, the randomness is achieved in the creatively named attribvop1 in the right branch a few images back, and here is what that looks like:

ForceVectorVOP.png

Stepping through this network a bit…
I’m creating a new noisy velocity, and mixing that with the existing velocity pointing straight down X.

I always want the noisy velocity to be positive in X, but much stronger in Y and Z, hence the splitting out of the X component and the abs after the bottom noise.

The mix node uses the X position of the point to work out how much of the straight velocity to use versus the noisy velocity.
So between 0 and 5 in X, I’m blending the two. When the position is greater than 5, we have 100% of the noisy vector.
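
Not the actual VOP, but the gist of it as a wrangle (the noise frequencies and amplitudes here are guesses):

// Blend a straight down-X velocity with a noisier one based on distance from the wall
vector straightV = {10, 0, 0};                                   // fluid shoots straight out in X
vector n = noise(@P * 0.5) * 2.0 - {1, 1, 1};                    // noise remapped to roughly -1..1
vector noisyV = set(abs(n.x) * 10.0, n.y * 30.0, n.z * 30.0);    // always positive in X, wilder in Y and Z
float  blend  = fit(@P.x, 0.0, 5.0, 0.0, 1.0);                   // straight near the wall, noisy past X = 5
v@v = lerp(straightV, noisyV, blend);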

I messed about with lots of different vector / noise patterns, but I liked the movement this gave my fluid, so I stuck with this 🙂

Fuel

Backing up a bit, there’s one more branch of this network, and it creates the fuel for my Pyro simulation:

FuelNetwork

I’m copying a sphere onto the explosion points, and then using a fluid source SOP to create a volume for fuel, and using the built in noise settings to get some variation.

The sphere itself is getting scaled by splodeyTime, using copy stamping in uniform scale:

fit((stamp("../copy2/", "splodeyTime", 1) - $F), -2, 2, 4, 10)

I also move it forward each frame using an attribute wrangle:

@P.x += ((@Frame+4) – @splodeyTime)*2;

This is what the fuel looks like over the few frames it is generated:

Fuel

Flipping glass

One last thing for simulation prep, I’m using the glass sections of the grenade as the fluid source of the exploding Flip sim.

Again, this uses the splodeyTime to delete the glass when it is outside of the frame range, so I’m creating the surface volume and particles just for a few frames:

FlipParticlesAndSurface

That’s pretty much it for simulation prep, at least for the main explosion!

Still to come, the bubbling water at the start, and all things materials and rendering.