AI budget tool (a 2014 revisit)

Blast from the past

I originally drafted this blog post a year or so after Splinter Cell: Blacklist launched, and I was still at Ubisoft (around 2014)!

I decided to hold off publishing it since there weren’t any active Splinter Cell projects at the time, and I always figured I’d come back to it and hit publish at a later date.
And well… here we are, very exciting stuff to come from Ubisoft Toronto and their partner studios 🙂

I’ve left the blog post largely as I wrote it back then, and in hindsight it’s pretty funny to think that I was working in Unreal 2, on a game mode that was inspired by the Gears Of War Horde game modes, years before I made the move to The Coalition to work on Gears!

Extraction (Charlie’s Missions)

When working on Splinter Cell: Blacklist, we had guidelines for the number of AI spawned.

So a heavy AI might be worth 1.1 bananas, and a dog 0.7 bananas, with a total banana budget of 10. The numbers roughly mapped to CPU budgets in milliseconds, but the important thing really was the ratio of costs for the various archetypes.

It’s a tricky thing to manage AI budgets across all the game modes and maps, and probably something that the Design department and AI programmers lost lots of sleep over.

Where it got particularly tricky was in the CO-OP Extraction game mode.

The game mode has “waves” (a round of AI that is finished only when all of the enemies have been dealt with).

Within waves there are sub-waves, where AI have various probabilities of spawning, and these sub-waves can start either based off the number of takedowns in a previous sub-wave, or based off time.

Sanaa

So the player, for example, could just happen to let the most computationally expensive enemies live in a sub-wave, take out all the computationally cheap enemies (with tactical knock out cuddles, of course), and the next sub-wave could spawn in and we’d blow our AI budgets!

The team building the co-op maps in our Shanghai studio were great at sticking to the budgets, but this variation in the spawning for AIs was obviously going to be very hard to manage.

Having our QC team just test over and over again to see if the budgets were getting blown was obviously not going to be very helpful.

XML and C#/WPF to the rescue

Luckily, one of the engineers who was focused on Extraction, Thuan Ta, put all of the Extraction data in XML. This is not the default setup for data in the Unreal engine (almost all of the source data is in various other binary file formats), but his smart choice saved us a lot of pain.

It made it incredibly easy for me to spend a week(ish) bashing together this glorious beast:

AmmanUI

A feat of engineering and icon design, I hear you say!!
Certainly can never be enough Comic Sans in modern UI design, in my opinion…

What is this I don’t even

Each row is an AI wave that contains boxes that represent varying numbers of sub-waves.

The sub-wave boxes contain an icon for each of the different AI types that might spawn, assuming the worst case (most expensive random AI) for that sub-wave (heavy, dog, tech, sniper, regular with a helmet, etc):

5 icons for different AI types: a Heavy, a dog, a tech, a sniper and a regular with a helmet

The number at the top right of each sub-wave box is the worst case AI cost that can occur in that sub-wave, and it can be affected by enemy units that carry over from the previous sub-wave:

Estimated worst case AI cost for a sub-wave, red arrow pointing to the number on the UI screenshot

So, for example, if sub-wave 1 has a chance of spawning 0-2 heavies, and 1-3 regulars, but only to a max number of 4 enemies, the tool will assume 2 heavies get spawned (because they are more expensive), and 2 regulars get spawned to estimate the worst cost AI for the sub-wave.

If sub-wave 2 then has a trigger condition of “start sub-wave 2 when 1 enemy in sub-wave 1 is taken out” (killed, or convinced to calmly step away and consider their path in life), then the tool would assume that the player chose to remove a regular in sub-wave 1, not a heavy, because regulars are cheaper than heavies.

Following this logic, the cost of each sub-wave is always calculated on the worst cases all the way to the end of the wave.
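To make that concrete, here's a rough sketch of the worst-case logic in C#. This is illustrative only, not the actual tool's code; SpawnGroup and all the names below are made up:

using System.Collections.Generic;
using System.Linq;

// Illustrative only: a group of identical archetypes a sub-wave can spawn
public class SpawnGroup
{
	public float ArchetypeCost;	// cost in "bananas"
	public int MaxCount;		// max number this group can spawn
}

public static class WorstCaseBudget
{
	// Assume the most expensive archetypes fill the enemy cap
	public static List<float> WorstSpawns(IEnumerable<SpawnGroup> groups, int maxEnemies)
	{
		return groups
			.SelectMany(g => Enumerable.Repeat(g.ArchetypeCost, g.MaxCount))
			.OrderByDescending(cost => cost)
			.Take(maxEnemies)
			.ToList();
	}

	// When a trigger needs N takedowns, assume the player removed the N
	// cheapest units, so the expensive ones carry over to the next sub-wave
	public static List<float> CarryOver(IEnumerable<float> aliveCosts, int takedowns)
	{
		return aliveCosts.OrderBy(cost => cost).Skip(takedowns).ToList();
	}

	// Worst case cost of a sub-wave: its own worst spawns plus any carry-overs
	public static float SubWaveCost(IEnumerable<float> spawned, IEnumerable<float> carried)
	{
		return spawned.Sum() + carried.Sum();
	}
}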

Long lived

Sometimes you’d want to know, at a glance, which enemies in a sub-wave can live on to the next sub-wave.

If you mouse over the header part of a sub-wave (where the orange circle is below), all the units that are created in that sub-wave are highlighted red, and stay highlighted in the following sub-waves, indicating the longest they can survive based off the trigger conditions for those sub-waves:

WaveHeader

So in the above case, the heavies that spawn in Wave 15, Sub-wave 1 can survive all the way through to sub-wave 3.

This is important, because if sub-wave 3 was over budget, one possible solution would be to change the condition on sub-wave 2 to require the player to take out one additional unit.

Also worth pointing out, the colour on the sub-wave bar headers is an indication of how close to breaking the budget we are, with red being bad. Green, or that yucky browny-green, is fine.
The colour on the bar on the far left (on the wave itself) is representative of the highest cost of any sub-wave belonging to this wave.
So you can see at a glance if any wave is over budget, and then scroll the list box over to find which sub-wave(s) are the culprits.

Listboxes of listboxes of listboxes

There’s about 300 lines of XAML UI for this thing, and most of it is a set of DataTemplates that set up the three nested listboxes: One containing all the waves, a listbox in each wave for the subwaves, a listbox in each sub-wave for the AI icons.

Each of the icon blocks has its own DataTemplate, which just made it easier for me to overlay helmets and shields onto the images for the different AI variants:

<DataTemplate x:Key="EAIShieldedHeavyController_Template" DataType="{x:Type local:Enemy}">
	<Grid>
		<Rectangle Fill="Black" Width="30" Height="30" ToolTip="Heavy + Shield">
			<Rectangle.OpacityMask>
				<ImageBrush ImageSource="pack://application:,,,/Icons/Heavy.png"/>
			</Rectangle.OpacityMask>
		</Rectangle>
		<Rectangle HorizontalAlignment="Right" VerticalAlignment="Bottom" Fill="Green" Width="15" Height="15">
			<Rectangle.OpacityMask>
				<ImageBrush ImageSource="pack://application:,,,/Icons/Shield.png"/>
			</Rectangle.OpacityMask>
		</Rectangle>
	</Grid>
</DataTemplate>

System.Xml.Linq.Awesome

Probably goes without saying, but even used in the horribly hard-codey, potentially exception-ridden, hacky way I was using it in this application, the XDocument functionality in LINQ makes life really easy 🙂

I definitely prefer it to XPath, etc.

Forgive me for the one-line LINQ query without error handling, but sometimes you've got to live on the wild side, you know?:

_NPCTemplates = SourceDirInfo.GetFiles("*.ntmp").Select(CurrentFile => XDocument.Load(CurrentFile.FullName)).ToList();

And with those files, pulling out data (again, with error/exception handling stripped out):

foreach (XDocument Current in _NPCTemplates)
{
	// Get a list of valid NPC names
	foreach (XElement CurrentNPC in Current.Descendants("npc"))
	{
		List<XAttribute> NameAttr = CurrentNPC.Attributes("name").ToList();
		if (NameAttr.Count > 0)
		{
			// Do things!!
		}
	}
}

Conclusion

Although it’s nothing particularly fancy, I really do like it when programmers choose XML for source data 🙂

It makes life really really easy for Tech Art folk, along with frameworks like WPF that really minimize the plumbing work you have to do between your data models and views, as well as making very custom (ugly) interfaces possible using composition in XAML.

Beats trying to create custom combo boxes in DataGrids in Borland C++ at any rate 😛

Also, Comic Sans. It’s the future.

Chopped Squabs – Pt 2

 

Last post, I talked about the motion of the squab, and the points created for the tubers sprouting from its back.

This post will be about creating the tuber geometry and motion.

A series of tubes

Here’s the tuber network:

Houdini network for tuber creation

The Object Merge at the top is bringing in the tuber start points, which I talked a bit about last post.

I’m pushing those points along their normals by some random amount, to find the end points.
I could have done this with a Peak SOP, but wrangles are far more fun!

float length = fit(rand(@ptnum), 0, 1, 1.3, 2.2);

@P += @N*length;

Sometimes I just find it a lot quicker to do things in VEX, especially when I want to do value randomization within a range like this.

The points chopnet, near the top, is straight out of the “Introduction To CHOPs” by Ari Danesh. I highly recommend watching that series for more detailed information on CHOPs.

The chopnet takes the end point positions, adds noise, and uses spring and lag CHOPs to delay the motion of the end points so they feel like they're dragging behind a bit:

The CHOPnet for tuber end points

After that, I create a polygon between each start and end point using an Add SOP:

Add sop to create tuber lines

I’m then using Refine to add a bunch of in-between points.

In the middle right of the network, there are two network boxes that are used to add more lag to the centre of each wire, and also to create a pulse that animates down the wires.

When the pulse gets to the bulb at the end of the tuber, it emits a burst of particles and smoke, but more on that in a later post.

Here is the chopnet that handles the pulse, the lag on the centre of the wires, and a slightly modified version of the pulse value that is transferred onto the bulbs:

Chopnet that creates wire centre movement and pulse attributes

The wire lag is pretty simple, I'm just adding some more lag to the movement (which I later mask out towards both ends of the wire so it just affects the centre).

The pulse source is a little more interesting.
Before this network, I've already created the pulse attribute, initialized to 0. I've also created a "class" attribute using connectivity, giving each wire primitive its own id.

When I import the geometry, I'm using the class attribute in the "Organize by Attribute" field:

Class used to create a channel per wire

This creates a channel for each wire.

I also have a Wave CHOP with the type set to “ramp”, and in this CHOP I’m referencing the number of channels created by the geometry import, and then using that channel id to vary the period and phase.

Here are the wave settings:

Pulse source chop Wave waveform settings

And here is where I set up the Channels to reference the Geometry CHOP:

Channel settings for Wave Chop

To visualize how this is creating periodic pulses, it’s easier if I add a 0-1 clamp on the channels and remove a bunch of channels:

Wave CHOP clamped 0-1, most channels removed

So hopefully this shows that each wire gets a pulse value that is usually 0, but at some point in the animation might slowly ramp up to 1, and then immediately drop to 0.
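If you're more comfortable reading code than CHOP networks, the same shape could be faked in a wrangle. This is just an illustration of the waveform, not what I actually did (it assumes the per-wire "class" id from earlier):

float period = fit(rand(i@class), 0, 1, 4.0, 8.0);	// per-wire period, in seconds
float phase = rand(i@class + 421);			// arbitrary per-wire offset
float saw = frac(@Time / period + phase);		// 0..1 ramp over each period

// Sits below zero (clamped to 0) for most of the cycle, ramps up to 1
// near the end, then immediately drops back to 0
f@pulse = clamp(fit(saw, 0.0, 1.0, -3.0, 1.0), 0.0, 1.0);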

To demonstrate what we have so far, I’ve put a polywire on the wires to thicken them out, and coloured the pulse red so it’s easier to see:

NoodleFight

It’s also sped up because I’m dropping most of the frames out of the gif, but you get the idea 🙂

The “Pulse Source Bulbs” section of the chopnet is a copy of the pulse, but with some lag applied (so the pulse lasts longer), and multiplied up a bit.

Tuber geometry

The remaining part of the network is for creating the tuber geometry. Here is that section zoomed in from the screenshot earlier in this post:

Tuber geometry creation network section

I’m creating the geometry around a time shifted version of the wires (back to the first frame), and then using a lattice to deform the geometry each frame to the animated wires.

Tuber cross-section

By triangulating a bunch of scattered points, dividing them with "Compute Dual" enabled, converting them to NURBS and closing them, I get a bunch of circle-ish shapes packed together.
There are probably better ways to do this, but it created a cross-section I liked once I'd deleted the distorted outside shapes:

Splines created for geometry of tuber cross section

To kill those exterior shapes, I used a Delete SOP with an expression that removes any primitive whose centre is more than a certain distance from the centre of the original circle:

length($CEX, $CEY, $CEZ) > (ch("../circle4/radx") * 0.6)

This cross-section is then run up the wire with a Sweep:

Geometry for a single tuber

The Sweep SOP is in a foreach, and there are a few different attributes that I'm varying for each of the wires, such as the number of twists and the scale of the cross-section.

The twisting took a little bit of working out, but the Sweep sop will orient the cross section towards the Up attribute of each point.
I already have an Up attribute, created using "Add Edge Force" with the old Point SOP; it points along the length of the wire.

The normals are pointing out from the wire already, so I rotate the normals around the up vector along the length of the wire:

Rotated wire normals

The Sweep SOP expects the Up vector to be pointing out from the wire, so I swap the Normal and Up in a wrangle before the Sweep.
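That twist-and-swap wrangle looks roughly like this. It's a sketch rather than my exact code, and it assumes the 0 to 1 "rootToTip" attribute that comes up a little later in this post:

// Twist N around the up vector by an angle that increases along the wire
float angle = f@rootToTip * chf("twists") * radians(360.0);
matrix3 m = ident();
rotate(m, angle, normalize(v@up));
v@N = normalize(v@N * m);

// The Sweep wants up pointing out from the wire, so swap the two
vector tmp = v@up;
v@up = v@N;
v@N = tmp;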

So, now I have the resting tuber geometry:

Tubers geometry undeformed

To skin these straight tubers to the bent wires, I use a Lattice SOP:

Tuber lattice geo

I Lattice each wire separately, because sometimes they get quite close to each other.
Last thing I do with the tubers is push the normals out a bit as the pulse runs through.

I already have an attribute that increases from 0 to 1 along each tuber, called "rootToTip".
On the wrangle, I added a ramp parameter that describes the width of the tuber along the length (so the tuber starts and ends flared out a bit).

The ramp value for the tuber shape is fit into a range I like through trial and error, and I add the pulse amount to it, then use that to push the points out along their normals.

This is the wrangle with the ramp parameter for the shape:

Tuber width wrangle

@tuberProfile = chramp("profile", @rootToTip, 0);

float tuberWidth = fit(@tuberProfile, 0, 1, 0.0, .04);
float pulseBulge = ((@pulse)*0.1*(1-@rootToTip));

@P = @P + (@N * (tuberWidth + pulseBulge));

This gives me the pulse bulge for the tubers:

Tuber pulse bulge

That does it for the tubers!

In future posts, I’ll cover the stalk bulbs, particles, pyro, rendering and audio.

 

Chopped Squabs – Pt 1

 

The above video is a thing that wasn’t really supposed to be a project, but turned into one 🙂

This series of blog posts will break down how I made it!

This first post will be some background on the project, and how I set up the movement of the squab and spawned the initial points for the tubers that are attached to it.

But why

This whole thing was a feature exploration of CHOPs that went off the rails a bit.

While working on another Houdini rendering project at home, I needed to dynamically re-parent an object halfway through an animation.

Luckily, there is a shelf tool for that!

Parent blend shelf tool in Houdini

When I ran it, Houdini created a constraints network, and it seemed like nothing else had changed.
The object was suddenly dynamically parented at that frame, but there were no attributes, or connected nodes, or anything that I would expect to see generated.

Output constraints network from parent blend

Just a new network, and a reference to the constraints on the geometry node at the obj level.
So, magic basically?

Get to the chopper

I’ve never really used CHOP networks before, and it seemed like a good time to see how they work. Much like the other contexts in Houdini, there is an endless amount of learning and exploration that can be done here, but I started out watching the great “Introduction To CHOPs” videos by Ari Danesh (@particlekabob).

https://www.sidefx.com/tutorials/lesson-1-intro-to-chops/

My squab video was the result of me getting distracted and going off on a tangent while watching the training!

If you’re interested in CHOPs, I’d really recommend watching those videos. I’m not going to go very in depth over things that Ari already covered.

Setting up the squab

In case you’re not familiar with it, the “squab” is a test asset that ships with Houdini.

Houdini squab test asset

If you happen to know who made the squab, please let me know and I’ll edit this post with that info 🙂

I decided I wanted to create tuber-like growths off the side of it, with bulbous end parts that would emit spores, which travel up the tuber in a pulse.

The first network to dive into sets up the points on the squab that will spawn tubers, and also sets up the FEM (soft body) data for the squab.

Network for setting up squab and tubers and bulbs

I won’t go into details on the two bottom right network branches: These are selecting some points that are fixed for the FEM solver, and mostly standard nodes created by the FEM Organic Mass shelf tool.

The left “Tuber points” branch scatters some points on the squab body, taking the normals from the surface.
I didn’t want many tubers coming out towards the front of the squab, so I deleted points based off the magnitude of the Z component of the normal (a Z value of 1 means the point is facing straight forward in “squab space”, since the squab is aligned to the world Z).

Delete points by z component magnitude
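For reference, the same delete could be done in a point wrangle, something like this (the threshold value is made up):

// Remove points whose normal faces mostly along world Z (the squab's facing axis)
if (abs(v@N.z) > chf("z_threshold"))
	removepoint(0, @ptnum);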

The next issue is that some of the generated tubers would crash through the squab geometry.
I didn’t need to entirely solve this, since it’s a dark scene with enough going on to mask some intersections, but there were some pretty obvious bad ones:

Tuber intersection with squab geometry

The first thing I tried worked out well enough, which was to ray trace out from the tuber points and see if I hit any squab geometry. If I hit geometry, I delete the point.

It’s a little ugly, but here’s the VOP that collects the hits:

Ray trace VOP to exclude points that would intersect squab

There are two non-deterministic random nodes, and to force those into the loop body I’ve just exposed the “type” as a constant, and fed it into the loop. Makes no sense, might not be necessary, but it gets the random nodes into the loop body 🙂
That’s part of what makes it messy.

Each iteration of the loop is a sample, the number of samples is a parameter exposed to the network.
I use a SampleSphere node to create sample points on a cone around the normal, with an angle of 45 degrees. I use the vector to those points as the raydir of an intersect node, and trace out from the point and see if I hit anything. I store the max of the hitprim into an attribute, and then I just delete any points where this “hitval” is greater than 0 (the default hitprim is -1).

Tuber geo with reduced points from ray trace intersection

You can see that running this pass removes quite a lot of invalid tubers; I didn't mind it being too aggressive.
A smarter person would have done this non-procedurally and just manually placed points. I probably need to start doing that more often 🙂
Proceduralism for the sake of it is a great way to waste a lot of time…

Chops transforms

On the main network graph, there are three Transform nodes that I've coloured aqua (the ones toward the bottom of each of the three network branches); these are fed data from the "motionfx" CHOPs network.

I’m using random noise to move the body of the squab (and the attached tuber points) around.
The “Body Motion” is almost straight out of the tutorial by Ari Danesh, here’s what the “motionfx” network looks like:

Chops network for body and tuber points motion

First thing to note: the tuber points and the squab body get animated in exactly the same way.
At the top left of the graph, I use Channel nodes to create transforms for the tuber points and the squab body:

Channel creation for tuber points

Then I merge them together, and work with 6 channels for the rest of the network.
However, this is unnecessary!

I could have worked with a single set of data, and then made the export node write out to the multiple transforms. This is something I learnt from the tutorials after I’d implemented things this way 🙂

Move yo body

The Body Motion group of nodes was largely trial and error: I added a sine wave to some noise, and added some filters and lags until the motion looked nice.

Not very scientific, but here are the waveform results of each of those nodes:

Body motion waves and noises

That’s over 600 frames, btw.

And that gives me something like this:

Squab body movement without rotation gif

I wanted to get a sort of floating in water currents feel, and I was fairly happy with the result.

I also wanted the body to rotate a little with the movement.
I tried a few things here, such as a CHOPs lookat constraint, and various VOP things.
In the end, I did a very hacky thing:

  • Took the position of the original waveform, and moved it one unit down in Y
  • Applied a lag to that, then subtracted it from the original position waveform
  • Multiplied it by some constants
  • Renamed it to be the rotation channels

If you imagine the squab body as a single animated point, this is like attaching another point a little below it, and attaching this new point to the first point using a spring.
Then I measure the distance between the points, and use that distance as a rotation.
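In wrangle-ish pseudocode (purely conceptual, this isn't the actual CHOP network, and "bodyPos" and "follower" are made-up attributes):

// A follower point eases toward a spot one unit below the body each frame,
// and the leftover offset becomes the rotation
vector target = v@bodyPos - {0, 1, 0};
v@follower += (target - v@follower) * chf("lag_strength");	// cheap lag/spring
v@r = (target - v@follower) * chf("rotation_scale");		// renamed to rotation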

A bit of a weird way to do it, but it gets the job done! 🙂

I’ve overexaggerated the rotation and lag in this gif just to make it obvious:

Squab rotate gif

In future posts, I will break down the creation of the tubers, some of the FX work, and also creating the sound in CHOPs!

Shopping for masks in Houdini

Houdini pun there, don’t worry if you don’t get it, because it’s pretty much the worst…

In my last post, I talked about the masking effects in Shangri-La, Far Cry 4.

I mentioned that it would be interesting to try out generating the rough masks in Houdini, instead of painting them in Modo.

So here’s an example of a mask made in Houdini, being used in Unreal 4:

VortUE4Houdini.gif

Not horrible.
Since it moves along the model pretty evenly, you can see that the hands are pretty late to dissolve, which is a bit weird.

I could paint those out, but then the more I paint, the less value I’m getting out of Houdini for the process.

This is probably a good enough starting point before World Machine, so I’ll talk about the setup.

Masky mask and the function bunch

I’ve exported the Vortigaunt out of Modo as an Alembic file, and bring it into Houdini.
Everything is pretty much done inside a single geometry node:

MaskGen_all

The interesting bit here is “point_spread_solver”. This is where all the work happens.

Each frame, the solver carries data from one vertex to another, and I just manually stop and bake out the texture when the values stop spreading.

I made the un-calculated points green to illustrate:

VortGreen

A note on "colour_selected_white": I should really do this bit procedurally. I'm always starting the effect from holes in the mesh, so I could pick the edge vertices that way, instead of manually selecting them in the viewport.
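Something like this half-edge wrangle would probably do the trick, flagging points that sit on open (hole) edges; a sketch I haven't actually wired into this network:

int boundary = 0;
int he = pointhedge(0, @ptnum);		// first half-edge starting at this point

while (he != -1)
{
	// A boundary edge has no equivalent half-edge on a neighbouring prim
	if (hedge_equivcount(0, he) == 1)
		boundary = 1;

	he = pointhedgenext(0, he);
}

// Seed the solver from the hole edges
if (boundary)
	i@DistanceFromMask = 1;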

The solver

MaskGen_point_spread_solver

Yay. Attribwrangle1. Such naming, wow.

Nodes are fun, right up until they aren’t, so you’ll often see me do large slabs of functionality in VEX. Sorry about that, but life is pain, and all that…

Here’s what the attrib wrangle is doing:

int MinDist = -1;

// Only spread onto points that don't have a distance value yet
if (@DistanceFromMask == 0)
{
	// Point numbers of the neighbouring points
	int NeighborPoints[];
	NeighborPoints = neighbours(0, @ptnum);

	foreach (int NeighborPointNum; NeighborPoints)
	{
		int success             = 0;

		// Read the neighbour's value from the previous frame (second input)
		int NeighborDistance    = pointattrib(
						1, 
						"DistanceFromMask", 
						NeighborPointNum, 
						success);

		// Track the smallest non-zero distance among the neighbours
		if (NeighborDistance > 0)
		{
			if (MinDist == -1)
			{
				MinDist = NeighborDistance;
			}

			MinDist = min(MinDist, NeighborDistance);
		}
	}
}

if (MinDist > 0)
	@DistanceFromMask = (MinDist + 1);

Not a very nuanced way of spreading out the values.

For each point, assuming the point has a zero “distance” value, I check the neighboring points.
If a neighbor has a non-zero integer “distance” value, then I take the lowest of all the neighbors, add one to it, and that becomes my “distance” value.

This causes the numbers to spread out over the surface, with the lowest value at the source points, highest value at the furthest distance.

Integers –> Colours

So, the vertices now all have integer distance values on them.
Back up in the network image, the solver promotes the Distance value up to a Detail attribute, getting the Max Distance of all the points.

In the wrangle node under that, I just loop through all the points and divide each point’s Distance by the Max Distance, and use that to set the colour, or I set it as green if there’s no distance value:

if (@DistanceFromMask > 0)
{
    @Cd = float(@DistanceFromMask - 1) / float(@DistanceFromMaskMax);
}
else
{
    @Cd = {0,1,0};
}

So that produces the gif I showed earlier with the green on it.

Colours -> Textures

Time to jump into SHOPs. See? This is where my awesome title pun comes in.

As simple as it gets, vertex Colour data straight into the surface output:

Material

In my “Out”, I’m using a BakeTexture node to bake the material into a texture, and I end up with this:

vortigaunt_mask_houdini

Conclusion

Bam! Work is done.
Still wouldn’t have been much point in doing this on Shangri-La, because painting masks in Modo is super quick anyway, but it’s fun to jump back into Houdini every now and then and try new things.

Has led to some other interesting thoughts, though.

  • For Shangri-La, we could have done that at runtime in a compute shader, and generated the mask-out effect from wherever you actually shot an arrow into an enemy.
    That would have been cool.
  • You could probably use Houdini Engine to put the network into UE4 itself, so you could paint the vertex colours and generate the masks all inside UE4.
  • You could do the “erosion” part in Houdini as well, even if you just subdivide the model up and do it using points rather than run it in image space (to avoid seams). Might be hard to get a great resolution out of it.
  • You could do an actual pressure simulation, something along the lines of what this Ben Millwood guy did here. He's a buddy of mine, it's a cool approach, and it's better than my hacky min values thing.

Dead Space fan art: Necrotle

Necrotle

Right in time for Halloween, meet a Necrotle!
That’s a Dead Space Necromorph turtle, fyi.

I started on this guy about 5 years ago, while I was working at Visceral Games in Melbourne. I wasn’t on the Dead Space project(s), I just felt like doing some fan art, and decided to come up with the most silly idea for an animal Necromorph I could think of (a giraffe was also in the plans, at one point…) 🙂

As with many of my home projects, I got sick of it and shelved it for a while. Decided a few weeks ago to resurrect the little fella!
And now I’m sick of looking at it again, and I’m calling it done 😉

Started with a very basic sculpt in 3dcoat, then modeling, additional sculpting, texturing, rendering in Modo.

Houdini? Who don’t-y?

I’ve been waiting about a year to use that blog post title. Don’t judge me…

I bought Houdini Indie about a year ago, and up until a few months ago I hadn’t used it.
In the last few months, I’ve started learning fracturing and pyro effects (smoke, fire, etc).

In this video, I’m fracturing an object and generating “smoke” (dust is the intention, but I haven’t added particles to it, so it definitely looks like smoke).

Brief background on Houdini fluid sims

Very brief, because I’m still learning 😛

In Houdini, you create volumetric fields of data that drive fluid simulations, much like in other software like FumeFX.

For a smoke sim, you can get away with just Density and Heat. The Density controls how much smoke gets added per frame (although like everything in Houdini, this is a loose definition). The Heat will move the smoke around using gas pressure simulations.

I’m also using a Velocity field, because it’s one way of getting the pieces of my fractured geometry to disturb the smoke as they move through the fluid.

Each piece of the fractured geometry is glued to pieces next to it using “glue constraints”. These break either when I manually break them, or when a certain amount of force is applied to them.

The goal of this scene

There are plenty of ways of setting up the Smoke Density, and the most common one I’ve seen is just adding Density to the fluid in places where geometry is moving at a certain speed.

Instead of that, I wanted to add dust when a constraint breaks (based off the mass of the pieces), and only add it to the sim if the piece is moving above a certain speed.

The end results are not a great deal different, but there’s a few things I like:

  • Small pieces can shed all their dust before they hit the ground. You don’t end up with streamers of dust all the way to the ground just because something is moving fast.
  •  There’s good variation in the amount of smoke/dust that pieces generate, due to the mass being factored in.
  • A group of pieces can fall off as a chunk, generating some smoke for a few frames. When that chunk hits the ground and breaks again, the broken constraints can generate more smoke. This could look really nice if I had a more complicated scene setup 🙂

The setup

This is what my scene looks like:

OverallNetwork

There is a tube that I fracture, a ground plane, two simulations (fracturing and the smoke fluid sim) and “SmokeSource” which is where I generate the fields for the fluid simulation.

Tube object (fracture setup)

I won’t go too much into the Fracture setup, because it’s pretty standard, but here’s what that network looks like:

TubeObjectNetwork

So the top bit does a voronoi fracture on the geometry, and the middle bit adds a "depth" value attribute, which is how far each point is from the original surface of the object (I intended to use this for something, but then… didn't).
The left side sets up which pieces of geo are active, using a box to select the ones I want (everything except the base of the cylinder, basically). The right side creates the glue constraints.
There’s a few File nodes to cache things out to disk.

Most of this is set up through standard shelf tools.

Collapse Sim

CollapseSimNetwork

Again, pretty basic stuff; most of this is created when you use shelf tools to set up a sim.
The only interesting bits are the "Geometry Wrangle" node at the top, and a few things I added to the "Remove Broken" solver.

Geometrywrangle_dust

This is where I'm doing most of the dust setup work (although I probably shouldn't be, more on that later…).

Here’s the VEX code:


vector c = point("op:/obj/tube_object1/OUT_ACTIVEPOINTS", "Cd", @ptnum);
i@active = (int)c.r;

float DustPerKilo = 0.2;
float DustLiberatedPerMetrePerSecond = 350.0;
float MinimumSpeedForDustLiberation = 0.4;
float MaximumSpeedForDustLiberation = 6.0;
float LiberatedDustDissipationRate = 60.0;
string GlueConstraintPath = "op:/obj/CollapseSim:Relationships/glue_tube_object1/constraintnetwork/Geometry";
string GeoPath = "op:/obj/CollapseSim:tube_object1/Geometry";

int NumPieceAttributes = 2;

// Pieces this one is glued to (filled in over both passes of the loop below,
// so it needs to be declared out here, not inside the loop)
int ConnectedPieces[] = {};

for (int PieceCount = 1; PieceCount <= NumPieceAttributes; PieceCount++)
{
	/*
	* This is horrible, and would break down for constraints that had more than 2 pieces...
	* Attributes are "Piece1, Piece2" the first time through. "Piece2, Piece1" the next
	*/
	string AttributeToFind = "Piece" + itoa(PieceCount);
	string AttachedPieceAttribute = "Piece" + itoa((PieceCount%NumPieceAttributes) + 1);

	// Get the number of glue constraints that have this piece in the current slot
	int NumberOfGlues = findattribvalcount(GlueConstraintPath, "prim", AttributeToFind, @ptnum);

	for (int Count = 0; Count < NumberOfGlues; Count++)
	{
		int Success;
		int CurrentGlueConstraintIndex = findattribval(GlueConstraintPath, "prim", AttributeToFind, @ptnum, Count);

		int PieceVertIndex = primattrib(GlueConstraintPath, AttachedPieceAttribute, CurrentGlueConstraintIndex, Success);
		string PieceName = pointattrib(GlueConstraintPath, "name", PieceVertIndex, Success);
		string Bits[] = split(PieceName, "/");
		ConnectedPieces[len(ConnectedPieces)] = atoi(re_find("([0-9]+)", Bits[1]));
	}
}

int RemovedPieces[] = {};
float RemovedPieceMass = 0.0;

// Check to see if any constraints have been removed
foreach(int PreviousPieceIndex; i[]@aConnectedPieces)
{
	int found = 0;

	// Search current array against last, etc
	foreach(int CurrentPieceIndex; ConnectedPieces)
	{
		if (CurrentPieceIndex == PreviousPieceIndex)
		{
			found = 1;
			break;
		}
	}

	// For every broken constraint, add some dust    
	if (found == 0)
	{
		RemovedPieces[len(RemovedPieces)] = PreviousPieceIndex;

		int Success;
		RemovedPieceMass = RemovedPieceMass + pointattrib(GeoPath, "mass", PreviousPieceIndex, Success);
	}
}

/*
* Increase the dust amount if we have removed some pieces and use the mass
* of those pieces to scale how much dust is generated
*/
if (RemovedPieceMass > 0.0)
{
	@DustAmount = @DustAmount + (DustPerKilo * RemovedPieceMass);
}

f@VelocityMag = length(@v);

// Disperse the freed up dust a little each frame
if (@DustLiberated > 0.0) @DustLiberated = max(@DustLiberated - LiberatedDustDissipationRate, 0.0);

/*
* Based off the speed of this piece, transfer some
* of the Dust to "liberated".
* This allows the dust to be used up over a number
* of frames, faster for fast moving pieces
*/
float VelocityMultiplier = (f@VelocityMag - MinimumSpeedForDustLiberation) / (MaximumSpeedForDustLiberation - MinimumSpeedForDustLiberation);
VelocityMultiplier = clamp(VelocityMultiplier, 0.0, 1.0);

float DustAmountToLiberate = VelocityMultiplier * DustLiberatedPerMetrePerSecond;
DustAmountToLiberate = min(DustAmountToLiberate, @DustAmount);
@DustLiberated = @DustLiberated + DustAmountToLiberate;
@DustAmount = @DustAmount - DustAmountToLiberate;

addvariablename(geoself(), "DustLiberated", "DUSTLIBERATED");

// Store the connected pieces as an attribute (used when comparing between frames)
i[]@aConnectedPieces = ConnectedPieces;

The first loop is pretty ugly to look at. It used to be two separate loops with a bunch of copy-pasted code, not sure it’s any better now that I “cleaned” it up.

Anyway, this code searches through all the Constraints in the scene, and finds any constraint that is connected to the current piece.

For each Constraint it finds, it keeps track of the piece of geometry that this one is connected to, and puts it into a “connected pieces” array.

The “connected pieces” array is stored on the geometry as an attribute. Each frame the sim runs, you have access to the previous attribute values in this Geometry Wrangle.
If a piece was connected last frame, but not this frame, I use the mass of the no-longer-connected piece to add a "DustAmount" to our current piece.

Each frame I transfer a bit of the DustAmount (if there is any) to "DustLiberated", based on the velocity of the piece. This "DustLiberated" is what I'm using to create the smoke density.

Phew! So not exactly neat code, sorry about that, but hopefully that makes sense.

Remove Broken solver

CollapseSim_RemoveBrokenNetwork

Nothing very exciting here, but each frame I have an expanding sphere that deletes constraint primitives.
It leaves the points alone, because I still need to look up the points for constraints that have been broken, so keeping the points makes life easier 🙂

After a bunch of frames, it looks like a Pac-Man cylinder. I think that warrants a screenshot:

Constraints

DeletePrimitivesBasedOnPoints

This is another attribute wrangle, which checks to see if the constraint is marked for delete, or if either point in the constraint is marked for delete (by the big sphere of death).
If any of that is true, the whole primitive is marked with the "ToDelete" attribute.


int Point1 = primpoints(0, @primnum)[0];
int Point2 = primpoints(0, @primnum)[1];

i@Point1Delete = point(0, "StuffToDelete", Point1);
i@Point2Delete = point(0, "StuffToDelete", Point2);

i@ToDelete = (i@ToDelete || i@Point1Delete || i@Point2Delete);

Smoke Source

SmokeSourceNetwork

This network imports the results of the Collapse sim, so that it can generate the fields that I need to pass to the Smoke Simulation.
I mentioned that I’m using heat, density and velocity, but I’m actually just using the density as heat. That makes no sense, but I didn’t bother coming up with a better plan 🙂

Anyway, the Density is generated just from chunks of geo with “liberated dust” amounts.
The Velocity is generated from all geometry pieces:

SmokeSourceParts

The Velocity field is kinda cute.

SmokeVelocityPreview

Network-wise, there's not a lot of fancy stuff here. A little bit of hackery to avoid errors on the first frame, because the DustLiberated attribute doesn't exist at that time (hence the switch node, which just uses a condition of "if we are on the first frame do X", where X is "ignore all the geo").

Probably worth noting that for the density, I’m using points scattered on the surface of the geometry, but I’m deleting the exterior faces, because they are never connected to anything 🙂

Smoke Sim

SmokeSimNetwork

Nothing very exciting here either, pretty much a standard pyro shelf setup with a wind node thrown in.
I also added a switch node so I could quickly change between a few fluid grid setups for quick previews.

Well that was fun!

So that’s it! Sorry it was a bit of a wall of text.

This was a fun exercise; although it took me a long time to sort this all out, it really helped me learn more about pyro sims, and Houdini in general.

Aside from making an actual scene to destroy, creating particles for the dust, and tweaking the fluid sim settings to make it better, there’s a few things I thought of half way through this that I’d like to try:

  • Use surface area instead of mass to drive the amount of dust.
  • Combined with the above, instead of generating the dust all over the piece of geometry, I could convert the two pieces of geo to volumes, intersect those volumes and generate the dust only on the intersecting places.
  • The turbulence looks horrible. Yuck. It looks like the smoke is wriggling about in jelly (or jello for those living in America).
  • I think I could probably move the dust calculations into SmokeSource. Currently, if I want to tweak dust amounts, I need to re-sim just about everything, which is annoying.
  • Non-linear reduction of the dust amount might be nice; sometimes the dust cutoff is a bit sudden.

So… Are you still doing Unreal and Half Life inspired stuff, or did you just get bored and wander off?…

Yeah. Well.
So I was intending to use Houdini to do a bunch of stuff for that, but we’ll see 🙂

The first thing I started trying out (when I had no idea what I was doing) was smashing up my chamber:

Orange is the new “MetalWearDirtDentMaterialInstance_01_a”

Naming conventions, ha!
That’s the beauty of home projects, I can get away with it…

Annnnyway, just a quick update this time.
I finally got around to unwrapping the floor panels and rails in the scene. In doing so, I think I'll need to do a fair bit more work on where the panels join the rails (rubber strips or something, maybe). Also, the outer panels are way too big, coming in at about 5 and a half Freemans, so I'll need to break them up / re-design them.

Still, it’s given me the fun opportunity to take my dirt metal material, and re-use it on a few new material, so I now have some shiny rails and floor panels:

UE4 orange metal panels

UE4 orange metal panels

Still plugging away

Still working my way through learning more about Modo. I've been having a lot of fun with duplicating along curves, beziers, etc, at work. I still really miss having a Stack like in Max, though, and of course nothing really compares to Max's path deforming, lofting, etc. Well, nothing I've found yet, anyway; there's plenty of software out there 🙂

I’ve started trying to teach myself 3d sculpting, too, although I think that will be a long long term goal. Picked up a great sculpting book at the Weta workshop a while back: “Mastering Portraiture: Advanced Analyses of the Face Sculpted in Clay”. It is quite good, very in depth.

If I do anything that doesn’t look like a lumpy ugly mess, I’ll post it up 🙂