Physically Based Rendering, and some more sword

Just a quick stream of consciousness type post today, after having a rather great thanksgiving long weekend up on Amherst island with good friends πŸ™‚
So, prepare for a giant wall of text!

Physically Based Rendering

There’s a lot of really interesting work going on in the last few years with regards to Physically Based Rendering (and you’ll hear about it *everywhere* in the games industry at the moment).

It’s just about my least favourite term for what amounts to energy conservation and better BRDF models; it’s up there with “biased” and “unbiased” rendering. But whatever, I won’t go into my grumbles 🙂

Here’s some smart people talking more about it:

http://graphics.pixar.com/library/PhysicallyBasedLighting/paper.pdf

Anyway, with all this chatter, it reminded me of a really great bit of research I was shown years ago.
The paper was sent my way by a good buddy Mark Flanagan, but didn’t grab my attention the way it should have.

Lighting in RGB, and why it’s lame

http://www.fourmilab.ch/documents/specrend/

^ That’s not the paper I saw, way back when, but it’s the same idea and well presented.
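What that page does, roughly, is integrate a spectrum against the CIE colour-matching functions to get XYZ, then matrix that into RGB. Here’s a minimal Python sketch of the pipeline shape; the single-Gaussian matching functions are crude stand-ins I made up for illustration (the real CIE 1931 tables are on that page), though the XYZ-to-linear-sRGB matrix is the standard one:

```python
import numpy as np

wl = np.arange(380, 781, 5.0)  # sample wavelengths in nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Crude stand-ins for the CIE 1931 x-bar, y-bar, z-bar curves:
xbar = gauss(600, 40)
ybar = gauss(550, 45)
zbar = gauss(450, 30)

def spectrum_to_rgb(spd):
    # Integrate the spectral power distribution against each curve -> XYZ.
    X, Y, Z = (spd * xbar).sum(), (spd * ybar).sum(), (spd * zbar).sum()
    # Standard XYZ -> linear sRGB matrix.
    M = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    rgb = M @ np.array([X, Y, Z])
    # Normalise to the brightest channel and clip out-of-gamut negatives.
    return np.clip(rgb / max(rgb.max(), 1e-9), 0.0, 1.0)

# A narrow-band deep-red emitter (630-700 nm) comes out red-dominant:
red_spd = ((wl >= 630) & (wl <= 700)).astype(float)
print(spectrum_to_rgb(red_spd))
```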

The gist of the paper was that our underlying model of light interaction is very inaccurate, because we essentially collapse a full spectrum of light colour into three broad channels: red, green and blue.

Imagine taking an orange object and a purple object into a room that is lit with purely red light.
By orange and purple, I mean objects that are reflecting very specific wavelengths of light (let’s say 390-460 nm for the purple object, and 580-620 nm for the orange object). The red light is emitting at 630-700 nm.

Visible light spectrum

I’m deliberately picking values that don’t overlap in the visible light spectrum, because the result should be that the two objects go black. Both objects would absorb the red light, because it is not one of the wavelengths that they reflect.

(There’s a similar experiment at the top of this paper, using cellophane).

Results in CG

Now go try that in your physically accurate renderer!

Since the lighting calculations are happening in RGB space, and “orange” and “purple” are a combination of red and green, and red and blue (respectively), both materials end up with a non-zero red component, so your objects come out brightly lit, which is completely incorrect.

Ok, sure, this is a fairly contrived example, but it illustrates the point.
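To make the failure concrete, here’s a tiny Python/NumPy sketch of both shading models side by side; the 10 nm band layout and helper names are my own, not from any particular renderer:

```python
import numpy as np

BANDS = np.arange(380, 701, 10)  # band start wavelengths in nm

def box_spectrum(lo_nm, hi_nm):
    """1.0 inside [lo_nm, hi_nm], 0.0 elsewhere."""
    return ((BANDS >= lo_nm) & (BANDS <= hi_nm)).astype(float)

purple_refl = box_spectrum(390, 460)   # reflects only violet/blue
orange_refl = box_spectrum(580, 620)   # reflects only orange
red_light   = box_spectrum(630, 700)   # emits only deep red

# Spectral shading: per-band multiply, then sum the reflected energy.
print((purple_refl * red_light).sum())  # 0.0 -> object goes black
print((orange_refl * red_light).sum())  # 0.0 -> object goes black

# The same materials collapsed to RGB, shaded component-wise as usual:
purple_rgb = np.array([0.5, 0.0, 0.5])
orange_rgb = np.array([1.0, 0.5, 0.0])
red_rgb    = np.array([1.0, 0.0, 0.0])
print(purple_rgb * red_rgb)  # non-zero red channel -> incorrectly lit
print(orange_rgb * red_rgb)  # non-zero red channel -> incorrectly lit
```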

Ok, so… That’s nice, you can read people’s websites…

Right πŸ™‚
I intend to make a little photoshop-like program that simulates proper spectrum based lighting, but we’ll see if I get there! There are lots of blogs and code samples around to get me started.

I could author full spectrum data per-pixel for textures, and store a float value per 5-10 nm band, per pixel.
Going from 3 channel RGB up to ~37+ channels seems like a rather large jump in data size, though πŸ™‚
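For scale, a quick back-of-envelope in Python (my own assumed numbers: a 2k texture, 10 nm bands over 380-750 nm, one float32 per band per pixel):

```python
# Storage cost of a full-spectrum texture vs plain 8-bit RGB.
band_width_nm = 10
bands = (750 - 380) // band_width_nm + 1       # 38 bands across the visible range

rgb_bytes      = 2048 * 2048 * 3 * 1           # 8-bit RGB, 2k texture
spectral_bytes = 2048 * 2048 * bands * 4       # float32 per band per pixel

print(bands)                          # 38
print(spectral_bytes / rgb_bytes)     # ~50x the data
```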

This all boils down to a data compression issue, and data compression isn’t my strong area.

Instead, my current plan is to create a limited number of albedo layer masks, and each layer gets its own spectrum.
So I might create an “orange plastic” layer, then create a “blue plastic” layer on top with a mask.
I’d create a per-layer spectrum, rather than per-pixel, and it would be stored as a 2d float texture.
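A minimal sketch of that layering idea in Python, with made-up materials and a toy 2×2 mask standing in for real textures:

```python
import numpy as np

BANDS = np.arange(380, 701, 10)  # 33 bands, 10 nm apart

layers = [
    # (per-layer spectrum, per-pixel mask) -- one spectrum per layer,
    # never per pixel. The material numbers here are invented.
    (np.where((BANDS >= 580) & (BANDS <= 620), 0.8, 0.02),  # "orange plastic"
     np.ones((2, 2))),                                      # base layer, full mask
    (np.where((BANDS >= 440) & (BANDS <= 490), 0.7, 0.02),  # "blue plastic"
     np.array([[1.0, 0.0], [0.0, 0.0]])),                   # masked on top
]

# Composite: each layer replaces what's below it where its mask is set.
albedo = np.zeros((2, 2, len(BANDS)))
for spectrum, mask in layers:
    albedo = albedo * (1 - mask[..., None]) + spectrum * mask[..., None]

print(albedo.shape)  # (2, 2, 33): full spectrum per pixel, authored per layer
```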

Hopefully, along with normals and roughness maps, the lack of variation in reflectance wouldn’t be a huge deal.
I’ve seen some pretty flat albedo textures, so fingers crossed πŸ™‚

Is this really practical?

Well, emission spectra for common light sources are pretty well known and documented.
Here’s a page with some spectra measured from a custom built spectrometer:

http://jandmworks.com/light.html

Materials are a little trickier, because you’d need to measure the reflectance as a full spectrum; you can’t just go out and photograph stuff with a regular camera.
What you’d want is a full spectroscopy camera, something along the lines of this:

http://www.princetoninstruments.com/products/imcam/proem/default.aspx

I’m sure it’s not cheap πŸ™‚

After that, you still have to deal with the same sorts of issues that are being encountered by studios who are making sure that all textures are photo-sourced, calibrated, etc.
Some studios are already using scanners to collect surface detail for materials, for example, so maybe that would be another good way to go.

If photorealism is the goal, it should be pretty difficult to have a serious discussion about it without considering how limited the RGB model is for dealing with lighting interaction.

What next

Whatever I get done, I’ll post it here.
I’m not sure how far I want to go with this right now, especially since I’m having a lot of fun making 3d stuff πŸ™‚

Uh-huh. Have you fixed your sword yet?

No, but here’s a render with some better materials on it πŸ™‚

I plan to wrap something around the handle, because I’m still not happy with it, but I’m getting pretty happy with the rest of it now!

Sword
