Aug 31, 2010

Eye rendering - first try

I was on holiday near Deauville (France) last week, which is why I have been quiet.

I have started working on an eye rendering method inspired by the one presented in this paper. As you can see in the screenshots below, I already have some interesting first results. The characteristics of the eye are taken from published biology reports. The iris is rendered using the subsurface texture mapping method, simulating single scattering in a participating medium (in this case, the iris). The light refraction at the cornea is taken into account in a different way than what is done in the eye paper (their refraction function). The sclera (the white part) is currently not shaded.
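To give an idea of the kind of computation involved in the cornea refraction step, here is a small C++ sketch. This is not the paper's refraction function, just a plain Snell refraction of the view ray intersected with a flat iris plane; the vector helpers, the iris depth parameter and the corneal index value are my own assumptions:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3  add(Vec3 a, Vec3 b)  { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3  mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
    static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Snell refraction of a unit incident direction I through a surface with unit
    // normal N (pointing toward the incident side), for eta = n_incident / n_transmitted.
    // Returns false on total internal reflection.
    static bool refractDir(Vec3 I, Vec3 N, float eta, Vec3& T)
    {
        float cosi = -dot(I, N);
        float k    = 1.0f - eta * eta * (1.0f - cosi * cosi);
        if (k < 0.0f) return false;
        T = add(mul(I, eta), mul(N, eta * cosi - std::sqrt(k)));
        return true;
    }

    // Hypothetical usage: bend the view ray at the cornea point P, then intersect
    // the refracted ray with a flat iris plane lying irisDepth below P (eye assumed
    // to look along +Z in its local frame; 1.376 is a commonly quoted corneal index).
    static Vec3 refractedIrisHit(Vec3 P, Vec3 viewDir, Vec3 corneaNormal,
                                 float irisDepth, float eta = 1.0f / 1.376f)
    {
        Vec3 T;
        if (!refractDir(viewDir, corneaNormal, eta, T))
            T = viewDir;            // cannot happen when entering a denser medium
        float t = -irisDepth / T.z; // refracted ray goes inward: T.z < 0
        return add(P, mul(T, t));
    }

The position returned by refractedIrisHit() is what would drive the iris texture lookup, so the visible iris pattern shifts with the view angle.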

I hope you like it!



Blue iris rendering with refraction.
(click on the picture for higher resolution)


Green and Brown eyes.

I set up the scattering/extinction parameters quickly, so you may not find the colors of these eyes really "realistic"! I still have to improve the current algorithm, and I have some interesting ideas for that.

Aug 13, 2010

QuakeCon 2010: Carmack's keynote

Every developer familiar with virtual texturing must have thought: "this is a really scalable technology"! Carmack just proved it with his iPhone 4 Rage demo running at 60 fps!

By the way, I am looking for a full video of the keynote. I, I mean Google, can't find it...

Also, John now has a Twitter account which works like his old .plan files! Back to the good old times?

Aug 12, 2010

BFN: about Crytek's approach

In my case, I have computed the best fit RGB value given a bias vector and, of course, the direction to fit. The BFNs are stored in a cube map.

Crytek's approach stores only the length of the best fitting vector for the requested normal direction. Furthermore, as explained in the notes (slide 94), 8-bit precision is sufficient if the best fit length is stored for the normal divided by its maximum component.

As pointed out by Vince on my first post about BFN, there is some symmetry on each cube face. Indeed, the cube map can be compressed into a single 2D texture (slide 94). This saves video memory at the cost of several ALU operations in the fragment shader (slide 95).
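To make this a bit more concrete, here is a rough C++ sketch of what the length-only encode with the 2D texture could look like, written from my reading of slides 94-95. The texture indexing (two sorted components of the face-projected normal) and the helper names are my guesses, not Crytek's actual shader code:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    struct Vec3 { float x, y, z; };

    // Stub standing in for the precomputed 2D best-fit-length texture. In a real
    // renderer this would be a texture fetch; returning 1.0 degenerates to plain
    // scale & bias, so the sketch stays self-contained.
    static float sampleBestFitLength(float /*u*/, float /*v*/) { return 1.0f; }

    // Encode a unit normal into RGB8 using the "store only the best fit length
    // for the normal divided by its maximum component" idea.
    static void encodeBestFitNormal(Vec3 n, uint8_t out[3])
    {
        Vec3  a    = { std::fabs(n.x), std::fabs(n.y), std::fabs(n.z) };
        float maxC = std::max(a.x, std::max(a.y, a.z));

        // Project the normal onto the cube face: divide by its maximum component.
        Vec3 p = { n.x / maxC, n.y / maxC, n.z / maxC };

        // Fold every face/octant onto one 2D domain using the two non-maximal
        // absolute components, sorted so that u >= v (my guess at the symmetry trick).
        float u, v;
        if      (a.x == maxC) { u = a.y; v = a.z; }
        else if (a.y == maxC) { u = a.x; v = a.z; }
        else                  { u = a.x; v = a.y; }
        u /= maxC; v /= maxC;
        if (u < v) std::swap(u, v);

        // Shorten the face-projected vector to its best fit length, then scale & bias.
        float len = sampleBestFitLength(u, v);
        out[0] = (uint8_t)((p.x * len * 0.5f + 0.5f) * 255.0f + 0.5f);
        out[1] = (uint8_t)((p.y * len * 0.5f + 0.5f) * 255.0f + 0.5f);
        out[2] = (uint8_t)((p.z * len * 0.5f + 0.5f) * 255.0f + 0.5f);
    }

    // Decoding is unchanged: n = normalize(rgb / 255 * 2 - 1).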

When using this final representation as a 2D texture, it is not possible to change the bias vector as I proposed in my previous post. However, the results seem to be good enough with Crytek's approach... (slides 42-43) Is it worth the cost of using a cube map? My next step will be to compare the image quality with and without changing the bias vector in my deferred relief-mapped renderer.

Aug 8, 2010

Best Fit Normals: playing with the bias vector

Last night, I first implemented the Amanatides & Woo ray marching method to compute the best fit normal (BFN) cube map. The gain in performance is, as expected, major: I can now generate a large cube map in a few minutes (instead of hours).
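For reference, here is a compact C++ sketch of this kind of voxel walk: a 3D DDA in the spirit of Amanatides & Woo that visits only the voxels pierced by the ray along the requested direction and keeps the one whose decoded, normalized vector is closest to it. The grid conventions (voxel centers at (i+0.5)/256*2-1, the nudge off the central boundary) are my own choices:

    #include <cmath>
    #include <cstdint>
    #include <limits>

    struct Vec3 { float x, y, z; };

    // Walk the 256^3 RGB8 voxel grid along a unit direction d, starting from the
    // center of the decode space [-1,1]^3. Among the voxels pierced by the ray,
    // keep the one whose decoded vector, once normalized, is best aligned with d.
    static void bestFitAlongRay(Vec3 dv, uint8_t out[3])
    {
        const float inf = std::numeric_limits<float>::infinity();
        float d[3] = { dv.x, dv.y, dv.z };
        float pos[3], tMax[3], tDelta[3];
        int   cell[3], step[3];

        for (int i = 0; i < 3; ++i) {
            pos[i]    = 128.0f + 1e-3f * d[i];        // nudge off the exact boundary
            cell[i]   = (int)std::floor(pos[i]);
            step[i]   = (d[i] >= 0.0f) ? 1 : -1;
            tDelta[i] = (d[i] != 0.0f) ? 1.0f / std::fabs(d[i]) : inf;
            tMax[i]   = (d[i] > 0.0f) ? (cell[i] + 1 - pos[i]) / d[i]
                      : (d[i] < 0.0f) ? (pos[i] - cell[i]) / -d[i]
                      : inf;
        }

        float bestDot = -2.0f;
        while (cell[0] >= 0 && cell[0] < 256 &&
               cell[1] >= 0 && cell[1] < 256 &&
               cell[2] >= 0 && cell[2] < 256) {
            // Decode the voxel center and measure how well it points along d.
            float cx  = (cell[0] + 0.5f) / 128.0f - 1.0f;
            float cy  = (cell[1] + 0.5f) / 128.0f - 1.0f;
            float cz  = (cell[2] + 0.5f) / 128.0f - 1.0f;
            float len = std::sqrt(cx * cx + cy * cy + cz * cz);
            if (len > 0.0f) {
                float dp = (cx * d[0] + cy * d[1] + cz * d[2]) / len;
                if (dp > bestDot) {
                    bestDot = dp;
                    out[0] = (uint8_t)cell[0];
                    out[1] = (uint8_t)cell[1];
                    out[2] = (uint8_t)cell[2];
                }
            }
            // Step into the next voxel crossed by the ray.
            int a = (tMax[0] < tMax[1]) ? ((tMax[0] < tMax[2]) ? 0 : 2)
                                        : ((tMax[1] < tMax[2]) ? 1 : 2);
            cell[a] += step[a];
            tMax[a] += tDelta[a];
        }
    }

At most a few hundred voxels are visited per direction instead of the full 256^3 volume, which is where the minutes-versus-hours difference comes from.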

Second, I have been thinking about changing the bias vector in order to get precision where it matters, similarly to the quantization approach presented by Crytek in this old presentation (slide 13). The goal is to get more precision (more directions) around the usual normal direction: along the Z axis. I found that it is hard to choose a good bias value. In the case where negative Z values are ignored (some game engines do this, but I can't remember which), choosing a bias of 0.0 would be just perfect. Otherwise, I suppose one could choose the bias value visually, or from image difference errors measured from key viewpoints. Also, encoding object normal maps using such a bias should give higher quality, because normal maps rarely contain normals with a negative Z.
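To be explicit about what I mean by the bias vector, here is a tiny sketch of the decode I have in mind. With the usual bias (0.5, 0.5, 0.5) this is the standard unbias-and-normalize; shrinking the Z bias gives more of the 8-bit range to positive Z, and a Z bias of 0.0 cannot represent negative Z at all. Reading the "bias factor of N" used in the captions below as a Z bias of 0.5 / N is my own interpretation:

    #include <cmath>
    #include <cstdint>

    struct Vec3 { float x, y, z; };

    // Decode an RGB8 best-fit normal stored against a per-axis bias vector.
    // bias = (0.5, 0.5, 0.5) is the usual case; moving bias.z towards 0 devotes
    // more of the 8-bit range to normals with a positive Z (bias.z == 0 cannot
    // represent negative Z at all). This convention is my own.
    static Vec3 decodeBiasedNormal(uint8_t r, uint8_t g, uint8_t b,
                                   Vec3 bias = {0.5f, 0.5f, 0.5f})
    {
        Vec3 v = { r / 255.0f - bias.x,
                   g / 255.0f - bias.y,
                   b / 255.0f - bias.z };
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return { v.x / len, v.y / len, v.z / len };
    }

    // The best fit cube map itself is generated against the same bias vector:
    // for a given direction, search the RGB8 voxels for the one whose
    // decodeBiasedNormal() result is closest to that direction.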

Enough talking, here are some screenshots:


A view of the BFN cube map face (positive Z)
with a bias factor of 16 on the Z axis.


Bias modification example. Left: BFN with the usual bias,
Right: BFN with a bias of 2 on the Z axis
(poor precision for normals with negative Z).
(32*32 cube map)


Visualization of the reconstruction error (multiplied here by 70)
compared to the true direction.
Front box: BFN with a bias of 16 on the Z axis
(poor precision for normals with negative Z),
Far box: BFN with the usual bias. (256*256 cube map)

Visualization of the reconstruction error on the positive Z face.
Left: BFN with a bias of 16 on the Z axis,
Right: BFN with the usual bias. (256*256 cube map)

As you can see in the last figure, you can get more precision from a simple 256 BFN cube map by simply changing the bias on the Z axis.

Anton's presentation about this is now available on the course website. I will have a deeper look at the details (I implemented this method based only on my memory and ideas since SIGGRAPH). Interestingly, it seems that they do not use a cube map but a 2D texture...

As always, feel free to discuss and to ask me for the BFN cube map textures (at any resolution now :D) if you want to try this in your engine (some developers from video game studios requested these textures last time).

Aug 4, 2010

Crytek's Best Fit Normals

Among the SIGGRAPH presentations, there was one about Crytek's rendering methods. One interesting technique quickly presented was best fit normals (BFN). This method aims to improve normal precision when normals are stored in the RGB8 format.

When using traditional scaled-and-biased normals in the RGB8 format, accuracy errors can occur because of the low precision of RGB8 relative to the scale & bias encoding. For example, a 256*256*6 cube map can represent 393216 directions. However, due to the low precision of the RGB8 format, only 219890 (55.9%) of these directions are effectively represented (many similar directions being mapped to the same compressed value). In this case, I have computed that only 1.31% of the full 256*256*256 voxel possibilities of the RGB8 values are used. Also, I have computed that each voxel effectively used represents on average 1.788 directions.
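For the curious, this kind of count is easy to reproduce. Here is a small, self-contained sketch; the cube face parametrization is my own choice, so the exact numbers will depend slightly on how texel centers are mapped to directions:

    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <unordered_set>

    // Direction through the center of texel (i, j) of cube face 'face'
    // (my own face layout; any consistent parametrization gives similar statistics).
    static void cubeTexelDir(int face, int i, int j, int size, float d[3])
    {
        float u = (i + 0.5f) / size * 2.0f - 1.0f;
        float v = (j + 0.5f) / size * 2.0f - 1.0f;
        switch (face) {
            case 0:  d[0] =  1; d[1] =  v; d[2] = -u; break;   // +X
            case 1:  d[0] = -1; d[1] =  v; d[2] =  u; break;   // -X
            case 2:  d[0] =  u; d[1] =  1; d[2] = -v; break;   // +Y
            case 3:  d[0] =  u; d[1] = -1; d[2] =  v; break;   // -Y
            case 4:  d[0] =  u; d[1] =  v; d[2] =  1; break;   // +Z
            default: d[0] = -u; d[1] =  v; d[2] = -1; break;   // -Z
        }
        float len = std::sqrt(d[0] * d[0] + d[1] * d[1] + d[2] * d[2]);
        d[0] /= len; d[1] /= len; d[2] /= len;
    }

    int main()
    {
        const int size = 256;
        std::unordered_set<uint32_t> used;   // distinct RGB8 values actually produced
        int total = 0;
        for (int face = 0; face < 6; ++face)
            for (int j = 0; j < size; ++j)
                for (int i = 0; i < size; ++i, ++total) {
                    float d[3];
                    cubeTexelDir(face, i, j, size, d);
                    // Classic scale & bias quantization to RGB8.
                    uint32_t r = (uint32_t)((d[0] * 0.5f + 0.5f) * 255.0f + 0.5f);
                    uint32_t g = (uint32_t)((d[1] * 0.5f + 0.5f) * 255.0f + 0.5f);
                    uint32_t b = (uint32_t)((d[2] * 0.5f + 0.5f) * 255.0f + 0.5f);
                    used.insert((r << 16) | (g << 8) | b);
                }
        std::printf("%d texel directions, %zu distinct RGB8 values (%.1f%%)\n",
                    total, used.size(), 100.0 * used.size() / total);
        return 0;
    }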

The idea behind the BFN approach is to search for the voxel that best represents (fits) a given direction: it may be a non-normalized vector. Using this method with a 256*256*6 cube map, I found that 387107 (98.4%) of the directions are effectively represented. Furthermore, in this case, each voxel used represents on average 1.016 directions. Thus, using such a method results in a more accurate reconstruction of normals (see the screenshots). Moreover, compression is a single cube map lookup, and reconstruction is an unbias and normalize.
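Once the cube map is built, the runtime cost really is that small. Here is a small sketch of both operations (sampleBFNCube() is just a placeholder for the actual cube map fetch; it is stubbed with plain scale & bias here so the snippet stays self-contained):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Placeholder for sampling the precomputed BFN cube map in direction n.
    // A real renderer would do a cube map texture fetch here; this stub falls
    // back to plain scale & bias so the sketch compiles on its own.
    static Vec3 sampleBFNCube(Vec3 n)
    {
        return { n.x * 0.5f + 0.5f, n.y * 0.5f + 0.5f, n.z * 0.5f + 0.5f };
    }

    // Compression: a single cube map lookup in the direction of the normal.
    // The fetched value is what gets written to the RGB8 target.
    static Vec3 encodeBFN(Vec3 n) { return sampleBFNCube(n); }

    // Reconstruction: unbias and normalize.
    static Vec3 decodeBFN(Vec3 rgb)
    {
        Vec3 v = { rgb.x * 2.0f - 1.0f, rgb.y * 2.0f - 1.0f, rgb.z * 2.0f - 1.0f };
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return { v.x / len, v.y / len, v.z / len };
    }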



Left: BFN cubemap, Right: scale&bias cubemap


Reconstruction error (absolute value scaled by 70) for BFN (left) and
scale&bias normal (right)

So how do we generate each cube face? Currently, I am using the brute-force method, which is horribly slow: for each direction on the cube face, I go through every voxel of the RGB8 volume to search for the one which matches best. A faster method I plan to implement later is to use the Amanatides & Woo ray marching method to traverse the voxel volume along the ray direction and find the best representative voxel.
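For reference, here is what this brute-force search looks like as a small C++ sketch (with my own decode convention); this is the slow path that the Amanatides & Woo traversal is meant to replace:

    #include <cmath>
    #include <cstdint>

    // Brute-force best fit for one unit direction d: scan every voxel of the
    // 256^3 RGB8 volume, decode it to [-1,1]^3 with the usual scale & bias, and
    // keep the voxel whose normalized decode is closest to d. Roughly 16.7
    // million candidates per cube map texel, hence the hours of computation.
    static void bestFitBruteForce(const float d[3], uint8_t out[3])
    {
        float bestDot = -2.0f;
        for (int r = 0; r < 256; ++r)
            for (int g = 0; g < 256; ++g)
                for (int b = 0; b < 256; ++b) {
                    float x = r / 255.0f * 2.0f - 1.0f;
                    float y = g / 255.0f * 2.0f - 1.0f;
                    float z = b / 255.0f * 2.0f - 1.0f;
                    float len = std::sqrt(x * x + y * y + z * z);
                    if (len == 0.0f) continue;   // defensive; cannot occur here
                    float dp = (x * d[0] + y * d[1] + z * d[2]) / len;
                    if (dp > bestDot) {
                        bestDot = dp;
                        out[0] = (uint8_t)r;
                        out[1] = (uint8_t)g;
                        out[2] = (uint8_t)b;
                    }
                }
    }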

How can this method be used?
  • Better normal map encoding: when computing an object's normal map, instead of converting the floating point normals with the usual scale & bias, do a texture lookup in the BFN cube map texture.
  • Deferred rendering: high quality normal buffer in RGB8! :) It could also be possible to pack a normal in a 32F channel.
  • Any other ideas?
So when I have time (I don't know when, because the end of my PhD is approaching quickly), I plan to implement the Amanatides & Woo method to accelerate the computation. Then, maybe use a better representation than a cube map.

If you have questions or want access to the cubemap textures, send me an email. As always, feel free to discuss here about this method.

Aug 1, 2010

Back from SIGGRAPH 2010

Wow, SIGGRAPH was awesome!

I saw and learned so many things that I can't list them all here. All I can say is that it was nice attending presentations from film studios (Weta, Pixar, ILM, DreamWorks, etc.), game studios (Crytek, DICE, Valve, etc.) as well as from researchers. Now I can put faces to the names I often encounter when reading research papers and presentations.

Here are some links with very interesting courses and presentations about graphics:
  • Dice presentations
  • Advances in real-time 3D rendering course
  • Beyond programmable shading course. (This year's theme was really oriented towards future hardware evolution)
  • New: Physically Based Shading Models course
  • New: Stylized Rendering in Games course
  • New: GI across Industries course