When thinking about how to get these results with conventional rasterization, it dawned on me: photon mapping, compared to actual ray tracing, consumes so little CPU power that even with many thousands of rays it can be done in real time, even without GPU acceleration.
In this demo, 1000 light rays are cast and bounced in real time, rather inefficiently, then blended to look like there are more. Click the third icon to see photon mapping only; imagine merging raster with photon instead of ray with photon: you'd get the same results but at 300 frames per second with software photon mapping, or 3000 fps with GPU acceleration.
The idea: no more CaLight (as it currently is), no more lightmaps, no more long compile times waiting for things to render; real-time lighting with results superior to CaLight and any other game engine, even comparable to "low-quality" ray tracing.
CaLight could be redubbed CaRay, because its new role would be a form of photon mapping.
Graphics would be a harmonization of raster and ray, unlike everyone else out there, who strictly stays on their own side of the fence.
Every light has 6 variables:
R, G, B - Integers for colour and brightness.
Rays - Number of rays the light casts; likely with a dropdown list of standard quantities.
Range - In Cafu units; overrides RGB intensity to keep the light from casting very far and eating a lot of power.
Bounce - Number of times rays can bounce, 0-4, default 1.
There would be 3 types of light:
Panel - Parallel ray-casting sheet; has an extra diffuse value so ray directions are slightly randomized. Can be used for sunlight.
Point - Standard omnidirectional light.
Cone - Spotlight, has angle values.
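To make the variables and types above concrete, here is a minimal sketch of what such a light description might look like as a struct. All type and field names are purely illustrative, not actual Cafu engine code:

```cpp
// Hypothetical per-light data; names are illustrative only.
enum class LightType { Panel, Point, Cone };

struct LightDef
{
    LightType Type;
    float     R, G, B;     // colour and brightness
    int       Rays;        // number of rays cast, e.g. 256, 1024, 4096
    float     Range;       // maximum reach in Cafu units
    int       Bounces;     // 0..4, default 1
    float     Diffuse;     // Panel only: randomization of ray directions
    float     Angle;       // Cone only: spotlight cone angle in degrees
};
```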
- A light which is not in motion does not consume any processing power.
- A light can change its colour and intensity with minimal performance hit, since rays don't need to be recast; only the point values need to be changed.
- When an object being lit by rays is in motion, its bounding box can be used to predict which rays need to be recalculated, dramatically increasing efficiency.
- Even if every light is in motion and the rays have to be recalculated every frame, the amount of computing power available in even low-end systems is great enough to have thousands of rays active.
- The slow movement of the sun allows its ray calculations to be done at a lower priority.
- LoD is easily implemented with rays: the further away you are, the fewer rays a light source has.
- Full HDR is more practical, HDR lightmaps have a massive footprint.
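The bounding-box trick in the list above can be sketched with a standard ray/AABB slab test: a cached ray only needs to be recast if it actually intersects the moved object's box. A generic sketch, not Cafu code:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

// Slab test: does a ray intersect an axis-aligned bounding box?
// Used here to decide whether a cached ray must be recast when the
// object owning the box has moved.
bool RayHitsAABB(const Vec3& orig, const Vec3& dir,   // dir need not be normalized
                 const Vec3& boxMin, const Vec3& boxMax)
{
    float tmin = 0.0f, tmax = 1e30f;
    const float o[3]  = { orig.x, orig.y, orig.z };
    const float d[3]  = { dir.x,  dir.y,  dir.z  };
    const float lo[3] = { boxMin.x, boxMin.y, boxMin.z };
    const float hi[3] = { boxMax.x, boxMax.y, boxMax.z };

    for (int i = 0; i < 3; i++)
    {
        const float inv = 1.0f / d[i];        // IEEE infinity handles d[i] == 0
        float t0 = (lo[i] - o[i]) * inv;
        float t1 = (hi[i] - o[i]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmin > tmax) return false;        // slab intervals don't overlap: miss
    }
    return true;
}
```

Only rays whose test returns true need to be retraced; all others keep their cached result.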
Methodology idea - Theoretically, photon mapping could be used purely for bounce lighting, saving the computational power of the initial rays and focusing on global illumination; however, you'd then lose the ability to have smooth shadows.
- To save power, photon mapping could be configured to ignore flagged models: things which move a lot or constantly, or are highly detailed. Those would still be rendered in steps 1 and 3, but ignored by the photon casting (2) and illuminated by simply sampling local photons.
- To save power, the photon mapping could be run on simpler meshes, e.g. the second-highest LoD mesh, although the photon points would have to know where to parent to the more detailed meshes for a quality render (blending of 1 & 2).
Thank you very much for that link!
HWGuy wrote: In this demo, 1000 light rays are cast and bounced in real time, rather inefficiently, then blended to look like there are more. Click the third icon to see photon mapping only; imagine merging raster with photon instead of ray with photon: you'd get the same results but at 300 frames per second with software photon mapping, or 3000 fps with GPU acceleration.
I certainly made sure to bookmark it, and - can you believe it? - I even own the book by Henrik Wann Jensen.
Some points come to mind:
CaLight is missing a feature to provide lighting information for entities that move through the (lightmapped) world. That is, at this time, when a player moves through the world, we try to find a lightmap element in the world somewhere "near" the player, and use its color as the ambient color for the whole player entity.
That's incredibly clumsy, and often leads to less than optimal results.
It seems that I never got around to digging into and fully understanding photon mapping -- last time I started, I left again in the belief that it is suitable only for global illumination in static images, i.e. as an extension and improvement to conventional ray-tracing.
I'll make sure to (re-)read the book, and study the source code from the above website!
Still not sure if it will be possible to use it for real-time lighting (that would be ideal), or (less ideal, but still a huge improvement) to put CaLight on steroids and have it compute the static lightmaps in seconds or minutes rather than in hours and days.
There are existing libraries that (seem to) implement real-time global illumination, see e.g.
I don't know if they use or don't use Photon Mapping though (possibly a variant of it).
The lighting in that demo doesn't look very impressive: just an amalgam of stock features, not very soft, and the smoothed shadows have artifacts.
Carsten wrote: There are existing libraries that (seem to) implement real-time global illumination, see e.g.
I don't know if they use or don't use Photon Mapping though (possibly a variant of it).
Hard to tell what GI method is used, because it's proprietary; I can't even find a paper on their method. It's definitely the best part of the thing, though. If I had to guess, it's sampling the things being lit and then lighting the scene with a matrix of non-shadow-casting lights of limited range, or the light being cast gives off some rays which then spawn non-shadow-casting lights.
Photon mapping provides a GI source by sampling the points, or does it directly; it looks more natural, although messy with low ray counts.
While it seems easy to create photon maps, and the way various lighting effects are modeled (in combination with the Russian Roulette technique) is certainly very appealing, note how all the nice demo images that you see with Photon Mapping are the result of using the photon map in combination with a ray tracer:
The ray tracer is used to model specular reflections, for caustics (using a second, separate photon map), for the direct lighting, and even for the final gathering step of the indirect illumination.
(All this is also quite well outlined in the Advanced Global Illumination book.)
I don't quite see how we should be able to implement this in real-time.
Real-time ray-tracing is an active topic of research, but it is not quite mainstream yet (maybe in the future, but not today, a serious alternative to our current renderers).
Alternatively, as a compromise solution, it would certainly be possible to use Photon Mapping to derive classic lightmaps from the resulting photon map. (This is what I had in mind also in my prior post above.)
However, compared with the existing Radiosity implementation in CaLight, this would probably not increase the image quality at the same compile time, or decrease the compile time at comparable image quality.
Btw., the state of the art of real-time lighting seems to be Enlighten, see http://www.geomerics.com for details.
Unless someone has an idea how Enlighten works internally, a more self-suggesting course of action for us would be to introduce massively parallel computing to CaLight.
The algorithms in CaLight are especially suitable for parallelization, and it would be (either for myself or anyone else who is interested) a great and interesting task to implement.
In fact they do tell how they're doing it, see
http://www.geomerics.com/downloads/radi ... ecture.pdf
They're just using a fixed matrix of points (backwards photon mapping, with regularly distributed points) for sampling, and casting with a series of lights ("spherical harmonic probes"); the document says they use vis-leafing to accelerate things.
Its results are sent to a lightmap in real time and merged with the direct lighting.
Since they're using fixed points, less computational power is eaten up by dynamic rays... although dynamic rays only consume more power when in motion.
It's a simple scheme: points in direct light emit, and points not in direct light check whether they are within line-of-sight of the ones which do... well, as far as I can tell from the simple document.
EDIT: Ah, you found another.
Point density could be very LoD- and graphics-settings-friendly, since it could be dynamic.
Edit: Hmm, the implementation I had in mind to save power doesn't look as good as plain photon mapping.
Perhaps it's better to stick to regular photon mapping and make it as efficient as possible; with octree picking, the number of checks a ray has to do would be dramatically reduced, and rays could also be calculated in groups in unison with the octree system.
I know several tricks of the trade that I don't see online, but perhaps you're already using/know a faster one.
I know calculating the exact location where a hit occurs is a bit complicated, but detection is a large part of the battle; if the scene is chopped into an octree and poly detection is very simple... that accounts for a monstrous amount of power saved in ray tracing.
See the command-line options of CaLight and/or the source code for details.
Of course it is possible to further improve CaLight in many ways: algorithmically, regarding performance profiling, and all that. But to actually change something, we need thoroughly founded and concrete ideas or papers or code or anything else we can create a program from.
When tracing rays, do you hit-check first and then find the point of intersection, or do you run a line-triangle intersection algorithm right off the bat every time?
Do you use leafing to check whether a ray could possibly even hit the polygons?
Do you check normals to see whether the face is pointed towards the ray source?
With the world chopped into cubes to batch up polygons, you can cheaply determine which group of polygons a ray passes through, and with cheap ray-hit detection ray casting becomes really fast, since you can then use a simpler equation to determine where the ray intersects. Very parallel-friendly, with no power wasted on misses.
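The questions above describe the usual fast path: reject cheaply first, and compute the exact intersection only on a confirmed hit. A standard way to do both in one routine is the Möller-Trumbore ray/triangle test, sketched here generically (this is not Cafu's actual code). Note how a back-facing or parallel triangle is rejected by the sign of the determinant before any further work is done, and the hit distance is computed last:

```cpp
struct V3 { float x, y, z; };

static V3    Sub  (const V3& a, const V3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static V3    Cross(const V3& a, const V3& b) { return { a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x }; }
static float Dot  (const V3& a, const V3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Möller-Trumbore ray/triangle test. Returns true on a front-face hit and
// writes the distance along the ray into t; back faces and misses are
// rejected early, so the exact intersection is only computed on a hit.
bool RayTriangle(const V3& orig, const V3& dir,
                 const V3& v0, const V3& v1, const V3& v2, float& t)
{
    const V3 e1 = Sub(v1, v0);
    const V3 e2 = Sub(v2, v0);
    const V3 p  = Cross(dir, e2);
    const float det = Dot(e1, p);

    if (det < 1e-6f) return false;        // back-facing or parallel: reject early

    const V3 s = Sub(orig, v0);
    const float u = Dot(s, p);
    if (u < 0.0f || u > det) return false;

    const V3 q = Cross(s, e1);
    const float v = Dot(dir, q);
    if (v < 0.0f || u + v > det) return false;

    t = Dot(e2, q) / det;                 // hit distance, computed last
    return t >= 0.0f;
}
```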
CaLight takes 5 hours to compile a map that some engines light in real time with results just as good; at 24 fps that's 432,000 times faster (an exaggeration, since a lot isn't done frame-by-frame)... simply making CaLight multithreaded and tweaking it isn't going to improve it by a large enough factor to make it competitive.
- Dump tone mapping in CaLight, leave that to the renderer for HDR.
- All geometry has blank lightmaps stuck onto it when loaded; CaLight can be used to create these.
- Have the lightmap resolution based on an area/detail factor based on user graphics settings.
- Generate the lightmap details in realtime, renderer does direct lighting, CaLight does photon mapping through software and/or GPU acceleration.
- The renderer then blends its own results (direct) with CaLight's lightmap.
To make ray casting faster, you reduce the number of polygons being sampled. There are a bunch of ways of doing this, including the existing visibility system (if it doesn't group things in weird shapes or with large numbers of polys per chunk), although with a large and dynamic world a static vis system isn't practical. A super-simple cube-chunk world with occlusion planes could get the job done; to further speed things up, do a simple collision check before determining where the hit took place, then blend it into the map.
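The cube-chunk idea can be sketched with a uniform-grid traversal in the style of Amanatides and Woo: the ray visits only the cells it actually passes through, so only those cells' polygon batches need to be tested. A generic sketch, assuming unit-sized cells with the grid corner at the origin:

```cpp
#include <array>
#include <cmath>
#include <vector>

// Walk a ray through a uniform cube grid, appending every visited cell
// index to 'out'. Only the polygons batched in these cells would then be
// tested for intersection. Assumes the ray origin lies inside the grid.
void WalkGrid(float ox, float oy, float oz,    // ray origin
              float dx, float dy, float dz,    // ray direction
              int nx, int ny, int nz,          // grid dimensions
              std::vector<std::array<int,3>>& out)
{
    int cx = (int)std::floor(ox), cy = (int)std::floor(oy), cz = (int)std::floor(oz);
    const int sx = dx > 0 ? 1 : -1, sy = dy > 0 ? 1 : -1, sz = dz > 0 ? 1 : -1;

    // Ray parameter of the next cell boundary on each axis, and the
    // parameter step between successive boundaries.
    const float inf = 1e30f;
    float tx = dx != 0 ? ((cx + (sx > 0)) - ox) / dx : inf;
    float ty = dy != 0 ? ((cy + (sy > 0)) - oy) / dy : inf;
    float tz = dz != 0 ? ((cz + (sz > 0)) - oz) / dz : inf;
    const float dtx = dx != 0 ? sx / dx : inf;
    const float dty = dy != 0 ? sy / dy : inf;
    const float dtz = dz != 0 ? sz / dz : inf;

    while (cx >= 0 && cx < nx && cy >= 0 && cy < ny && cz >= 0 && cz < nz)
    {
        out.push_back({cx, cy, cz});
        // Step to whichever cell boundary the ray crosses first.
        if (tx <= ty && tx <= tz) { cx += sx; tx += dtx; }
        else if (ty <= tz)        { cy += sy; ty += dty; }
        else                      { cz += sz; tz += dtz; }
    }
}
```

In a real build the grid would store per-cell polygon lists, and the loop would stop at the first confirmed hit.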
http://blog.wolfire.com/2011/03/GDC-ses ... -Radiosity
Note how they cleverly combine some cool ideas:
- Render direct lighting directly, e.g. using shadow maps or stencil shadows.
In lightmaps, only store indirect lighting.
This way, it is possible to get away with very low-res lightmaps.
Direct lighting remains "dynamic" and "sharp" (not blurred from lightmaps).
- Use "directional irradiance" to have specular effect with lightmaps -- hey, we already have that in CaLight, too!
- (If I understood correctly) For each lightmap element, precompute a list of other lightmap elements that are affected (and how much: the weight) by incoming light to this lightmap element. This makes propagating (bouncing) light really easy!
(It may require some tricks to keep the lists small enough.)
- Distribute bounce passes over frames.
- Probably more that in my quick glance I've overlooked.
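The precomputed-influence idea could look roughly like this: each lightmap element carries a short list of (target, weight) pairs, and one bounce pass is then just a weighted scatter of the current radiance. All names are illustrative; real elements would store RGB rather than a single float:

```cpp
#include <cstddef>
#include <vector>

// Precomputed link: this element sends 'weight' of its radiance to 'target'.
struct Influence { int target; float weight; };

// One bounce pass: scatter each element's current radiance to the elements
// it is known (from precomputation) to illuminate. Repeated passes, possibly
// distributed over frames, accumulate multiple bounces.
void BouncePass(const std::vector<float>& radiance,
                const std::vector<std::vector<Influence>>& links,
                std::vector<float>& indirect)
{
    for (std::size_t i = 0; i < radiance.size(); i++)
        for (const Influence& inf : links[i])
            indirect[inf.target] += radiance[i] * inf.weight;
}
```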
Here are a few links:
http://www.gamedev.net/topic/546179-scr ... radiosity/
http://www.gamedev.net/topic/517130-my- ... some-help/
Thanks for the links!
I'm a bit of a skeptic regarding screen-space methods, especially when it seems like a problem that belongs in world space is "forced" into screen space. And at a first glance at the Ogre and GameDev links, it indeed seems like they're just cleverly blurring things, missing e.g. light sources and reflections from surfaces that are backfacing or invisible to the viewer.
But alas, I've not yet read the paper by Ritschel et al., and if I can learn something new and change my mind, all the better. (I wrote my diploma thesis at the chair of Prof. Seidel, btw.)
Although, being so simple, I wouldn't scratch it out just yet, because it could serve as a fallback method.
But what I really like is the idea to use lightmaps only for indirect light (a purely optional feature, as we can still use it for directional light as well, as-is now), and, independently of that, "replacing" them with a spatial grid of light probes.
The latter would be a regular 3D grid of points (stored e.g. as a 3D texture), where each point stores the direction and amount of light "passing through" it. The information at each point might be expressed as a spherical (harmonics) function, or in a simplified version just as a vector, expressing the "average" or the "main" component.
This data would then be used at render time in a pixel shader to add indirect light to all objects in the world, especially moving ones such as monsters and players. (This is what our current implementation is really not good at and what would be elegantly solved by this system.)
Even better, we could also use the same approach with static objects, such as small map details. Such small map detail objects can work with lightmaps as well (as is now), but using the light probes from the 3D grid would make things simpler and more uniform.
Finally, it may turn out that this approach is feasible even with large, flat walls, eliminating the need for classic lightmaps entirely.
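Sampling such a probe grid at render time is essentially a trilinear interpolation of the eight surrounding probes, exactly what a 3D texture lookup does in hardware. A sketch of the simplified variant, with one scalar per probe for brevity (a real probe would store a direction/colour vector or spherical-harmonics coefficients):

```cpp
#include <vector>

// Regular 3D grid of light probes with trilinear sampling.
// One float per probe here; illustrative only.
struct ProbeGrid
{
    int nx, ny, nz;
    std::vector<float> data;   // nx*ny*nz probe values, x fastest

    float At(int x, int y, int z) const { return data[(z*ny + y)*nx + x]; }

    // Trilinear interpolation of the 8 probes around (px, py, pz).
    // Assumes 0 <= p < n-1 on each axis (no boundary clamping).
    float Sample(float px, float py, float pz) const
    {
        const int x = (int)px, y = (int)py, z = (int)pz;
        const float fx = px - x, fy = py - y, fz = pz - z;

        auto lerp = [](float a, float b, float t) { return a + (b - a)*t; };

        const float c00 = lerp(At(x, y,   z  ), At(x+1, y,   z  ), fx);
        const float c10 = lerp(At(x, y+1, z  ), At(x+1, y+1, z  ), fx);
        const float c01 = lerp(At(x, y,   z+1), At(x+1, y,   z+1), fx);
        const float c11 = lerp(At(x, y+1, z+1), At(x+1, y+1, z+1), fx);

        return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz);
    }
};
```

A pixel shader would do the same lookup per fragment, which is why storing the grid as a 3D texture makes the interpolation essentially free.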
All this happens to be even more attractive because speed vs. quality would be very scalable, and especially because everything is entirely optional: we can easily implement it alongside the existing technology without being forced to remove one of the established and proven ones in its favor.