Renderers and Meshes

Get technical support about the C++ source code and about Lua scripts for maps, entities, GUIs, the console, materials, etc. Also covered are the Cafu libraries and APIs, as well as compiling, linking, and the build system.
Carsten
Site Admin
Posts: 2170
Joined: 2004-08-19, 13:46
Location: Germany
Re: Renderers and Meshes

Post by Carsten » 2011-11-13, 16:32

Hmmm, yes. I looked at GLEW and GL3W, but both still seem to have inherent problems:
GL3W can only deal with the core profile functions,
GLEW seems to have bugs/issues (at least as indicated in this thread and this ticket).

GL3W looks attractive because it's so tiny (the Python script), but using GLEW is probably more flexible and universal.
Best regards,
Carsten
thomasfn
Posts: 14
Joined: 2011-11-07, 22:48

Re: Renderers and Meshes

Post by thomasfn » 2011-11-18, 02:25

Sorry I haven't replied in a while, been busy doing various things.

I'm currently reverting my version back to the latest revision on the SVN and starting fresh (more or less; I'm keeping the work I've done on the renderer, though). I've also reverted glext.h and the existing OpenGLEx stuff, and I'm going to try to work from GL3W from now on (or I might change to GLEW, depending on how that goes).

I'll also have a stab at the new mesh code.

Man, this update is taking a while to compile! Did you edit some kind of major header or something? :cheesy:

Re: Renderers and Meshes

Post by Carsten » 2011-11-18, 21:39

thomasfn wrote:Sorry I haven't replied in a while, been busy doing various things.
No problem, I've been busy as well. :-)
I'm currently reverting my version back to the latest revision on the SVN and starting fresh (more or less; I'm keeping the work I've done on the renderer, though). I've also reverted glext.h and the existing OpenGLEx stuff, and I'm going to try to work from GL3W from now on (or I might change to GLEW, depending on how that goes).
If possible, please prefer / try GLEW first.
(The intention is to use GLEW at a later time to replace the old OpenGLEx code as well...)
Man, this update is taking a while to compile! Did you edit some kind of major header or something? :cheesy:
Yes, this change triggered a recompile of everything. Changes like these are generally rare though, but sometimes cannot be avoided. Sorry! ;-)
Best regards,
Carsten

Re: Renderers and Meshes

Post by thomasfn » 2011-11-19, 00:23

Well, I implemented GL3W just before you made that post. But I've made my renderer completely independent of anything inside "Common", so it works entirely off GL3W now. The GL3W stuff also acts independently, so it doesn't affect any other renderers. This is how the renderers should work, in my opinion: they should load their own extensions, and any "helper" libraries such as GLEW should be included as part of the renderer, not the engine (as OpenGLEx and OpenGLState are). Besides, GL3W does expose a function that allows me to load any extension even if it isn't part of the core profile. In any case, I don't think I'll need anything outside the core profile anyway: it contains VBOs and GLSL shaders, which is all that is needed, and texture loading has remained mostly the same throughout the lifetime of OpenGL. So I think I'll stick with GL3W for now.

Here are my current edits, excluding the GL3 renderer itself. I'd compile it as an SVN patch, but there are other little edits here and there I'd rather leave out, and I'm not all that familiar with the SVN patch system anyway. I've tried to stick to your coding style and the style of Cafu as closely as possible, but I'm sure you won't mind tweaking things to your satisfaction :wink:

Changes
MatSys/Renderer.hpp
Addition of the following code to the renderer class:

Code: Select all

virtual MatSys::UnivMeshT* CreateMesh(MatSys::UnivMeshT::WindingT Winding)=0;
virtual void DeleteMesh(MatSys::UnivMeshT* mesh)=0;
Thus, all renderers will need to implement these functions. Since all the current renderers get along fine without implementing their own mesh objects, they can use the following default implementations:

Code: Select all

MatSys::UnivMeshT* Renderer::CreateMesh(MatSys::UnivMeshT::WindingT Winding)
{
    return new MatSys::UnivMeshT(Winding);
}
void Renderer::DeleteMesh(MatSys::UnivMeshT* mesh)
{
    delete mesh;
}
Also, the existing RenderMesh function should be changed to accept UnivMeshT instead of MeshT.

Additions
MatSys/UnivMesh.hpp
http://pastebin.com/Hg85hSGZ

MatSys/UnivMesh.cpp
http://pastebin.com/pY8xjMnN

It might be necessary to expose Vertices as public rather than protected, so the good old immediate-mode renderers can work directly from the Vertices array (as they do now). Or maybe a getter function could be added.
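To illustrate the getter idea, here is a minimal, hypothetical sketch (std::vector stands in for Cafu's ArrayT, and all names here are made up, not the actual Cafu API):

```cpp
#include <vector>

// Hypothetical sketch: instead of making Vertices public, a read-only getter
// keeps the array protected while still letting immediate-mode renderers
// walk the vertex data directly.
struct VertexT { float x, y, z; };

class UnivMeshSketchT
{
    public:

    void AddVertex(const VertexT& v) { m_Vertices.push_back(v); }

    // Read-only access for renderers that iterate over the vertex array.
    const std::vector<VertexT>& GetVertices() const { return m_Vertices; }

    protected:

    std::vector<VertexT> m_Vertices;
};
```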

These incorporate the changes we've discussed, as well as a few additions which I'll explain now.

Basically, a mesh might be given a number of different "channels" of data which differ from other meshes. For example, a certain mesh (say, a player model) might have position data, normal data and UV data. Another mesh (say, the world terrain) might have position data, lightmap UV data and UV data, but no normals. Obviously, it'd be wasteful to blindly commit every channel a mesh might have to GPU memory - the world terrain mesh would end up with an empty normal VBO which never gets used - what a waste! So there needs to be a way to identify, on a per-mesh basis (I know materials have something for this), what data to commit to the GPU. This is where RequireAttribute comes in.

Here, I'll show an example in code. This is what the user code would look like for loading a mesh from file.

Code: Select all

UnivMeshT* mesh = MyRenderer->CreateMesh(UnivMeshT::WindingT::CW);
mesh->RequireAttribute(UnivMeshT::AttributeT::ORIGIN, true);
mesh->RequireAttribute(UnivMeshT::AttributeT::NORMAL, true);
mesh->RequireAttribute(UnivMeshT::AttributeT::TEXCOORD, true);
mesh->RequireAttribute(UnivMeshT::AttributeT::COLOR, false);
mesh->RequireAttribute(UnivMeshT::AttributeT::LIGHTMAPCOORD, false);
mesh->Init();

... loop through each vertex in the file ...
mesh->AddVertex(vertex);
...

mesh->Update();
The RequireAttribute calls with "false" as the second parameter aren't strictly required, since all attributes default to false, but I included them for completeness. This way, the renderer implementations of the mesh know exactly what data to commit to the GPU.

Rendering a mesh would work in the same way as it does currently, at least as far as the user code -> renderer interaction goes. Having the renderer keep a list of meshes and maintain control over them would be most ideal, but as you said, one small step at a time.

I've already written an OpenGL 3 implementation of the mesh class. It's bound to have bugs since it's not tested, but it compiles, and you get the general idea of what's going on.

glMesh.hpp: http://pastebin.com/BLH3C8bm
glMesh.cpp: http://pastebin.com/pGAHnBPd

It's probably not the most elegant solution (especially the const attribute arrays in the cpp, that's a little... dodgy) but it's quite flexible, and probably the fastest solution.

Re: Renderers and Meshes

Post by Carsten » 2011-11-20, 21:11

Hi Thomas,

many thanks for your detailed post! I've looked at your code, and I have a number of questions:

About the renderers being independent:
Well, sure, but they are still allowed to (re-)use anything in Libs/ or ExtLibs/, because the code in there is supposed to be "generic" and is not part of the engine. Said differently, what makes you think that OpenGLEx and OpenGLState are part of the engine? (They aren't.)

Using GL3W for now is ok, but GLEW could also be used for the old OpenGL "below 3.0" renderers. I might soon try to switch some of them to GLEW, or at least test whether that is reasonably possible.

UnivMeshT seems to have no TypeT member? (Points, Lines, LineStrip, LineLoop, Triangles, ...)

Why do we need methods Init() and Update() in the public portion of UnivMeshT?
Can't we just hide them in the implementation, e.g. have the implementation check on first use (in Render()) if the mesh has been inited already, and if not, do so?

Your explanations about channels made me wonder if we can move away from the horrible VertexT struct in the meshes. Instead, we could have one separate ArrayT<...> for each channel. The user would use it like this:

Code: Select all

for all my vertices
{
    // VertexNr is the index number of the current vertex.
    my_mesh->SetOrigin(VertexNr, Origin);
    my_mesh->SetColor(VertexNr, r, g, b);
    ...
}
Here is some pseudo code that outlines the general idea:

Code: Select all

// This is PSEUDO-code: Not tested, not compiled.

namespace MatSys
{
    // A suggestion for a UnivMeshT class design.
    // The key idea is that the user must specify a "proper" mesh:
    // If he wants to use colors, he has to make sure that there are as many colors as there are origins;
    // if he wants to use uv's, he has to make sure that there are as many uv-pairs as there are origins;
    // etc.
    // The implementation derives everything else automatically.
    class UnivMeshT
    {
        public:

        enum TypeT    { Points, Lines, LineStrip, LineLoop, Triangles, TriangleStrip, TriangleFan, Quads, QuadStrip, Polygon };
        enum WindingT { CW, CCW };
        typedef float Vec2T[2];
        typedef float Vec4T[4];


        /// Constructor.
        UnivMeshT(TypeT T, WindingT W=CW);

        void SetOrigin(unsigned int VertexNr, const Vector3fT& pos, float w=1.0f);
        void SetColor(unsigned int VertexNr, const Vector3fT& col);
        // ...


        private:

        const TypeT       m_Type;
        const WindingT    m_Winding;    ///< The orientation (cw or ccw) of front faces.
        bool              m_IsInited;
        bool              m_IsModified;

        ArrayT<Vec4T>     m_Origins;
        ArrayT<Vector3fT> m_Colors;
        ArrayT<Vec2T>     m_TexCoords;
        ArrayT<Vec2T>     m_LM_Coords;
        ArrayT<Vec2T>     m_SHL_Coords;
        ArrayT<Vector3fT> m_Normals;
        ArrayT<Vector3fT> m_Tangents;
        ArrayT<Vector3fT> m_BiNormals;
    };
}


MatSys::UnivMeshT::UnivMeshT(TypeT T, WindingT W)
    : m_Type(T),
      m_Winding(W),
      m_IsInited(false),
      m_IsModified(true)
{
}


void MatSys::UnivMeshT::SetColor(unsigned int VertexNr, const Vector3fT& col)
{
    if (VertexNr >= m_Colors.Size())
        m_Colors.PushBackEmpty(VertexNr+1-m_Colors.Size());

    m_Colors[VertexNr]=col;
    m_IsModified=true;
}


void MatSys::UnivMeshT::Render()
{
    if (!m_IsInited)
    {
        if (m_Colors.Size() > 0 && m_Colors.Size() == m_Origins.Size())
        {
            // This mesh requires the "colors" channel!
        }

        m_IsInited=true;
    }

    // ... do the actual render work
}
Do you think this would work (with the GL3+ renderers)?
This would also allow us to not need the AttributeT enum.

Btw., I'd prefer to work with SVN patches -- at least when many small edits to existing files are concerned.
Creating them is really simple (do you know / can you use TortoiseSVN?), and they really help reduce the confusion of manually typed "in x find y, then change to z" instructions, because it's unambiguously clear to the receiving person how to understand them. They also help to keep related things together, rather than scattering them across random bits of text that are really hard to reproduce locally. :cheesy:
Best regards,
Carsten

Re: Renderers and Meshes

Post by thomasfn » 2011-11-21, 12:36

I know the basics of using TortoiseSVN, like updating, committing, ignoring files, resolving conflicts etc. I'll look into creating a patch next time I have something to submit.

I think I must have forgotten the TypeT member, or deemed it unimportant at the time. It's easy enough to add if needed, but really the only render modes that should be in use are triangles and triangle strips.

Yes, I did think about changing VertexT to some kind of better method, maybe even using templates (so that the user code can decide what type of vertex to use). Having separate arrays for each "channel" of data as you described would also increase performance slightly, as each array can be transmitted straight to the GPU without modifications - whereas currently I have to sort the vertex array into several other arrays to transmit. I was just trying to make as few changes as possible - as you said, enough to support VBOs and such without a huge overhaul. But if you want to make a huge overhaul, that's fine too!
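As a rough sketch of the sorting step mentioned above (the types and names here are made up for illustration, not Cafu code): de-interleaving an array of combined vertices into one flat array per channel, as would be needed before uploading separate per-channel VBOs.

```cpp
#include <vector>

// Hypothetical interleaved vertex: position plus texture coordinates.
struct InterleavedVertexT
{
    float Pos[3];
    float UV[2];
};

// Split the interleaved array into separate flat channel arrays, which could
// then each be uploaded to the GPU as their own buffer without further work.
void SplitChannels(const std::vector<InterleavedVertexT>& Verts,
                   std::vector<float>& Positions,
                   std::vector<float>& UVs)
{
    for (const InterleavedVertexT& v : Verts)
    {
        Positions.insert(Positions.end(), v.Pos, v.Pos + 3);
        UVs.insert(UVs.end(), v.UV, v.UV + 2);
    }
}
```

With separate per-channel arrays in the mesh class itself, this copying pass disappears entirely.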

There still needs to be some way to let the renderer know which channels are in use and which aren't. I suppose you could just make the assumption that if a specific array is empty, that channel isn't included. So if m_Origins has 10 elements and so does m_Colors, but every other array is empty, then only generate VBOs for origins and colours. That would probably work.
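The implicit rule could be stated in code roughly like this (a minimal sketch; the function name is made up): a channel counts as "in use" exactly when its array is non-empty and has one element per origin, so empty arrays simply get no VBO.

```cpp
#include <cstddef>

// Hypothetical helper: decide whether a channel array should get its own VBO.
// A channel is active only if it is non-empty and matches the origin count.
bool IsChannelActive(std::size_t NumOrigins, std::size_t NumChannelElems)
{
    return NumChannelElems > 0 && NumChannelElems == NumOrigins;
}
```

A mismatched, non-empty array (e.g. 5 colors for 10 origins) is treated as inactive here; it could equally be flagged as a user error.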

The Init method was there because the VBOs need to be set up after the attributes are known. Same for the Update method: it needs to be called after the vertices have all been added to the mesh. I guess they could be hidden and the mesh could be checked at render time to see if it needs to be updated or not, but generally it's a good idea to keep init code (setting up VBOs and transmitting large quantities of vertex data to the GPU) separate from the render code. I'm fairly sure the GPU won't be very happy when you start uploading meshes in the middle of the rendering pipeline. Or maybe it won't care, I'm not sure.

Re: Renderers and Meshes

Post by Carsten » 2011-11-22, 23:32

thomasfn wrote:I know the basics of using TortoiseSVN, like updating, committing, ignoring files, resolving conflicts etc. I'll look into creating a patch next time I have something to submit.
:thx:
I think I must have forgotten the TypeT member, or deemed it unimportant at the time. It's easy enough to add if needed, but really the only render modes that should be in use are triangles and triangle strips.
Sorry, but we need almost all of them: lines (in all their variants) are used in the editor and for wire-frame renderings, and even quads and quad-strips are useful and used in stencil shadows code. Triangle-fans are used as "better" polygons (e.g. when there is a danger that the polygon is not totally coplanar; I've seen drivers produce artifacts in such cases).
Yes, I did think about changing VertexT to some kind of better method, maybe even using templates (so that the user code can decide what type of vertex to use). Having separate arrays for each "channel" of data as you described would also increase performance slightly, as each array can be transmitted straight to the GPU without modifications - whereas currently I have to sort the vertex array into several other arrays to transmit. I was just trying to make as few changes as possible - as you said, enough to support VBOs and such without a huge overhaul. But if you want to make a huge overhaul, that's fine too!
Well, yes, but I think the result is worthwhile. :cheesy:

If I follow all this correctly, the first/next step would be to replace the old MeshT class with the new one, right? Thus, if we can be reasonably sure that the new mesh class is the right thing for the future (until the big change where the renderer becomes responsible for all rendering), I'd not mind getting things right. (I'm on a constant mission to fix my coding sins from the past.) ;-)
There still needs to be some way to let the renderer know which channels are in use and which aren't. I suppose you could just make the assumption that if a specific array is empty, that channel isn't included. So if m_Origins has 10 elements and so does m_Colors, but every other array is empty, then only generate VBOs for origins and colours. That would probably work.
Yes, this is what I had in mind. It's an implicit, but strong rule.
Do you see any problem with it?
The Init method was there because the VBOs need to be set up after the attributes are known. Same for the Update method: it needs to be called after the vertices have all been added to the mesh. I guess they could be hidden and the mesh could be checked at render time to see if it needs to be updated or not, but generally it's a good idea to keep init code (setting up VBOs and transmitting large quantities of vertex data to the GPU) separate from the render code. I'm fairly sure the GPU won't be very happy when you start uploading meshes in the middle of the rendering pipeline. Or maybe it won't care, I'm not sure.
Sure, for this reason the Cafu Engine is doing some off-screen rendering for "pre-caching" purposes already: The GPU won't mind doing things in any order, but some kinds of first use of resources will happily take their time, causing noticeable lag without pre-caching.
However, this need not affect the design of the mesh class. Instead, consider how the interface should look from the user's perspective, where our main user code is the one that creates the meshes, but the renderers are kind of users as well (their main interface is probably in the derived classes).
Best regards,
Carsten

Re: Renderers and Meshes

Post by thomasfn » 2011-12-07, 00:56

Sorry for the (very) late response. Again, very busy!
Sorry, but we need almost all of them: lines (in all their variants) are used in the editor and for wire-frame renderings, and even quads and quad-strips are useful and used in stencil shadows code. Triangle-fans are used as "better" polygons (e.g. when there is a danger that the polygon is not totally coplanar; I've seen drivers produce artifacts in such cases).
Adding a TypeT member should be ok; there is an enum for it when drawing the mesh in GL 3.x, though I don't know its deprecation status - it should be ok.
Well, yes, but I think the result is worthwhile.

If I follow all this correctly, the first/next step would be to replace the old MeshT class with the new one, right? Thus, if we can be reasonably sure that the new mesh class is the right thing for the future (until the big change where the renderer becomes responsible for all rendering), I'd not mind getting things right. (I'm on a constant mission to fix my coding sins from the past.)
A new mesh class is definitely essential, defining "essential" as the best route to implementing a modern OpenGL renderer (it will also help organise and simplify the existing renderers).
Sure, for this reason the Cafu Engine is doing some off-screen rendering for "pre-caching" purposes already: The GPU won't mind doing things in any order, but some kinds of first use of resources will happily take their time, causing noticeable lag without pre-caching.
However, this need not affect the design of the mesh class. Instead, consider how the interface should look from the user's perspective, where our main user code is the one that creates the meshes, but the renderers are kind of users as well (their main interface is probably in the derived classes).
I am unfamiliar with how Cafu does caching; I did notice some kind of precache method in the renderer, but my attempts to debug it revealed it never got called. Uploading the VBOs of a single mesh isn't too expensive, so if it all happens during the render stage of the first frame, I guess it'll be fine. I recently made an application using OpenGL which transmits up to 8 fairly large meshes, consisting of about 4 VBOs each, per frame to the GPU; it caused no noticeable framerate drop while transmitting.


You probably understand the requirements well enough now to do this yourself, but should I mock up another UnivMeshT class?

Re: Renderers and Meshes

Post by Carsten » 2011-12-12, 22:07

Hi Thomas,

argh, sorry for the late reply, with Christmas and the New Year approaching, work starts piling up here as well! :builder:
(it will also help organise and simplify existing renderers too)
Do you have anything specific in mind?
You probably understand the requirements now that you can do this yourself, but should I mockup another UnivMeshT class?
Well, I can of course come up with a UnivMeshT class myself (I'll try to asap), but it would not be very efficient if I then changed all the existing Cafu code in one big step while you waited for me to finish the work.

Instead, I'll probably migrate one well-defined portion of the code first, for example the model rendering and the Model Editor. Then you could continue with the development of the OpenGL 3 renderer while I continue to migrate the rest of the code; such that overall, we can work in parallel and independently of each other.

Does this sound ok?
Best regards,
Carsten

Re: Renderers and Meshes

Post by thomasfn » 2011-12-12, 23:55

Sounds like a good idea to me. I've got plenty of things to be getting on with that aren't to do with Cafu, though, so take as much time as you need. It's almost Christmas, after all!