
Re: [E-devel] Putting GL-textures into Etk, Evas, bla bla... How to choke a horse with words.



>
> 	Basically, what one wants here is a means to do one's own
> 'rendering' in a way that is 'fitted' to some given evas engine.
>
> 	The current argb32 data set/get mechanism for image objs
> allows one to do this in a general way that works for all engines,
> you can use your own routines, or a lib like cairo, or whatever..
> but it requires that you render to an argb32 memory buffer.
>
> 	This is not very efficient compared to say using xrender
> to draw to a pict, or gl to draw to a gl texture or pixbuffer, etc.
>
> 	One way to achieve this is to have engine specific "image"
> objects. These would be declared in a header file much as is done
> for the engine "info"... possibly in these very headers themselves.
>
> 	These engine specific image objs would then allow one to
> set/get engine specific 'buffers' rather than argb32 mem buffers,
> eg. xrender picts, gl textures or pixbuffers, etc. It would probably
> not be that difficult to implement either.
>
> 	Any thoughts.... ?

:-)

I like it too (as an alternative to Simon's way). Originally I suggested
that we exploit the fact that the x11-gl implementation's Image object
is already a masked GL texture. I withdrew my own suggestion
because it was really GL-backend specific, and a poor way of generalizing
something that is inherently specific.
With your idea we at least gain a certain level of "ok, it is specific,
but not just GL specific. Xrender wins too", which is at least a
consolation.

Let's face it: you can't generalize this thing in any satisfactory way
that is also efficient.

Time for the people in charge to tell us what they think.

For those who need a quick recap of our possibilities so far:
N.B.: I'm inventing function names as I go; they shouldn't be taken too
seriously.
-----------------------------------------

Decide whether to paint directly on screen or on something off-screen

1) On-screen.

    1.1) The user is handed control to paint when the time is due
         (the solution that Simon has helped with). Can also be done
         by adding another callback to evas:
         EVAS_CALLBACK_X11GL_USERPAINT

         PRO: Fastest way possible.
              No hassle dealing with GLX contexts.
         CON: The patch on Evas makes it expose its internal repaint
                scheme, so the user can issue GL commands at the right
                time (after lower objects are painted, before higher
                ones are).
              Engine specific.

    1.2) The user gives Evas a "macro-object" based on GL. This
         solution is virtually impossible. It would require E to
         parse and understand GL-like commands.

2) Off-screen.

   2.1) Using an image to transfer raw pixel data

        PRO: Seen from E, the pixels are treated as pure argb32.
             Not engine specific; the user can choose any backend he wants.
        CON: The speed cost remains to be seen.

   2.2) Using an engine-specific Image that is handled completely by
        Evas' engine, as Jose suggests.
        Would probably entail a couple of new functions:
        evas_x11gl_image_new
        evas_x11gl_image_begin_paint  // a hidden glXMakeCurrent
        evas_x11gl_image_end_paint    // make backbuffer current again

        PRO: Evas still has full control over when things should be
               painted, cached, double-buffered, etc.
             Probably still very fast: RTT (Render To Texture) is
               very popular and well supported on the big graphics cards.

        CON: One extra copy of the graphics has to be made.

   2.3) The user creates his own GLX context and buffers, but makes sure
        Evas can read from them as well. Not really relevant for now,
        but may be worth exploring if people are unhappy with the
        other solutions.
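
        To make 2.2 more concrete, here is a rough pseudocode sketch of
        how it might look from the application side. All the
        evas_x11gl_* names are the invented ones from above, not a real
        API; only evas_object_image_add exists today:

        ```
        Evas_Object *img = evas_object_image_add(evas);
        evas_x11gl_image_new(img, w, h);      /* engine allocates a GL
                                                 texture or pbuffer    */

        evas_x11gl_image_begin_paint(img);    /* hidden glXMakeCurrent
                                                 onto the image        */
        /* ... user issues ordinary GL commands here ... */
        evas_x11gl_image_end_paint(img);      /* make Evas' backbuffer
                                                 current again, mark
                                                 the image dirty       */
        ```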

---------------------

Tomorrow I will try the super-clean approach: render each frame to a
PBuffer, read the pixels back from video to system memory, and then feed
them into Evas again. This is the worst-case scenario if all else proves
unacceptable, so it should be interesting to know the achievable
framerate.

[Small question to the developers: why does the Evas backend create its
own GLX context itself? In all the other examples it is up to the user to
create the necessary system resources. Even in x11-gl, it is the user who
creates the X window.]


Regards
Rene