
Re: [E-devel] Edje grad objs

> > I would make the geometric 'type' its own block tag though
> > (you could also call it 'geometry' instead of 'type' if you like),
> > of the form say,
> > 
> > type {
> >    name: "type_name";
> >    params: "whatever";
> > }
> > 
> to keep things from getting too deep, how about
>  part {
>    type: GRADIENT;
>    ...
>    description {
>      ...
>      gradient {
>        type: "type_name";
>        params: "param_name";
>        spectrum: "spectrum name";
>      }
>      fill {
>        angle: 90;
>        spread: 1; // will this take 2 params for images (x and y)?
>      }
>    }
>  }

	Looks fine :) And no, there would only be one spread value
for images even though ideally there should be two, as you mention.
It's just not worth it to add support for separate x and y tile modes.

	Just for completeness:

	There is one other attribute that evas grads support,
namely an 'offset' for the spectrum. This is an arbitrary float
value which shifts the index into the spectrum prior to applying
the spread restriction.. Varying this in effect makes it look as if
the spectrum itself were moving (all else staying the same).

	I should also list all the params that are supported by
each type, and what they mean....  maybe another time.. :)
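	A minimal sketch of that offset behavior, in Python just for
illustration (the function name and the two spread modes shown are
assumptions, not the actual evas API):

```python
def spectrum_index(pos, offset, spread="repeat"):
    """Shift the lookup position by 'offset', then apply the spread.

    pos and offset are floats; the spectrum is indexed on [0, 1].
    Only two hypothetical spread modes are sketched here.
    """
    p = pos + offset          # offset applied before the spread restriction
    if spread == "repeat":
        return p % 1.0        # wrap around: animating offset "scrolls" the spectrum
    # "restrict": clamp to the spectrum's ends
    return min(max(p, 0.0), 1.0)
```

Animating 'offset' over time then just slides the whole spectrum
along the gradient, which matches the "spectrum were moving" effect
described above.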

> also maybe the spectra definitions should be
> spectra {
>   spectrum {
>     name: "foo";
>     color: ...
>   }
>   spectrum: "file.ext";
> }
> instead of gradients { gradient { name: ... } }
> (to be a little more rigorous with our language)
	I thought that's what we had already.. except that yes,
'spectra' is better than 'spectrums' :)

> > > for images, you can accomplish this by fading one out as the
> > > other fades in.
> > > 
> > 
> > Well, that would seem to be your solution right there -- 
> > that's how you should do it in edje for grads as well.   :) 
> yeah. i actually started to mention that in the last email,
> and then decided against it for some reason. doing two grad parts
> that fade in / out *should* look the same as manually recalculating
> stops, and manually blending.
> i guess it comes down to how we want the edc 'API' to be.
> it would be _nice_ to support a single gradient part that has two
> states with different spectra that transition. but this has the
> limitations discussed (mostly that the gradient type must be the
> same). so, definitely less work to just do
> p3.spectrum = (pos > 0.5) ? p2.spectrum : p1.spectrum;
> (e.g. hard switch of spectrum when the transition hits its
> 'half-way' point)
> and require two parts for blending spectra. But, not as nice for
> edje users.
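	For reference, the manual blending mentioned above could be
sketched roughly like this (names and the per-channel mixing model are
assumptions for illustration, not edje internals):

```python
def blend_spectra(spec_a, spec_b, t, pos):
    """Cross-fade between two spectra at transition factor t in [0, 1].

    spec_a / spec_b map a position in [0, 1] to an (r, g, b, a) tuple;
    sampling both and mixing per channel is effectively what fading
    two gradient parts in/out computes.
    """
    ca = spec_a(pos)
    cb = spec_b(pos)
    return tuple(round((1 - t) * x + t * y) for x, y in zip(ca, cb))
```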
	Ummmm... You could fade in/out between two different grad
geometry types.. it's just that it won't really look like a smooth
*deformation* of one geometry to another.
	Similarly for states with different spread modes.. you could
do the fading in/out between the two, it just won't look like you're
smoothly deforming repeat to reflect, say, because there is no such
smooth deformation between those two.

	I'm not sure that I'm following you here...??

> As for images, we also have 'tween' transitions (frame based
> animation), so fading between states isn't supported.
	You could have frame-by-frame 'tween' transitions within a
state, as with images.. just tween thru spectra.

> For now, i'll leave it simple for gradients. If we later want to
> support blending between gradient states we can.
> One last thing. For .png (or whatever image format) spectra, a
> black to white spectrum would just be: [black, white] (2 pixels), right?
> So, wouldn't they just be the subset of 'stop' based spectra that
> have equidistant stops? (which all spectra can be represented as).
> In other words, they'd all get converted into the same format
> internally, right?
	Internally all will actually get 'converted' to a span of
pixels! It's getting there from whatever the initial description is --
that's the hard part.. When we load from a png image say, we could
consider it as a set of equidistant stops and do whatever we would
do with such in general.. or we can just do image-span scaling,
which is pretty much linear interpolation.. So that's what we do,
for speed and for simplicity.
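	The image-span scaling described above is essentially linear
interpolation; a rough sketch (hypothetical names, equidistant stops
assumed, length >= 2):

```python
def stops_to_span(stops, length):
    """Expand a list of equidistant (r, g, b, a) stops into a pixel span.

    Each output pixel linearly interpolates between the two nearest
    stops -- much the same result as scaling a wx1 image up to 'length'.
    """
    span = []
    for i in range(length):
        t = i * (len(stops) - 1) / (length - 1)   # position in stop space
        lo = int(t)
        hi = min(lo + 1, len(stops) - 1)
        f = t - lo
        span.append(tuple(round((1 - f) * a + f * b)
                          for a, b in zip(stops[lo], stops[hi])))
    return span
```

So the 2-pixel [black, white] example from the quoted text expands to
a smooth black-to-white ramp of whatever span length is needed.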
	The main reason for allowing image files as sources (and it
really should be a wx1 image) is just that they have so much support,
there are so many tools for making images, etc.. You can get really
interesting spectra from just taking a 'slice' from most any image
you have around!