
Re: [E-devel] Edje grad objs

> > 
> > 	First, I would put the 'angle' and 'spread' properties
> > under the general 'fill' tag. This is because that's really
> > where they belong logically (describes various aspects of the
> > way the filling of the object region is done), and also because
> > image objects are going to get these very same properties as
> > well (ie. image objects will get an angle and spread mode).
> Easy enough to do.

> How about 
> spectrums {
>   spectrum {
>     name: "white_to_black";
>     color: 255 255 255 255 0;
>     color: 0 0 0 255 1;
>   }
> }

> (later we can have spectrum: "foo.ext"; for loading 'external'
> spectrum definitions also)
	That would be good. :)

> I'm not sure I quite understand the definition of 'spread'.
> If set to 1, you get a 'circular' domain (0..1,0..1,0..1...)
> and if set to 0 you get a mirrored domain (0..1..0..1..0..1...).
> So, why is this called 'spread'? For images I assume this param
> would reflect the image over each tile boundary?
	Well, I adopted the name 'spread' from a very early version
of Cairo.. that's what I recall it using at the time, and it seemed
reasonable to keep the same name.
	It really could be named 'tile', or whatever.. I think that
Cairo is calling it something else right now but I don't recall
offhand what that is.. maybe 'extend'?

	Spread seemed reasonable to me then, as reasonable as tile
or some others.. so that's what I kept. For radial or rectangular
gradient geometries, it gives a reasonable name as then it describes
how the gradient 'spreads out' from the fill origin.. and for linear
geometries it's fairly reasonable as well.

	For images, the spread modes of reflect and repeat will make
the image reflect or repeat in both x and y directions, ie. repeat
is what is now done by default by images, and reflect will do:

	For a 2x2 image given by {a b c d}, you'll get, for example,
for a fill size also of 2x2 over an 8x4 region with fill origin
at its top left:

  a b | b a | a b | b a
  c d | d c | c d | d c
  ---   ---   ---   ---
  c d | d c | c d | d c
  a b | b a | a b | b a

	Restrict will keep the image from tiling at all -- just
one copy is rendered.
	Pad will do the same actually..
	The others, restrict_reflect, restrict_repeat, only have
meaning for gradients which have had their spectrum 'offset' by some
amount.. For images these also will be the same as restrict.

	Feel free to replace 'spread' with whatever seems best
to you.. tile, extend, whatnot...

	Let me also mention that all of the geometric attributes
that are used to determine how to render a given 2-dim region
(fill, angle, spread, ...) can be abstractly thought of as defining
a mapping g : CoordSpace --> [0,1].
	It's the functional composition of this 'geometry' map
with the spectrum map that gives us a mapping
	CoordSpace --> ColorSpace,
allowing us to determine the color of a given (x,y) point.

> Also, fill currently (at least with linear grads) is applied
> before rotation. So, if you want a horizontal grad that fills
> the width of your part, the fill would need to be (fillw, fillh)
> = (h, w). A bit odd from a user's perspective. How does/should
> this work for arbitrary angles?
	It always takes the fill region as being specified in
un-transformed coords, the geometry is laid out according to the
type and spread mode, and then the rotation is applied around the
fill origin.
	This will also be the same for images.. first the image
will be scaled to the fill size, laid out according to the spread
mode, and then the result rotated around the specified fill origin
by whatever angle.

	Linear gradients in evas (at angle = 0) have the spectrum
increasing downward, ie. along increasing y.
	This is what evas' documentation and behavior was before,
so that's what I kept.

	We could break this semantics and make linear grads at
angle = 0 have increasing spectrum towards the right, ie. along
increasing x. That would be better if most uses of linear grads
are horizontal rather than vertical.

> > 
> > 	Given all this, the grad obj description would then follow
> > the same pattern as img objs, ie. the gradient tag would then give
> > reference to a spectrum source
> > 
> > gradient {
> >    normal: "spectrum_source";
> > }
> sounds good. what other fields would go inside the gradient block?
> ('type' (radial / linear / sinusoidal / etc ?)

	Logically speaking, the 'type' would really belong as part
of the fill, since it describes the kind of geometry used to map
the region.. and one could actually do this with images as well,
though I doubt that it's worth doing.. Feel free to decide what
would be simpler -- placing it in the fill block or in the gradient
block, as you see best.

	I would make the geometric 'type' its own block tag though
(you could also call it 'geometry' instead of 'type' if you like),
of the form say,

type {
   name: "type_name";
   params: "whatever";
}
	This would be closest to what the grad api has.

	But for now it may be easiest to simply omit params and
only have the type name specified: "linear", "radial", "angular"
(also called 'conical' by some), "rectangular", and "sinusoidal"
are all there are for now.
	Just this morning I thought I might add a new type, called
"masked", which would use an alpha mask to create the gradient's
geometric pattern, as raster mentioned.. This type would have
one parameter -- with value the filename that would be the source
of the mask.. It uses the mask alpha values to index the spectrum,
thus limiting the total number of colors obtainable to the depth
of the mask (256 values for 8-bit masks).. This type would then
definitely need a param specified, as having no mask set would
likely do nothing... We'll see :)
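	If that pans out, a masked type block might look something
like this (purely hypothetical -- the type name and the mask
filename param are just the proposal above, nothing implemented):

```
type {
   name: "masked";
   params: "some_mask.png";
}
```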

> > 	Now we come to what you are considering: How to interpolate
> > between two grad descriptions.
> > 	Well, I would start about not thinking in terms of sets of
> > color-stops and instead think in terms of the spectrums..
> > 
> > 	If you have two spectrums s0,s1:[0,1]-->ColorSpace,
> > it's easy to see what interpolating between s0 and s1 means.
> > For t in [0,1], we have a new spectrum s[t] defined by:
> > 	s[t] = (1-t)*s0 + t*s1;
> > ie. this means that for any f in [0,1], the value of s[t], at f,
> > is given by
> > 	s[t](f) = (1-t)*s0(f) + t*s1(f);
> yes, this is what i was planning. for gradients 'inlined' and
> described as a set of color stops, i can think of how to do it:
> for each of the two gradients, g0 and g1, find the length (sum of
> 'distances' of all stops), and then convert stops to a coord between
> 0 and 1.
> e.g. the following grad:
> d color  (d is distance from previous stop)
> -------
> 0 blue
> 1 red
> 2 green
> 1 white
> would have a length of 4 and the following coordinate description
> x color
> ------
> 0   blue
> .25 red
> .75 green
> 1   white
> we do this for both grads, then find the union of the set of x
> coords, interpolate the missing color values for each grad (e.g.
> find g0[x] for x's that g1 defines but g0 doesn't).
> now we have two gradients with the same # of stops at the same
> distances, so we can just blend the values of g0 and g1 at each
> stop (by how far along the transition we are) and we have our
> 'current' gradient.
> for spectra given as pixel arrays, we'd just need to smoothly scale
> the smaller to the size of the larger and blend them as above.
> (in essence though, such spectra are defined as equidistant stops
> of the pixel colors, right? so, we could just use the above
> algorithm...)
	Well, that's exactly right.. However, I'm not sure edje
should be the place to do it... But, see below.

> > 
> > 	It's then a matter of converting this "abstract" version
> > to what is needed in practice..
> > 
> > 	However, doing this would require support for it in evas..
> > Otherwise it would be somewhat difficult (if the spectra are
> > inlined but have differing num of stops or distances as has been
> > mentioned), or very difficult (if the spectra are defined via png
> > or ggr files).
> > 	I hadn't really thought about doing this though...
> > But it's certainly something worth considering, and possibly
> > for image objs as well.
> for images, you can accomplish this by fading one out as the other
> fades in.

	That could have some limitations for rendering ops that aren't
'additive', but that's ok.
	Well, that would seem to be your solution right there -- 
that's how you should do it in edje for grads as well.  :)