If clutter_actor_allocate finds it necessary to update an actor's
allocation then it now also queues a redraw of that actor. Currently we
queue redraws for actors very early on, when queuing a relayout, instead
of waiting for the final outcome of the relayout to determine whether a
redraw is really required. With this in place we can move away from that
preemptive queuing of redraws.
clutter_actor_queue_relayout currently queues a relayout and a redraw,
but the plan is to change it to only queue a relayout and honour the
documentation by assuming that the process of relayouting will
result in redraws being queued for any actor whose allocation changes.
This commit doesn't make that change; it just adds an internal
_clutter_actor_queue_only_relayout function, which
clutter_actor_queue_relayout now uses in addition to calling
clutter_actor_queue_redraw.
There is an internal _clutter_actor_queue_redraw_with_clip API that gets
used for texture-from-pixmap to minimize what we redraw in response to
Damage events. It was previously working in terms of a ClutterActorBox
but it has now been changed so an actor can queue a redraw of a paint
volume instead.
The plan is that clutter_actor_queue_redraw will start to transparently
use _clutter_actor_queue_redraw_with_clip when it can determine a paint
volume for the actor.
This adds a debug option to visualize the paint volumes of all actors.
When CLUTTER_PAINT=paint-volumes is exported in the environment before
running a Clutter application then all actors will have their bounding
volume drawn in green with a label corresponding to the actor's type.
This is a fairly extensive second pass at exposing paint volumes for
actors.
The API has changed to allow clutter_actor_get_paint_volume to fail
since there are times - such as when an actor isn't a descendant of the
stage - when the volume can't be determined. Another example is when
something has connected to the "paint" signal of the actor and we simply
have no way of knowing what might be drawn in that handler.
The API has also been changed to return a const ClutterPaintVolume pointer
(transfer none) so we can avoid having to dynamically allocate the
volumes in the most common/performance critical code paths. Profiling was
showing the slice allocation of volumes taking about 1% of an app's time,
for some fairly basic tests. Most volumes can now simply be allocated on
the stack; for clutter_actor_get_paint_volume we return a pointer to
&priv->paint_volume and if we need a more dynamic allocation there is
now a _clutter_stage_paint_volume_stack_allocate() mechanism which lets
us allocate data which expires at the start of the next frame.
The API has been extended to make it easier to implement
get_paint_volume for containers by using
clutter_actor_get_transformed_paint_volume and
clutter_paint_volume_union. The first allows you to query the paint
volume of a child but transformed into parent actor coordinates. The
second lets you combine volumes together so you can union all the
volumes for a container's children and report that as the container's
own volume.
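As a rough sketch of how a container could implement get_paint_volume()
with these helpers (MyContainer and its children list are hypothetical,
and the vfunc signature - taking a writable ClutterPaintVolume and
returning a gboolean - is assumed from the final API):

  static gboolean
  my_container_get_paint_volume (ClutterActor       *actor,
                                 ClutterPaintVolume *volume)
  {
    MyContainerPrivate *priv = MY_CONTAINER (actor)->priv;
    GList *l;

    for (l = priv->children; l != NULL; l = l->next)
      {
        ClutterActor *child = l->data;
        const ClutterPaintVolume *child_volume;

        /* the child's volume, transformed into our coordinate space */
        child_volume =
          clutter_actor_get_transformed_paint_volume (child, actor);
        if (child_volume == NULL)
          return FALSE; /* we cannot reliably report a volume */

        clutter_paint_volume_union (volume, child_volume);
      }

    return TRUE;
  }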
The representation of paint volumes has been updated to consider that
2D actors are the most common.
The effect APIs, clutter-texture and clutter-group have been updated
accordingly.
An Effect implementation might override the paint volume of the actor to
which it is applied. The get_paint_volume() virtual function should
be added to the Effect class vtable so that any effect can get the
current paint volume and update it.
The clutter_actor_get_paint_volume() function becomes context aware, and
does the right thing if called from within a ClutterEffect pre_paint()
or post_paint() implementation, by allowing all effects in the chain up
to the caller to modify the paint volume.
An actor has an implicit "paint volume", that is the volume in 3D space
occupied when painting itself.
The paint volume is defined as a cuboid with the origin placed at the
top-left corner of the actor; the size of the cuboid is given by three
vectors: width, height and depth.
ClutterActor provides API to convert the paint volume into a 2D box in
screen coordinates, to compute the on-screen area that an actor will
occupy when painted.
Actors can override the default implementation of the get_paint_volume()
virtual function to provide a different volume.
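A minimal sketch of how calling code might use the failable getter and
the 2D projection; the clutter_actor_get_paint_box() name and the gfloat
accessors are taken from the final 1.x API and are assumptions here:

  const ClutterPaintVolume *volume;
  ClutterActorBox paint_box;

  /* the getter can fail, e.g. if the actor is not a descendant of a
   * stage or a "paint" handler makes the volume unknowable */
  volume = clutter_actor_get_paint_volume (actor);
  if (volume != NULL)
    g_print ("paint volume: %.2f x %.2f x %.2f\n",
             clutter_paint_volume_get_width (volume),
             clutter_paint_volume_get_height (volume),
             clutter_paint_volume_get_depth (volume));

  /* project the volume into a 2D box in screen coordinates */
  if (clutter_actor_get_paint_box (actor, &paint_box))
    g_print ("on-screen area: %.2f x %.2f\n",
             paint_box.x2 - paint_box.x1,
             paint_box.y2 - paint_box.y1);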
This reorganizes the loop for clutter_actor_contains so that it is a
for loop rather than a while loop. Although this is mostly just
nitpicking, I think this change could make the loop slightly faster when
not optimized, because it doesn't perform the self == descendant check
twice, and it is clearer.
The documentation for clutter_actor_contains didn't specify what
happens when self==descendant. A strict reading of it might lead you
to think that it would return FALSE because in that case the
descendant isn't an immediate child or a deeper descendant. The code
actually would return TRUE. I think this is more useful so this patch
fixes the docs rather than the code.
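The reorganized loop is roughly equivalent to this sketch (using the
public parent getter rather than the private pointer the real code
walks):

  static gboolean
  actor_contains (ClutterActor *self,
                  ClutterActor *descendant)
  {
    ClutterActor *actor;

    /* returns TRUE when self == descendant, matching the documentation */
    for (actor = descendant;
         actor != NULL;
         actor = clutter_actor_get_parent (actor))
      {
        if (actor == self)
          return TRUE;
      }

    return FALSE;
  }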
This adds a check in clutter_actor_real_queue_redraw after calling
_clutter_actor_get_stage_internal to check in case the actor doesn't yet
have an associated stage so we can avoid passing a NULL stage pointer to
_clutter_stage_set_pick_buffer_valid which could cause a crash.
We have an optimization to track when there are multiple picks per
frame so we can do a full render of the pick buffer to reduce the
number of pick renders for a static scene.
There was a problem though in that we were tracking this information in
the ClutterMainContext, but conceptually this doesn't really make sense
because the pick buffer is associated with a stage framebuffer and there
can be multiple stages for one context.
This patch moves the state tracking to ClutterStage.
This reverts commit d7e86e2696.
This was a half-baked patch that was pushed a bit early: it broke
test-texture-pick-with-alpha, and the commit message refers to a change on
the wip/paint-box branch that hasn't happened yet.
We have an optimization to track when there are multiple picks per
frame so we can do a full render of the pick buffer to reduce the
number of pick renders for a static scene.
There were two problems with how we were tracking this state though.
Firstly we were tracking this information in the ClutterMainContext, but
conceptually this doesn't really make sense because the pick buffer is
associated with a stage framebuffer and there can be multiple stages for
one context. Secondly - since the change to how redraws are queued - we
weren't marking the pick buffer as invalid when queuing a redraw, we
were only marking the buffer invalid when signaling/finishing the
queue-redraw process, which is now deferred until just before a paint.
This meant using clutter_stage_get_actor_at_pos after a scenegraph
change could give a wrong result if it just read from an existing (but
technically invalid) pick buffer.
This patch moves the state tracking to ClutterStage, and ensures the
buffer is invalidated in _clutter_stage_queue_actor_redraw.
http://bugzilla.clutter-project.org/show_bug.cgi?id=2283
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Constraints should plug directly into the allocation mechanism, and
modify the allocation of the actor to which they are applied. This is
similar to the mechanism used by the Effect class to modify the paint
sequence of an actor.
This adds a verbose warning that will be displayed if
clutter_actor_allocate is passed an actor that isn't a descendant of a
ClutterStage. Layouting should always bubble up from a stage, so this
condition is likely to indicate a buggy container that is allocating a
child it has already unparented.
When building actor relative transforms, instead of using the matrix
stack to combine transformations and making assumptions about what is
currently on the stack we now just explicitly initialize an identity
matrix and apply transforms to that.
This removes the full_vertex_t typedef for internal transformation code
and we just use ClutterVertex.
ClutterStage now implements apply_transform like any other actor, and
the code we had in _cogl_setup_viewport has been moved to the
stage's apply_transform instead.
ClutterStage now tracks an explicit projection matrix and viewport
geometry. The projection matrix is derived from the perspective whenever
that changes, and the viewport is updated when the stage gets a new
allocation. The SYNC_MATRICES mechanism has been removed in favour of
_clutter_stage_dirty_viewport/projection() APIs that get used when
switching between multiple stages to ensure cogl has the latest
information about the onscreen framebuffer.
This adds _clutter_actor_get_stage_internal to clutter-private.h since
we plan to use it in clutter-offscreen-effect when preparing to
redirect an actor offscreen.
Comprehensively add (out) annotations to function parameters
returning int/float/double.
Not handled here: structure out returns like ClutterColor or
ClutterPerspective or GValue that should get (out caller-allocates).
Not handled here: Cogl
http://bugzilla.clutter-project.org/show_bug.cgi?id=2302
This clarifies the documentation for clutter_actor_queue_redraw to
explain that custom actors should call this whenever some private state
changes that affects painting *or* picking.
The idea is that if we see multiple picks per frame then that implies
the visible scene has become static. In this case we can promote the
next pick render to be unclipped so we have valid pick values for the
entire stage. Now we can continue to read from this cached buffer until
the stage contents do visibly change.
Thanks to Luca Bruno on #clutter for this idea!
* wip/table-layout:
Add ClutterTableLayout, a layout showing children in rows and columns
box-layout: Use allocate_align_fill()
bin-layout: Migrate to allocate_align_fill()
actor: Add allocate_align_fill()
test-flow-layout: Use BindConstraints
Layout managers are using the same code to allocate a child while taking
into consideration:
• horizontal and vertical alignment
• horizontal and vertical fill
• the preferred minimum and natural size, depending
on the :request-mode property
• the text direction for the horizontal alignment
• an offset given by the fixed position properties
Given the amount of code involved, and the amount of details that can go
horribly wrong while copy and pasting such code in various classes - let
alone various projects - Clutter should provide an allocate() variant
that does the right thing in the right way. This way, we have a single
point of failure.
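A layout manager can then hand the child its available box and let the
helper deal with alignment, fill, request mode and text direction; a
sketch, assuming the allocate_align_fill() signature as it landed (box,
alignment factors, fill flags, allocation flags) and hypothetical local
variables:

  ClutterActorBox child_box = { x, y, x + avail_width, y + avail_height };

  clutter_actor_allocate_align_fill (child, &child_box,
                                     0.5,    /* x-align: centre */
                                     0.5,    /* y-align: centre */
                                     FALSE,  /* x-fill */
                                     FALSE,  /* y-fill */
                                     flags);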
This adds a wrapper macro to clutter-private that will use
g_object_notify_by_pspec if it's compiled against a version of GLib
that is sufficiently new. Otherwise it will notify by the property
name as before by extracting the name from the pspec. The objects can
then store a static array of GParamSpecs and notify using those as
suggested in the documentation for g_object_notify_by_pspec.
Note that the name of the variable used for storing the array of
GParamSpecs is obj_props instead of properties as used in the
documentation because some places in Clutter uses 'properties' as the
name of a local variable.
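The pattern looks roughly like the sketch below; the wrapper name, the
property enum and the surrounding class are hypothetical, only
g_object_notify_by_pspec() and the GLib version check are real:

  #if GLIB_CHECK_VERSION (2, 26, 0)
  #define my_notify_by_pspec(obj, pspec) \
    g_object_notify_by_pspec ((obj), (pspec))
  #else
  #define my_notify_by_pspec(obj, pspec) \
    g_object_notify ((obj), (pspec)->name)
  #endif

  /* per-class storage, named obj_props to avoid clashing with local
   * 'properties' variables */
  static GParamSpec *obj_props[PROP_LAST] = { NULL, };

  /* in class_init: */
  obj_props[PROP_FOO] =
    g_param_spec_int ("foo", "Foo", "A hypothetical property",
                      0, G_MAXINT, 0,
                      G_PARAM_READWRITE);
  g_object_class_install_property (gobject_class, PROP_FOO,
                                   obj_props[PROP_FOO]);

  /* when the value changes: */
  my_notify_by_pspec (G_OBJECT (self), obj_props[PROP_FOO]);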
Most of the classes in Clutter have been converted using the script in
the bug report. Some classes have not been modified even though the
script picked them up as described here:
json-generator:
We probably don't want to modify the internal copy of JSON
behaviour-depth:
rectangle:
score:
stage-manager:
These aren't using the separate GParamSpec* variable style.
blur-effect:
win32/device-manager:
Don't actually define any properties even though it has the enum.
box-layout:
flow-layout:
Have some per-child properties that don't work automatically with
the script.
clutter-model:
The script gets confused with ClutterModelIter
stage:
Script gets confused because PROP_USER_RESIZE doesn't match
"user-resizable"
test-layout:
Don't really want to modify the tests
http://bugzilla.clutter-project.org/show_bug.cgi?id=2150
The Animatable interface was created specifically for the Animation
class. It turns out that it might be fairly useful to others - such as
ClutterAnimator and ClutterState.
The newly-added API in this cycle for querying and accessing custom
properties should not require that we pass a ClutterAnimation to the
implementations: the Animatable itself should be enough.
This is necessary to allow language bindings to wrap
clutter_actor_animate() correctly and do type validation and
demarshalling between native values and GValues; an Animation instance
is not available until the animate() call returns, and validation must
be performed before that happens.
There is nothing we can do about the animate_property() virtual
function - but in that case we might want to be able to access the
animation from an Animatable implementation to get the Interval for
the property, just like ClutterActor does in order to animate
ClutterActorMeta objects.
It's possible - though not recommended - that user code causes the
destruction of an actor in one of the notification handlers for
flag-based properties. We should protect the multiple notification
emission with g_object_ref/unref.
Up until now, the "behaviours" member of an actor definition was parsed
by the ClutterScript parser itself - even though it's not strictly
necessary.
In an effort to minimize the ad hoc code in the Script parser, we should
let ClutterActor handle all the special cases that involve
actor-specific members.
The scanner has some issues when parsing valid gtk-doc annotations; we
should make its life (and, in return, ours) easier.
We still get warnings for code declared in <programlisting> sections,
unfortunately.
ClutterActor should allow attaching actions, constraints and effects
just like it allows behaviours, e.g.:
  {
    ...
    "constraints" : [
      {
        "type" : "ClutterAlignConstraint",
        "source" : "stage",
        "align-axis" : "x-axis",
        "factor" : 0.5
      },
      {
        "type" : "ClutterAlignConstraint",
        "source" : "stage",
        "align-axis" : "y-axis",
        "factor" : 0.5
      }
    ],
    ...
  }
or:
  {
    ...
    "actions" : [
      {
        "type" : "ClutterDragAction",
        "signals" : [
          { "name" : "drag-end", "handler" : "on_drag_end" }
        ]
      }
    ],
    ...
  }
In order to do so, we use the Scriptable interface implementation and
add three new custom properties accepting an array; then we parse each
member of the array as a new object.
The marshallers we use for the signals are declared in a private header,
and it stands to reason that they should also be hidden in the shared
object by using the common '_' prefix. We are also using some direct
g_cclosure_marshal_* symbols from GLib, instead of consistently using the
clutter_marshal_* symbols.
It is often useful to determine if one actor is an ancestor of
another. Add a method to do that.
http://bugzilla.openedhand.com/show_bug.cgi?id=2162
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Since ClutterEffect is an ActorMeta it should be possible to animate the
properties of named effects using the @effects syntax, just like it
happens for actions and constraints.
ClutterEffect is an abstract class that should be used to apply effects
on generic actors.
The ClutterEffect class just defines what an effect should implement; it
could be defined as an interface, but we might want to add some default
behavior dependent on the internal state at a later point.
The effect API applies to any actor, so we need to provide a way to
assign an effect to an actor, and let ClutterActor call the Effect
methods during the paint sequence.
Once an effect is attached to an actor we will perform the paint in this
order:
• Effect::pre_paint()
• Actor::paint signal emission
• Effect::post_paint()
Since an effect might collide with the Shader class, we only allow either
a shader or an effect for the time being.
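A minimal sketch of an Effect subclass; the type name is hypothetical,
and the pre_paint/post_paint signatures (pre_paint returning whether the
actor's paint should go ahead) are assumed from the released 1.x API:

  #include <clutter/clutter.h>

  typedef struct { ClutterEffect parent_instance; }   MyFrameEffect;
  typedef struct { ClutterEffectClass parent_class; } MyFrameEffectClass;

  G_DEFINE_TYPE (MyFrameEffect, my_frame_effect, CLUTTER_TYPE_EFFECT)

  static gboolean
  my_frame_effect_pre_paint (ClutterEffect *effect)
  {
    /* set up state before Actor::paint is emitted */
    return TRUE;
  }

  static void
  my_frame_effect_post_paint (ClutterEffect *effect)
  {
    ClutterActor *actor =
      clutter_actor_meta_get_actor (CLUTTER_ACTOR_META (effect));
    gfloat width, height;

    clutter_actor_get_size (actor, &width, &height);

    /* draw a thin red line over the bottom edge of whatever the actor
     * painted */
    cogl_set_source_color4ub (0xff, 0x00, 0x00, 0xff);
    cogl_rectangle (0, height - 2, width, height);
  }

  static void
  my_frame_effect_class_init (MyFrameEffectClass *klass)
  {
    ClutterEffectClass *effect_class = CLUTTER_EFFECT_CLASS (klass);

    effect_class->pre_paint = my_frame_effect_pre_paint;
    effect_class->post_paint = my_frame_effect_post_paint;
  }

  static void
  my_frame_effect_init (MyFrameEffect *self)
  {
  }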
When getting the relative modelview matrix we need to reset it to the
stage's initial state or, at least, initialize it to the identity
matrix, instead of assuming we have an empty stack.
While this is totally fine (0 in a pointer context will be converted
into the right internal NULL representation, which could be a value with
some bits set to 1), I believe it's clearer to use NULL in a pointer
context.
It seems that, in most cases, it's more an oversight than a deliberate
choice to use FALSE/0 as NULL, e.g. copying a _COGL_GET_CONTEXT (ctx, 0)
or a g_return_val_if_fail (cond, 0) from a function returning a
gboolean.
New virtual functions cannot go wherever they want, if we need to
preserve the ABI.
Also, the coding style should match the rest of ClutterActor and
Clutter's own coding style.
Added the implementation of clutter_actor_get_accessible, a virtual
ClutterActor function used to obtain the accessible object of
any ClutterActor.
As it is defined virtual, it is possible to override it, so any custom
Clutter actor can implement its own accessibility object, without relying
entirely on an accessibility implementation module.
See GtkIconView as an example.
http://bugzilla.openedhand.com/show_bug.cgi?id=2070
The ClutterActor API should have modifier methods for adding, removing
and retrieving Actions and Constraints using the ClutterActorMeta:name
property - mostly, for convenience.
By implementing the newly added support for custom animatable
properties, we can allow addressing action and constraint properties
from ClutterAnimation and clutter_actor_animate().
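A sketch of how the convenience API and the animatable-property syntax
fit together; the "bind-x" name is arbitrary, the source and actor
variables are assumed ClutterActors, and the method names follow the
ones added by this work:

  ClutterConstraint *constraint =
    clutter_bind_constraint_new (source, CLUTTER_BIND_X, 0.0f);

  /* attach the constraint under a well-known name */
  clutter_actor_add_constraint_with_name (actor, "bind-x", constraint);

  /* later, address one of its properties from an animation */
  clutter_actor_animate (actor, CLUTTER_EASE_OUT_CUBIC, 500,
                         "@constraints.bind-x.offset", 100.0,
                         NULL);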
The Constraint base, abstract class should be used to implement Actor
modifiers that affect the way an actor is sized or positioned inside a
fixed layout manager.
ClutterAction is an abstract class that should be used as the ancestor
for objects that change how an actor behaves when dealing with events
coming from user input.
Whenever we are warning inside ClutterActor we prefer the actor's name
to its type, if the name is set. The current code is made less readable
by the use of the ternary operator:
priv->name != NULL ? priv->name : G_OBJECT_TYPE_NAME (self)
This looks like a job for a simple convenience function.
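Something along these lines, inside clutter-actor.c (the helper name
here is hypothetical):

  static const gchar *
  get_actor_debug_name (ClutterActor *actor)
  {
    ClutterActorPrivate *priv = actor->priv;

    return priv->name != NULL ? priv->name : G_OBJECT_TYPE_NAME (actor);
  }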
For internal use we should have a get_stage_internal() variant that
avoids type checks and calls to public functions. The implementation
is trivial enough, and it will avoid (scene graph depth + 1) type
checks and (scene graph depth) function calls.
In 125bded81 some comments were introduced to ClutterTexture
complaining that it can have a Cogl texture before being
realized. Clutter always assumes that the single GL context is current
so there is no need to wait until the actor is realized before setting
a texture. This patch replaces the comments with clarification that
this should not be a problem.
The patch also changes the documentation about the realized state in
various places to clarify that it is acceptable to create any Cogl
resources before the actor is realized.
http://bugzilla.openedhand.com/show_bug.cgi?id=2075
The Actor's long description is a bit cluttered; it contains a section
on the actor's box semantics, on the transformation order and on the
event handling.
We should use <refsect2> tags to divide the Actor's description into
logically separated sections.
We should also add a section about the custom Scriptable properties that
ClutterActor defines, and the special handling of unit-based properties.
When emitting signals, one can mark arguments as being "static", i.e. an
indication that the argument will not change during the signal emission.
This allows the signal marshalling code to create static GValues, in
this case not to copy the Color.
http://bugzilla.openedhand.com/show_bug.cgi?id=2073
We decide whether the paint() should be a real paint or a paint in pick
mode depending on the global pick_mode value. Using G_UNLIKELY() on an
operation that most likely is going to be executed once every frame is
going to blow a lot of cache lines and frak with the CPU branch
prediction. Not good.
Add clutter_actor_has_allocation(), a method meant to be used when
deciding whether to call clutter_actor_get_allocation_box() or any
of its wrappers.
The get_allocation_box() method will, in case the allocation is invalid,
perform a costly re-allocation cycle to ensure that the returned box
is valid. The has_allocation() method is meant to be used if we have an
actor calling get_allocation_box() from outside the place where the
allocation is always guaranteed to be valid.
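A sketch of the intended usage pattern, checking before touching the
allocation from code that runs outside the layout sequence:

  if (clutter_actor_has_allocation (actor))
    {
      ClutterActorBox box;

      /* safe: this will not trigger a re-allocation cycle */
      clutter_actor_get_allocation_box (actor, &box);
      g_print ("allocated size: %.2f x %.2f\n",
               box.x2 - box.x1,
               box.y2 - box.y1);
    }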
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Somebody somewhere decided it would be ok to define 'y1' as a global
function in math.h, thus condemning us to repeatedly make commits to
fix these obnoxious compiler warnings about aliasing.
When printing out the property value during a ClutterScript debug run we
generate the value's content using g_strdup_value_contents() - though we
do it unconditionally. The contents might not be printed (they most
likely won't, actually) and will be freed afterwards. This is
unnecessary: we can allocate the contents string after checking if we're
going to print out the debug note, thus avoiding the whole
allocation/free cycle unless strictly needed.
* Add new clutter_geometry_union(), because writing union/intersection
code is harder than it looks. Fixes two problems with the inline code in
clutter_stage_glx_add_redraw_clip().
1) The ->x and ->y fields were reassigned before being used to
compute the new width and height.
2) since ClutterGeometry has unsigned width, x + width is unsigned,
and comparison goes wrong if either rectangle has a negative
x + width. (We fixed width for GdkRectangle to be signed for GTK+ 2.0;
this is a potent source of bugs.)
* Use in clutter_stage_glx_add_redraw_clip()
* Account for the case where the incoming rectangle is empty, and don't
end up with the stage being entirely redrawn.
* Account for the case where the stage already has a degenerate
width and don't end up with redrawing only the new rectangle and not
the rest of the stage.
The better fix here for the second two problems is to stop using a 0
width to mean the entire stage, but this should work for now.
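A sketch of the union helper showing the signed arithmetic that avoids
the unsigned-width pitfall; the real clutter_geometry_union() may differ
in details:

  static void
  geometry_union (const ClutterGeometry *a,
                  const ClutterGeometry *b,
                  ClutterGeometry       *result)
  {
    /* promote width/height to signed before adding, so a rectangle with
     * a negative x + width cannot wrap around during the comparison */
    gint x1 = MIN (a->x, b->x);
    gint y1 = MIN (a->y, b->y);
    gint x2 = MAX (a->x + (gint) a->width,  b->x + (gint) b->width);
    gint y2 = MAX (a->y + (gint) a->height, b->y + (gint) b->height);

    result->x = x1;
    result->y = y1;
    result->width = x2 - x1;
    result->height = y2 - y1;
  }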
http://bugzilla.openedhand.com/show_bug.cgi?id=2040
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The documentation and name of the get_transformation_matrix function
implies that 'matrix' is purely an out parameter. However it wasn't
initializing the matrix before calling the 'apply_transform' virtual
so it was basically just a wrapper for the virtual. The virtual
assumes the matrix parameter is in/out and applies the actor's
transformation on top of any existing transformations. This causes
unexpected semantics that are inconsistent with the documentation.
A new (internal only currently) API, _clutter_actor_queue_clipped_redraw
can be used to queue a redraw along with a clip rectangle in actor
coordinates. This clip rectangle propagates up to the stage and clutter
backend which may optionally use the information to optimize stage
redraws. The GLX backend in particular may scissor the next redraw to
the clip rectangle and use GLX_MESA_copy_sub_buffer to present the stage
subregion.
The intention is that any actors that can naturally determine the bounds
of updates should queue clipped redraws to reduce the cost of updating
small regions of the screen.
Notes:
» If GLX_MESA_copy_sub_buffer isn't available then the GLX backend
ignores any clip rectangles.
» queuing multiple clipped redraws will result in the bounding box of
each clip rectangle being used.
» If a clipped redraw has a height > 300 pixels then it's promoted into
a full stage redraw, so that the GPU doesn't end up blocking too long
waiting for the vsync to reach the optimal position to avoid tearing.
» Note: no empirical data was used to come up with this threshold so
we may need to tune this.
» Currently only ClutterX11TexturePixmap makes use of this new API. This
is done via a new "queue-damage-redraw" signal that is emitted when
the pixmap is updated. The default handler queues a clipped redraw
with the assumption that the pixmap is being painted as a rectangle
covering the actor's transformed allocation. If you subclass
ClutterX11TexturePixmap and change how it's painted you now also
need to override the signal handler and queue your own redraw.
Technically this is a semantic break, but it's assumed that no one
is currently doing this.
This still leaves a few unsolved issues with regards to optimizing sub
stage redraws that need to be addressed in further work so this can only
be considered a stepping stone at this point:
» Because we have no reliable way to determine if the painting of any
given actor is being modified any optimizations implemented using
_clutter_actor_queue_redraw_with_clip must be overridable by a
subclass, and technically must be opt-in for existing classes to avoid
a change in semantics. E.g. consider that a user connects to the paint
signal for ClutterTexture and paints a circle instead of a rectangle.
In this case any original logic to queue clipped redraws would be
incorrect.
» Currently only the implementation of an actor has enough information
with which to queue clipped redraws. E.g. It is not possible for
generic code in clutter-actor.c to queue a clipped redraw when hiding
an actor because actors have no way to report a "paint box". (remember
actors can draw outside their allocation and actors with depth may
also be projected outside of their allocation)
» The current plan is to add an actor_class->get_paint_cuboid()
virtual so actors can report a bounding cube for everything they
would draw in their current state and use that to queue clipped
redraws against the stage by projecting the paint cube into stage
coordinates.
» Our heuristics for promoting clipped redraws into full redraws to
avoid blocking the GPU while we wait for the vsync need improving:
» vsync issues aren't relevant for redirected/composited applications
so they should use different heuristics. In this case we instead
need to trade off the cost of blitting when using glXCopySubBuffer
vs promoting to a full redraw and flipping instead.
* stage-min-size-rework:
docs: Update minimum size accessors
actor: Use the TOPLEVEL flag instead of a type check
[stage] Use min-width/height props for min size
Since using addresses that might change is something that the FSF
finally acknowledges as a plausible scenario (after changing address
twice), the license blurb in the source files should use the URI
for getting the license in case the library did not come with it.
Not that URIs cannot possibly change, but at least it's easier to
set up a redirection at the same place.
As a side note: this commit closes the oldest bug in Clutter's bug
report tool.
http://bugzilla.openedhand.com/show_bug.cgi?id=521
If the actor is an internal child of another actor then we should call
unparent() when destroying it, like clutter_actor_reparent() does;
otherwise we'll leak the actor, since the parent holds a reference to
it.
http://bugzilla.openedhand.com/show_bug.cgi?id=2009
Instead of shadowing these properties with different properties with the
same names on stage, actually use them. Behaviour should be identical,
except the minimum stage size can now be enforced by setting the
min-width/height properties as well as using the set_minimum_size
function.
Since the "internal" state is global, it will leak onto actors that you
didn't intend for it to, because it applies not just to the actors you
create, but also to any actors *they* create. Eg, if you have a dialog
box class, you might push/pop_internal around creating its buttons, so
that those buttons get marked as internal to the dialog box. But
ctx->internal_child will still be set during the *button*'s constructor
as well, and so, eg, the label and icon inside the button actor will
*also* be marked as internal children, even if that isn't what the
button class wanted.
The least intrusive change at this point is to make push_internal() and
pop_internal() two methods of the Actor class, and take a ClutterActor
pointer as the argument - thus moving the locality of the internal_child
counter to the Actor itself.
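A sketch of the instance-based API in a composite actor's init (MyDialog
and my_button_new() are hypothetical):

  static void
  my_dialog_init (MyDialog *self)
  {
    ClutterActor *actor = CLUTTER_ACTOR (self);

    clutter_actor_push_internal (actor);

    self->priv->ok_button = my_button_new ("OK");
    clutter_actor_set_parent (self->priv->ok_button, actor);

    clutter_actor_pop_internal (actor);

    /* actors created inside my_button_new() are unaffected: the
     * internal-child counter now lives on the dialog actor, not in the
     * global context */
  }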
http://bugzilla.openedhand.com/show_bug.cgi?id=1990
Since get_paint_opacity() recurses through the hierarchy it might lead
to a lot of type checks while we walk the parent-child chain. We can
split the recursive function from the public entry point and perform the
type check just once.
This replaces code like this:
  if (CLUTTER_ACTOR_IS_VISIBLE (self))
    clutter_actor_queue_redraw (self);
with:
  clutter_actor_queue_redraw (self);
clutter_actor_queue_redraw internally knows what can be optimized when
the actor is not visible, but it also knows that the queue_redraw signal
must always be sent in case a ClutterClone is cloning a hidden actor.
• Add the function name in the warning, since the text is the same in
both clutter_actor_raise() and clutter_actor_lower().
• If an actor has a name then prefer it to the type name.
Since we're allowing allocation cycles, saying that calling
queue_relayout() inside an allocation cycle "is not allowed" is kind of
confusing. We should say that "it is not recommended".
* device-manager: (37 commits)
x11: Re-enable XI1 extension keyboards
x11: Always handle core device events before XI events
docs: Documentation fixes for DeviceManager
device-manager: Fix the signals definition
docs: Add sections for InputDevice and DeviceManager
docs: Add clutter_input_device_get_device_name()
tests: Print out the device details on motion
Always register core devices
device: Remove unused is_default member
win32: Experimental implementation of device support
tests: Print the device name, as well as its Id
x11: Fill out the :name property of the InputDevices
device: Add the :name property to InputDevice
x11: Store core devices on the X11 Backend singleton
device: Unset the cursor actor when leaving the stage
device: Add pointer actor getter
x11: Discard the LeaveNotify for off-stage ButtonRelease
device: Do not overwrite the stage for an InputDevice
event: Off-stage button releases have a click count of 1
event: Scroll events do not have click count
...
The :opacity property is defined using a GParamSpecUchar. This usually
leads to issues with language bindings that don't have an 'unsigned
char' type and that need to explicitly handle the conversion between
G_TYPE_UCHAR and G_TYPE_INT or G_TYPE_UINT.
The property definition already specifies an interval size of [0, 255]
on the values; more importantly, GObject already implicitly transforms
between G_TYPE_UCHAR and G_TYPE_UINT (the GValue transformation
functions are registered at type system initialization time) so
switching between a GParamSpecUchar and a GParamSpecUint should not be
an ABI break.
I have tested a simple program using the opacity property before and
after the change and I cannot see any run-time warnings related to this
issue.
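The switched property definition would look roughly like this (blurbs
and flags abbreviated):

  pspec = g_param_spec_uint ("opacity",
                             "Opacity",
                             "Opacity of an actor",
                             0, 255,   /* same bounds as before */
                             255,
                             G_PARAM_READWRITE);
  g_object_class_install_property (object_class, PROP_OPACITY, pspec);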
UProf is a small library that aims to help applications/libraries provide
domain specific reports about performance. It currently provides high
precision timer primitives (rdtsc on x86) and simple counters, the ability
to link statistics between optional components at runtime and makes report
generation easy.
This adds initial accounting for:
- Total mainloop time
- Painting
- Picking
- Layouting
- Idle time
The timing done by uprof is of wall clock time. It's not based on stochastic
samples; we simply sample a counter at the start and end. When dealing with
the complexities of GPU drivers and with various kinds of IO this form of
profiling can be quite enlightening as it will be able to represent where
your application is blocking unlike tools such as sysprof.
To enable uprof accounting you must configure Clutter with --enable-profile
and have uprof-0.2 installed from git://git.moblin.org/uprof
If you want to see a report of statistics when Clutter applications exit you
should export CLUTTER_PROFILE_OUTPUT_REPORT=1 before running them.
Just a final word of caution; this stuff is new and the manual nature of
adding uprof instrumentation means it is prone to some errors when modifying
code. This just means that when you question strange results don't rule out
a mistake in the instrumentation. Obviously, though, we hope the benefits
outweigh the risks, e.g. by focusing on very key stats and by having
automatic reporting.
Currently, ClutterActor detects a relayout cycle (an actor causing a
relayout to be queued from within an allocate() function) and aborts
after printing out a warning. This might be a little bit too anal
retentive, and it currently breaks GTK+ embedding inside clutter-gtk
so we should probably relax the behaviour a bit. Now we just emit the
warning but we still go ahead with the relayout.
* animate-layout-manager:
layout-manager: Document the animation support
layout-manager: Rewind the timeline in begin_animation()
box-layout: Remove the allocations hash table
docs: Clean up the README file
layout: Let begin_animation() return the Alpha
box-layout: Add knobs for controlling animations
box-layout: Animate layout properties
layout: Add animation support to LayoutManager
Add ActorBox animation methods
ClutterActor checks, when destroying and reparenting, if the parent
actor implements the Container interface, and automatically calls the
remove() method to perform a clean removal.
Actors implementing Container, though, might have internal children;
that is, children that are not added through the Container API. It is
already possible to iterate through them using the Container API to
avoid breaking invariants - but calling clutter_actor_destroy() on
these children (even from the Container implementation, and thus outside
of Clutter's control) will either lead to leaks or to segmentation
faults.
Clutter needs a way to distinguish a clutter_actor_set_parent() done on
an internal child from one done on a "public" child; for this reason, a
push/pop pair of functions should be available to Actor implementations
to mark the section where they wish to add internal children:
➔ clutter_actor_push_internal ();
...
clutter_actor_set_parent (child1, parent);
clutter_actor_set_parent (child2, parent);
...
➔ clutter_actor_pop_internal ();
The set_parent() call will automatically set the newly added
INTERNAL_CHILD private flag on each child, and both
clutter_actor_destroy() and clutter_actor_unparent() will check for the
flag before deciding whether to call the Container's remove method.
ClutterActorBox should have an interpolate() method that allows to
compute the intermediate values between two states, given a progress
value, e.g.:
clutter_actor_box_interpolate (start, end, alpha, &result);
Another utility method, useful for layout managers, is a modifier
that clamps the members of the actor box to the nearest integer
value.
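For instance (assuming a ClutterAlpha driving the progress value):

  ClutterActorBox start  = {  0.f,  0.f, 100.f, 100.f };
  ClutterActorBox end    = { 50.f, 50.f, 250.f, 200.f };
  ClutterActorBox result;
  gdouble progress = clutter_alpha_get_alpha (alpha);

  clutter_actor_box_interpolate (&start, &end, progress, &result);

  /* snap the interpolated box to the pixel grid before allocating */
  clutter_actor_box_clamp_to_pixel (&result);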
Some actor implementation might avoid imposing any layout on their
children. The Actor base class usually assumes some sort of layout
management is in place, so it will queue relayouts when, for instance,
an actor is shown or is hidden. If the parent of the actor does not
impose any layout, though, showing or hiding one of its children will
not affect the layout of the others.
An example of this kind of container is ClutterGroup.
By adding a new Actor flag, CLUTTER_ACTOR_NO_LAYOUT, and by making
the Group actor set it on itself, the Actor base class can now decide
whether or not to queue a relayout. The flag is not meant to be used
by application code, and should only be set when implementing a new
container.
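A container implementation would set the flag on itself, for instance in
its instance init (MyFixedGroup is hypothetical):

  static void
  my_fixed_group_init (MyFixedGroup *self)
  {
    /* we impose no layout on our children, so showing or hiding one of
     * them must not queue a relayout */
    CLUTTER_ACTOR_SET_FLAGS (self, CLUTTER_ACTOR_NO_LAYOUT);
  }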
http://bugzilla.openedhand.com/show_bug.cgi?id=1838
clutter_actor_get_preferred_width/height currently caches only one size
request, for a given height / width.
It's common for a layout manager to call get_preferred_width with 2
different heights during the same allocation cycle. Typically once in
the size request, once in the allocation. If
clutter_actor_get_preferred_width is called alternately with 2 different
for_height values, the cache is totally inefficient, and we end up always
querying the actor size even when the actor does not need a
re-allocation.
http://bugzilla.openedhand.com/show_bug.cgi?id=1876
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Every actor should have a property for retrieving (and setting) the
text direction.
The text direction is used to provide a consistent behaviour in both
left-to-right and right-to-left languages. For instance, ClutterText
should perform key navigation following text direction. Layout
managers should also take into account text direction to derive the
right packing order for their children.
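A sketch of how a layout manager might consult the property when packing
(the surrounding variables are hypothetical):

  ClutterTextDirection dir = clutter_actor_get_text_direction (actor);
  gboolean is_rtl = (dir == CLUTTER_TEXT_DIRECTION_RTL);

  /* pack children right-to-left when the text direction says so */
  child_x = is_rtl ? container_width - child_width : 0.f;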
cogl_clip_push and cogl_clip_push_window_rect, which are now deprecated,
were used in various places internally, so this just switches to using
the replacement functions.
If an actor directly or indirectly calls clutter_actor_queue_relayout()
on itself from within the allocate() implementation it will cause a
relayout cycle. This is a condition that ClutterActor should check for,
and we should emit a warning if it occurs.
ClutterActor should check whether the current instance is being
destroyed and avoid performing operations like:
• queueing redraws
• queueing relayouts
It should also warn if the actor is being parented to an actor
currently being destroyed.
When showing a warning in the state checks we perform to verify that
the invariants are maintained when showing, mapping and realizing, we
should also print out the name of the actor failing the checks. If the
actor has no name, the GType name should be used as a fallback.
When an actor is hidden, the parent actor is not required to
size request or allocate it. (ClutterGroup does, but, for example,
NbtkBoxLayout doesn't.) This means that the
needs_width_request/needs_height_request/needs_allocate can be
stale when we go to show it again - they are set for the actor
but not the parent. Explicitly setting them to FALSE avoids
clutter_actor_relayout() improperly short-circuiting.
http://bugzilla.openedhand.com/show_bug.cgi?id=1831
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The IN_DESTRUCTION flag is set around the unrealization and disposal of
the actor in clutter_actor_destroy() but is never unset (it's set twice
instead).
Reviewed-by: Emmanuele Bassi <ebassi@linux.intel.com>
ClutterClone bases its preferred size on the preferred size of
the source actor, so it needs to invalidate its cached preferred
size when the preferred size of the source actor changes.
In order for this to work, we need to have notification when
the size of the source actor changes, so add a ::queue-relayout
signal to ClutterActor.
Then connect to this from ClutterClone and queue a relayout
on the clone when a relayout is queued on the source.
http://bugzilla.openedhand.com/show_bug.cgi?id=1755
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
It is possible to unset the size of an actor specified with set_width()
and set_height() by using:
clutter_actor_set_size (actor, -1, -1);
Which works by unsetting the :min-*-set and the :natural-*-set properties.
Calling set_width(-1) and set_height(-1) separately, though, doesn't work
thus implicitly breaking the assumption that set_size() is nothing more
than set_width()+set_height(). This was obviously due to the fact that
pre-1.0 set_width() and set_height() took an unsigned integer as an
argument.
ClutterActor parses positional and dimensional properties with a
custom deserializer. We need to:
- handle G_TYPE_INT64, the default integer type for JSON-GLib
- use G_TYPE_FLOAT for properties, since Actor switched to it
for the pixel-based ones
This makes ClutterScript work again.
The "catch all" warning for a the mapped invariant violation is too
generic: it doesn't tell you why the invariant was broken in case
we are trying to map an unparented actor - e.g. through a Clone.
The vertex that should be used by the apply_relative_transform
is the one passed in as const, and the result should be placed
inside the non-const ClutterVertex. Currently, we are using
the latter, and thus the function is completely useless.
It would be useful inside a custom actor's paint function to be able to
tell if this is a primary paint call, or if we are in fact painting on
behalf of a clone.
In Mutter we have an optimization not to paint occluded windows; this is
desirable for the windows per se, to conserve bandwidth to the card, but
if something like an application switcher is using clones of these windows,
they will not get painted either; currently we have no way of
differentiating between the two.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1685
The clutter_actor_get_allocation_coords() is not used, and since
the switch to floats in the Actor's API, it returns exactly what
the get_allocation_box() returns.
Currently, the transformation matrix for an actor is constructed
from scenegraph-related accessors. An actor, though, can call COGL
API to add new transformations inside the paint() implementation,
for instance:
  static void
  my_foo_paint (ClutterActor *a)
  {
    ...
    cogl_translate (-scroll_x, -scroll_y, 0);
    ...
  }
Unfortunately these transformations will be completely ignored by
the scenegraph machinery; for instance, getting the actor-relative
coordinates from event coordinates is going to break badly because
of this.
In order to make the scenegraph aware of the potential of additional
transformations, we need a ::apply_transform() virtual function. This
vfunc will pass a CoglMatrix which can be used to apply additional
operations:
  static void
  my_foo_apply_transform (ClutterActor *a, CoglMatrix *m)
  {
    CLUTTER_ACTOR_CLASS (my_foo_parent_class)->apply_transform (a, m);
    ...
    cogl_matrix_translate (m, -scroll_x, -scroll_y, 0);
    ...
  }
The ::paint() implementation will be called with the actor already
using the newly applied transformation matrix, as expected:
  static void
  my_foo_paint (ClutterActor *a)
  {
    ...
  }
The ::apply_transform() implementations *must* chain up, so that the
various transformations of each class are preserved. The default
implementation inside ClutterActor applies all the transformations
defined by the scenegraph-related accessors.
Actors performing transformations inside the paint() function will
continue to work as previously.
The clutter_actor_pick() function just emits the ::pick signal
on the actor. Nobody should be using it, since the paint() method
is already context sensitive and will result in a ::pick emission
by itself. The clutter_actor_pick() is just confusing things.
The clutter_context_get_default() function is private, but shared
across Clutter. For this reason, it should be prefixed by '_' so
that the symbol is hidden from the shared object.
ActorBox should have methods for easily extracting the X and Y
coordinates of the origin, and the width and height separately.
These methods will make it easier for high-level language bindings
to manipulate ActorBox instances and avoid the Geometry type.
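For example:

  gfloat x, y, width, height;

  x = clutter_actor_box_get_x (&box);
  y = clutter_actor_box_get_y (&box);
  width  = clutter_actor_box_get_width (&box);
  height = clutter_actor_box_get_height (&box);

  /* or, in one call each: */
  clutter_actor_box_get_origin (&box, &x, &y);
  clutter_actor_box_get_size (&box, &width, &height);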
The Vertex and ActorBox boxed types are meant to be used across
the API, but are fairly difficult to bind. Their memory management
is also unclear, and has to go through the indirection of
g_boxed_copy() and g_boxed_free().
We can't short-circuit the emission of ::queue-redraw for not-visible
actors, since ClutterClone uses that signal to know when things need
to be redrawn.
Calling clutter_actor_queue_redraw() out of clutter_actor_real_map() /
clutter_actor_real_unmap() was causing the flag state to get set
incorrectly from _clutter_actor_set_enable_paint_unmapped(), because
a paint queueing a redraw was not expected.
Moving queuing the redraw to clutter_actor_hide()/show() fixes this, and
also fixes a problem where showing a child of a cloned actor wouldn't
cause the clone to be repainted.
http://bugzilla.openedhand.com/show_bug.cgi?id=1484
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Due to the accumulation of floating point errors, natural_width
and min_width can diverge significantly even if the math for
computing them is correct. So just clamp natural_width to
min_width instead of warning about it.
http://bugzilla.openedhand.com/show_bug.cgi?id=1632
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
If we are cloning an source actor with an unmapped parent, then when
we temporarily map the source actor:
- We need to skip the check that a mapped actor has a mapped
parent.
- We need to realize the actor's parents before mapping it,
or we'll get an assertion failure in clutter_actor_update_map_state()
because an actor with an unmapped parent is !may_be_realized.
http://bugzilla.openedhand.com/show_bug.cgi?id=1633
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Clutter short-circuits painting when an actor's opacity is
zero. However if the actor is being painted from a ClutterClone then
it will be painted using the clone's opacity instead so the test was
broken.
The mapping and unmapping of the X11 stage implementation is
a bit bong. It's asynchronous, for starters, when it really
can avoid it by tracking the state internally.
The ordering of the map/unmap sequence is also broken with
respect to the resizing.
By tracking the state internally into StageX11 we can safely
remove the MapNotify and UnmapNotify X event handling.
In theory, we should use _NET_WM_STATE a lot more, and reuse
the X11 state flags for fullscreening as well.
Sometimes it is useful to be able to track changes in the allocation
flags, like the absolute origin, inside children of a container.
Using the notify::allocation signal is not enough, in these cases, so
we need a specific signal that gives us both the allocation box and the
allocation flags.
Instead of passing a boolean value, the ::allocate virtual function
should use a bitmask and flags. This gives us room for expansion
without breaking API/ABI, and allows us to encode more information in
the allocation process instead of just changes of absolute origin.
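A sketch of a container's allocate() with the flags-based signature
(MyContainer and its single child are hypothetical):

  static void
  my_container_allocate (ClutterActor           *actor,
                         const ClutterActorBox  *box,
                         ClutterAllocationFlags  flags)
  {
    MyContainerPrivate *priv = MY_CONTAINER (actor)->priv;
    ClutterActorBox child_box = { 0.f, 0.f, 100.f, 100.f };

    /* chain up to store the allocation */
    CLUTTER_ACTOR_CLASS (my_container_parent_class)->allocate (actor, box,
                                                               flags);

    /* CLUTTER_ABSOLUTE_ORIGIN_CHANGED tells us our on-screen position
     * moved even if the box itself did not change */

    /* pass the same flags down when allocating children */
    clutter_actor_allocate (priv->child, &child_box, flags);
  }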
Units as they have been implemented since Clutter 0.4 have always been
misdefined as "logical distance unit", while they were just pixels with
fractional bits.
Units should be reworked to be opaque structures to hold a value and
its unit type, that can be then converted into pixels when Clutter needs
to paint or compute size requisitions and perform allocations.
The previous API should be completely removed to avoid collisions, and
a new type:
ClutterUnits
should be added; the ability to install GObject properties using
ClutterUnits should be maintained.
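Usage would then look along these lines, with conversion to pixels
deferred until Clutter needs to paint or allocate:

  ClutterUnits units;

  clutter_units_from_em (&units, 2.0f);
  g_print ("2em  = %.2f px\n", clutter_units_to_pixels (&units));

  clutter_units_from_mm (&units, 10.0f);
  g_print ("10mm = %.2f px\n", clutter_units_to_pixels (&units));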
If the application code calls for destruction of an actor we need
to make sure that the actor is unrealized before running the dispose
sequence; otherwise, we might trigger an assertion failure on composite
actors.
All the underlying implementation and the public entry points have
been switched to floats; the only missing bits are the Actor properties
that deal with positioning and sizing.
This usually means a major pain when dealing with GValues and varargs
functions. While GValue will warn you when dealing with the wrong
conversions, varargs will simply die a horrible (and hard to debug)
death via segfault. Nothing much to do here, except warn people in the
release notes and hope for the best.
The allocate_available_size() method is a convenience method in
the same spirit as allocate_preferred_size(). While the latter
will allocate the preferred size of an actor regardless of the
available size provided by the actor's parent -- and thus it's
suitable for simple fixed layout managers like ClutterGroup -- the
former will take into account the available size provided by the
parent and never allocate more than that; it is, thus, suitable
for simple fluid layout managers.
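A fluid container's allocate() could then hand each child whatever space
is left, for example (the offsets and variables are hypothetical):

  /* never allocates more than the available area, clamped to the
   * child's preferred size */
  clutter_actor_allocate_available_size (child,
                                         10.f, 10.f,  /* x, y */
                                         available_width  - 20.f,
                                         available_height - 20.f,
                                         flags);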
If we are short-circuiting the paint when the opacity is zero we still
need to clear the queued_redraw flag; otherwise it won't be possible to
queue another redraw of the actor until something else has caused a
paint first.