Continuing in the tradition of making clutter_actor_animate() the
next g_object_connect(), here's the addition of the signal-after::
and signal-swapped:: modifiers for the automagic signal connection
arguments.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1646
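As a rough sketch (the callback and property values are invented for
illustration), the new modifiers are expected to follow the same varargs
style as the existing signal:: keys:

  static void
  on_fade_done (ClutterAnimation *animation, gpointer user_data)
  {
    g_print ("fade finished\n");
  }

  static void
  fade_out (ClutterActor *actor)
  {
    /* connect after the default handler, as signal-after:: implies */
    clutter_actor_animate (actor, CLUTTER_EASE_OUT_CUBIC, 250,
                           "opacity", 0,
                           "signal-after::completed",
                           G_CALLBACK (on_fade_done), NULL,
                           NULL);
  }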
In order to be ready for the next major version of GLib we need to
disable single header inclusion by using the G_DISABLE_SINGLE_INCLUDES
define in the build process.
We can't short-circuit the emission of ::queue-redraw for not-visible
actors, since ClutterClone uses that signal to know when things need
to be redrawn.
Calling clutter_actor_queue_redraw() out of clutter_actor_real_map() /
clutter_actor_real_unmap() was causing the flag state to get set
incorrectly from _clutter_actor_set_enable_paint_unmapped(), because
a paint queueing a redraw was not expected.
Moving queuing the redraw to clutter_actor_hide()/show() fixes this, and
also fixes a problem where showing a child of a cloned actor wouldn't
cause the clone to be repainted.
http://bugzilla.openedhand.com/show_bug.cgi?id=1484
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
If we have a not-visible texture pixmap, we need to:
- Still update it if it is realized, since it won't be
updated when shown. And it might also be cloned.
- Queue a redraw even if not visible, since it
might be cloned.
http://bugzilla.openedhand.com/show_bug.cgi?id=1647
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
My patch to choose a premultiplied format when the user gives
COGL_PIXEL_FORMAT_ANY for the internal_format broke the case where the data
in question doesn't have an alpha channel.
This was accidentally missed when merging the premultiplication branch
since I merged a local version of the branch that missed this commit.
Although the underlying materials should allow layers with INVALID_HANDLES
it shouldn't be necessary to expose that via cogl_set_source_texture() and
it's easier to resolve a warning/crash here than odd artefacts/crashes later
in the pipeline.
Merge branch 'premultiplication'
[cogl-texture docs] Improves the documentation of the internal_format args
[test-premult] Adds a unit test for texture upload premultiplication semantics
[fog] Document that fogging only works with opaque or unmultipled colors
[test-blend-strings] Explicitly request RGBA_888 tex format for test textures
[premultiplication] Be more conservative with what data gets premultiplied
[bitmap] Fixes _cogl_bitmap_fallback_unpremult
[cogl-bitmap] Fix minor copy and paste error in _cogl_bitmap_fallback_premult
Avoid unnecesary unpremultiplication when saving to local data
Don't unpremultiply Cairo data
Default to a blend function that expects premultiplied colors
Implement premultiplication for CoglBitmap
Use correct texture format for pixmap textures and FBO's
Add cogl_color_premultiply()
Clarifies that if you give COGL_PIXEL_FORMAT_ANY as the internal format for
cogl_texture_new_from_file or cogl_texture_new_from_data then Cogl will
choose a premultiplied internal format.
The fixed function fogging provided by OpenGL only works with unmultiplied
colors (or if the color has an alpha of 1.0) so since we now premultiply
textures and colors by default, a note to this effect has been added to
clutter_stage_set_fog and cogl_set_fog.
test-depth.c no longer uses clutter_stage_set_fog for this reason.
In the future when we can depend on fragment shaders we should also be
able to support fogging of premultiplied primitives.
We don't want to force texture data to be premultiplied if the user
explicitly specifies a non premultiplied internal_format such as
COGL_PIXEL_FORMAT_RGBA_8888. So now Cogl will only automatically
premultiply data when COGL_PIXEL_FORMAT_ANY is given for the
internal_format, or a premultiplied internal format such as
COGL_PIXEL_FORMAT_RGBA_8888_PRE is requested but non-premultiplied source
data is given.
This approach is consistent with OpenVG image formats which have already
influenced Cogl's pixel format semantics.
The _cogl_unpremult_alpha_{first,last} functions which
_cogl_bitmap_fallback_unpremult depends on were incorrectly casting each
of the byte components of a texel to a gulong and performing shifts as
if it were dealing with the whole texel.
It now just uses array indexing to access the byte components without
needing to cast or manually shift any bits around.
Even though we used to depend on unpremult whenever we used a
ClutterCairoTexture, clutter_cairo_texture_context_destroy had its own
unpremult code that worked, which is why this bug wouldn't have been
noticed before.
Now that we typically have premultiplied data stored in Cogl
textures, when fetching a texture into local data for temporary
storage, use a premultiplied format to avoid an unpremultiply/
premultiply roundtrip.
http://bugzilla.openedhand.com/show_bug.cgi?id=1406
Signed-off-by: Robert Bragg <robert@linux.intel.com>
Instead of unpremultiplying the Cairo data, pass it directly to Cogl
in premultiplied form; we now *prefer* premultiplied data.
http://bugzilla.openedhand.com/show_bug.cgi?id=1406
Signed-off-by: Robert Bragg <robert@linux.intel.com>
Many operations, like mixing two textures together or alpha-blending
onto a destination with alpha, are done most logically if texture data
is in premultiplied form. We also have many sources of premultiplied
texture data, like X pixmaps, FBOs, cairo surfaces. Rather than trying
to work with two different types of texture data, simplify things by
always premultiplying texture data before uploading to GL.
Because the default blend function is changed to accommodate this,
uses of pure-color CoglMaterial need to be adapted to add
premultiplication.
gl/cogl-texture.c gles/cogl-texture.c: Always premultiply
non-premultiplied texture data before uploading to GL.
cogl-material.c cogl-material.h: Switch the default blend functions
to ONE, ONE_MINUS_SRC_ALPHA so they work correctly with premultiplied
data.
cogl.c: Make cogl_set_source_color() premultiply the color.
cogl.h.in color-material.h: Add some documentation about
premultiplication and its interaction with color values.
cogl-pango-render.c clutter-texture.c tests/interactive/test-cogl-offscreen.c:
Use premultiplied colors.
http://bugzilla.openedhand.com/show_bug.cgi?id=1406
Signed-off-by: Robert Bragg <robert@linux.intel.com>
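An illustrative sketch of what that adaptation looks like for a
pure-color material (the material handle and the color value are
assumed/arbitrary):

  CoglColor color;

  cogl_color_set_from_4ub (&color, 0xff, 0x00, 0x00, 0x80);
  /* pure-color materials now need premultiplied values, since the
   * default blend function expects them */
  cogl_color_premultiply (&color);
  cogl_material_set_color (material, &color);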
cogl-bitmap.c cogl-bitmap-pixbuf.c cogl-bitmap-fallback.c cogl-bitmap-private.h:
Add _cogl_bitmap_can_premult(), _cogl_bitmap_premult() and implement
a reasonably fast implementation in the "fallback" code.
http://bugzilla.openedhand.com/show_bug.cgi?id=1406
Signed-off-by: Robert Bragg <robert@linux.intel.com>
RGBA data in X pixmaps and in FBOs is already premultiplied; use
the right format when creating cogl textures.
http://bugzilla.openedhand.com/show_bug.cgi?id=1406
Signed-off-by: Robert Bragg <robert@linux.intel.com>
Since commit d743aeaa updated the internal copy of JSON-GLib and
added a new private header file, we need to fix the build to avoid
a distcheck failure.
Merge branch 'master-clock-updates'
* master-clock-updates: (22 commits)
Change the paint forcing on the Text cache text
[timelines] Improve marker hit check and don't fudge the delta
Revert "[timeline] Don't clamp the elapsed time when a looping tl reaches the end"
[tests] Don't add a newline to the end of g_test_message calls
[test-timeline] Add a marker at the beginning of the timeline
[timeline] Don't clamp the elapsed time when a looping tl reaches the end
[master-clock] Throttle if no redraw was performed
[docs] Update Clutter's API reference
Force a paint instead of calling clutter_redraw()
Fix clutter_redraw() to match the redraw cycle
Run the repaint functions inside the redraw cycle
Remove useless manual timeline ticking
Move elapsed-time calculations into ClutterTimeline
Limit the frame rate when not syncing to VBLANK
Decrease the main-loop priority of the frame cycle
Avoid motion-compression in test-picking test
Compress events as part of the frame cycle
Remove stage update idle and do updates from the master clock
Call g_main_context_wakeup() when we start running timelines
Remove unused msecs_delta member
...
Markers added at the start of the timeline need to be special cased
because we effectively check whether the marker time is greater than
the old frame time and less than or equal to the new frame time, but we
never actually emit a frame for the start of the timeline.
If the timeline is looping then it adjusts the position to interpolate
a wrapped around position. However we do not emit a new frame after
setting this position so we need to check for markers again there.
clutter_timeline_get_delta should return the actual wall clock time
between emissions of the new-frame signal - even if the elapsed time
has been fudged at the end of the timeline. To make this work it no
longer directly manipulates priv->msecs_delta but instead passes a
delta value to check_markers.
Someone might hide the previously focused actor in the focus-out signal
handler, and as key focus still appears to be on that actor we'd get a
re-entrant call and a GLib critical warning from g_object_weak_unref().
This changes clutter_stage_get_key_focus() to return stage/NULL during
focus-out signal emission. It used to return the actor that focus-out
was being emitted on.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1547
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
This reverts commit 9c5663d671.
The patch was causing problems for applications that expect the
elapsed_time to be at either end of the timeline when the completed
signal is fired. For example test-behave swaps the direction of the
timeline in the completed handler but if the time has overflowed the
end then the timeline would only take a short time to get back the
beginning. This caused the animation to just vibrate around the
beginning.
The new-frame signal of a timeline was previously guaranteed to be
emitted with the elapsed_time set to the end before it emits the
completed signal. This doesn't necessarily make sense for looping
timelines because it would cause the elapsed time to be clamped to a
slightly off value whenever the timeline restarts. This patch makes it
perform the wrap around before emitting the new-frame signal so that
the elapsed time always corresponds to the time elapsed since the
timeline was started.
Additionally it no longer fudges the msecs_delta property to make the
marker check work so clutter_timeline_get_delta will always return the
wall clock time since the last frame.
A flag in the master clock is now set whenever the dispatch caused an
actual redraw of a stage. If this flag is not set during the prepare
and check functions then it will resort to limiting the redraw
attempts to the default frame rate as if vblank syncing was
disabled. Otherwise if a timeline is running that does not cause the
scene to change then it would busy-wait with 100% CPU until the next
frame.
This fix was suggested by Owen Taylor in:
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
We do not need the whole redraw machinery inside
clutter_stage_read_pixels(): instead, we want Clutter to drop everything,
paint and call glReadPixels() with the current buffer.
The clutter_redraw() function is used by embedding toolkits to
force a redraw on a stage. Since everything is performed by
toggling a flag inside the Stage itself and then letting the
master clock advance, we need a ClutterStage method to ensure
that we start the master clock and redraw.
Instead of calculating a delta in the master clock, and passing that
into each timeline, make each timeline individually responsible for
remembering the last time and computing the delta.
This:
- Fixes a problem where we could spin infinitely processing
timeline-only frames with < 1msec differences.
- Makes timelines consistently start timing on the first frame;
instead of doing different things for the first started timeline
and other timelines.
- Improves accuracy of elapsed time computations by avoiding
accumulating microsecond => millisecond truncation errors.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
clutter-master-clock.c clutter-master-clock.h: When the
SYNC_TO_VBLANK feature is not available, wait for 1/frame_rate
seconds since the start of the last frame before drawing the next
frame. Add _clutter_master_clock_start_running() to abstract
the usage of g_main_context_wakeup()
clutter-stage.c: Add _clutter_master_clock_start_running()
clutter-main.c: Update docs for clutter_set_default_frame_rate()
clutter_get_default_frame_rate() to no longer talk about timeline
frame rates.
test-text-perf.c test-text.c: Set a frame rate of 1000fps so that
frame-rate limiting doesn't affect the result.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Change CLUTTER_PRIORITY_REDRAW to be lower than the GTK+ resize
and relayout priorities to avoid starving GTK+ when run in the
same process as clutter.
Remove the unused CLUTTER_PRIORITY_TIMELINE
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Instead of trying to guess about which motion events are
extraneous, queue up all events until we process a frame.
This allows us to look ahead and reliably compress consecutive
sequences of motion events.
clutter-main.c: Feed received events to the stage for queueing.
Remove old compression code. Remove clutter_get_motion_events_frequency()
clutter_set_motion_events_frequency()
clutter-stage.c: Keep a queue of pending events.
clutter-master-clock.c: Add processing of queued events to the
clock source dispatch function.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
When a redraw is queued on a stage, simply set a flag; then in
the check/prepare functions of the master clock source, check
for stages that need redrawing.
This avoids the complexity of having multiple competing sources
at the same priority and makes the update ordering more reliable and
understandable.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
If a timeline is added from a different thread, we need to
call g_main_context_wakeup() to wake the main thread up to
start updating the timeline.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Instead of keeping a list of all timelines, and connecting to
signals and weak notifies, simply keep a list of running timelines;
this greatly simplifies both the book-keeping, and also determining
if there are any running timelines.
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Remove code to advance the master clock after drawing a frame; if
there are any running timelines the master clock will do another
frame by itself, and the clock will be advanced before running
that frame.
With this change, there is no point in queueing an extra frame
redraw after completing a timeline, since we are always advancing
the timeline *before* redrawing, so remove that code as well.
(This does mean that calling clutter_timeline_stop() won't implicitly
cause the stage to be redrawn; this doesn't seem like something
an app should rely on in any case.)
http://bugzilla.openedhand.com/show_bug.cgi?id=1637
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The clutter_stage_fullscreen() and clutter_stage_unfullscreen() are
a GDK-ism. The underlying implementation is already using an accessor
with a boolean parameter.
This should bring the number of collisions between properties, methods
and signals down to zero.
The :fullscreen property is very much confusing as it is implemented.
It can be written to a value, but the whole process might fail. If we
set:
g_object_set (stage, "fullscreen", TRUE, NULL);
and the fullscreen process fails or it is not implemented, the value
will be reset to FALSE (if we're lucky) or left TRUE (most of the
time).
The writability is just a shorthand for invoking clutter_stage_fullscreen()
or clutter_stage_unfullscreen() depending on a boolean value without
using an if.
The :fullscreen property also greatly confuses high level languages,
since the same symbol is used:
- for a method name (Clutter.Stage.fullscreen())
- for a property name (Clutter.Stage.fullscreen)
- for a signal (Clutter.Stage::fullscreen)
For these reasons, the :fullscreen property should be renamed to :fullscreen-set
and be read-only. Implementations of the Stage should only emit the
StageState event to change from normal to fullscreen, and the Stage
will automatically update the value of the property and emit a notify
signal for it.
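A sketch of what callers would use instead, assuming the boolean
accessor pair mentioned above ends up named
clutter_stage_set_fullscreen()/clutter_stage_get_fullscreen():

  clutter_stage_set_fullscreen (CLUTTER_STAGE (stage), TRUE);

  /* the read-only state, updated from the StageState event */
  if (clutter_stage_get_fullscreen (CLUTTER_STAGE (stage)))
    g_print ("stage is fullscreen\n");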
There have been changes in JSON-GLib upstream to clean up the
data structures, and facilitate introspection.
We still do not use the updated JsonParser with the (private) JsonScanner
code, since it's a fork of GLib's GScanner.
Otherwise if there is an error before the slices are created it will
try to free the first_pixels array and crash.
It now also checks whether first_pixels has been created before using
it to update the mipmaps. This should only happen for
cogl_texture_new_from_foreign and doesn't matter if the FBO extension
is available. It would be better in this case to fetch the first pixel
using glGetTexImage as Owen mentioned in the last commit.
tex->first_pixels was never set for foreign textures, leading
to a crash when the texture object is freed.
As a quick fix, simply set to NULL. A more complete fix would
require remembering if we had ever seen the first pixel uploaded,
and if not, doing a glReadPixels to get it before triggering the
mipmap update.
http://bugzilla.openedhand.com/show_bug.cgi?id=1645
Signed-off-by: Neil Roberts <neil@linux.intel.com>
It's very common that there's no reasonable fallback to do if the
blend or combine string you set isn't supported. So, rather than
requiring everybody to pass in a GError purely to catch syntax errors,
automatically g_warning() if a parse error is encountered and @error
is NULL.
http://bugzilla.openedhand.com/show_bug.cgi?id=1642
Signed-off-by: Robert Bragg <robert@linux.intel.com>
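A sketch of the resulting behaviour (the material handle is assumed, and
the blend string is just the documented premultiplied default used as an
example):

  /* with a NULL GError, a syntax error now triggers a g_warning()
   * instead of failing silently */
  cogl_material_set_blend (material,
                           "RGBA = ADD (SRC_COLOR, DST_COLOR * (1 - SRC_COLOR[A]))",
                           NULL);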
When we complete a timeline, we clamp the elapsed_time variable
to the range of the timeline. We need to adjust msecs_delta so that
when we check for hit markers we have the correct interval.
http://bugzilla.openedhand.com/show_bug.cgi?id=1641
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The Animation should be referenced during the notification of the
alpha value, since the callback is invoked depending on the Alpha
and it won't vivify the Animation instance for us.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1537
ClutterEvent is not really gobject-introspection friendly because
of the whole discriminated union thing. In particular, if you get
a ClutterEvent in a signal handler, you probably can't access the
event-type-specific fields, and you probably can't call methods
like clutter_key_event_symbol() either, because you can't cast the
ClutterEvent to a ClutterKeyEvent.
The cleanest solution is to turn every accessor into a ClutterEvent
method, accepting a ClutterEvent* and internally checking the event
type.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1585
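An illustrative sketch of the resulting style, assuming accessors such
as clutter_event_type() and clutter_event_get_key_symbol():

  static gboolean
  on_event (ClutterActor *actor, ClutterEvent *event, gpointer user_data)
  {
    if (clutter_event_type (event) == CLUTTER_KEY_PRESS)
      g_print ("key symbol: %u\n", clutter_event_get_key_symbol (event));

    return FALSE;
  }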
According to the clutter_texture_set_cogl_texture() documentation you
should unref the handle, as the texture takes its own reference.
Signed-off-by: Robert Bragg <robert@linux.intel.com>
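A sketch of the intended pattern (the file name and format arguments are
illustrative):

  CoglHandle cogl_tex = cogl_texture_new_from_file ("image.png",
                                                    COGL_TEXTURE_NONE,
                                                    COGL_PIXEL_FORMAT_ANY,
                                                    NULL);

  clutter_texture_set_cogl_texture (CLUTTER_TEXTURE (texture), cogl_tex);
  /* the ClutterTexture now holds its own reference */
  cogl_handle_unref (cogl_tex);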
The OpenGL spec states that if you create a pixmap using glXCreatePixmap you
should use glXDestroyPixmap to destroy it.
Signed-off-by: Robert Bragg <robert@linux.intel.com>
Setting the pixmap for an unrealized ClutterGLXTexturePixmap should
not cause it to be realized, and certainly shouldn't cause the
REALIZED flag to be set without using clutter_actor_realize().
This patch uses the simple approach that:
- pixmap changes on an unrealized ClutterGLXTexturePixmap
are ignored
- when the ClutterGLXTexturePixmap is realized, we then create
the GLXPixmap and the corresponding texture.
The call to clutter_glx_texture_pixmap_update_area() is moved
from create_cogl_texture() to
clutter_glx_texture_pixmap_create_glx_pixmap() since
create_cogl_texture() is only called from one place, and updating
the area is really something we do *after* creating the texture,
not part of creating the texture.
clutter_glx_texture_pixmap_create_glx_pixmap() is reorganized a
bit to avoid debug-logging confusingly if it's called before a pixmap
has been set, and for readability.
http://bugzilla.openedhand.com/show_bug.cgi?id=1635
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
An implementation of realize() never needs to set the
CLUTTER_ACTOR_REALIZED flag, though it can unset the flag if
things fail unexpectedly. (Previously, stage backend implementations
had to do this since clutter_actor_realize() wasn't used; this
is no longer the case.)
http://bugzilla.openedhand.com/show_bug.cgi?id=1634
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Due to the accumulation of floating point errors, natural_width
and min_width can diverge significantly even if the math for
computing them is correct. So just clamp natural_width to
min_width instead of warning about it.
http://bugzilla.openedhand.com/show_bug.cgi?id=1632
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
If we use float temporaries when computing the bounds of
a group, then, depending on what variables are kept in registers
and what stored on the stack, the accumulated difference between
natural_width and min_width can be more than FLOAT_EPSILON.
Using double temporaries will eliminate the difference in most
cases, or, very rarely, reduce it to a last-bit error.
http://bugzilla.openedhand.com/show_bug.cgi?id=1632
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
If we are cloning a source actor with an unmapped parent, then when
we temporarily map the source actor:
- We need to skip the check that a mapped actor has a mapped
parent.
- We need to realize the actor's parents before mapping it,
or we'll get an assertion failure in clutter_actor_update_map_state()
because an actor with an unmapped parent is !may_be_realized.
http://bugzilla.openedhand.com/show_bug.cgi?id=1633
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Setting the stage size using clutter_actor_set_size() is almost always
wrong: the X11 stage implementation should save the size and queue a
relayout -- like it does when receiving a ConfigureNotify. The same
should happen when setting it to be full screen.
Since we build the Cogl GIR inside /clutter/cogl we should be looking
there when building the Clutter GIR. Otherwise g-ir-scanner will look
inside the gir directory -- and if you never built Clutter before it
will error out.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1638
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The load-finished signal has a GError* argument which is meant to
signify whether the loading was successful. However many of the
places in ClutterTexture that emit this signal directly pass their
'error' variable which is a GError** and will be NULL or not
completely independently of whether there was an error. If the
argument was dereferenced it would probably crash.
The test-texture-async interactive test case should also verify
that the ::load-finished signal is correctly emitted.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1622
Correctly apply De Morgan's laws to the short-circuit test in
clutter_timeline_pause(); it was short-circuiting always and
never actually pausing.
http://bugzilla.openedhand.com/show_bug.cgi?id=1629
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The commit 2c95b378 prevents clutter_animation_setup_property from being
called with fixed:: property names. This patch adds an additional
parameter "is_fixed" to clutter_animation_setup_property instead of
searching for "fixed::" in property_name.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
It should be possible to render a single PangoLayout with different
colors without recalculating the layout. This was not working because
the color used at the first edit was being stored in the display
list. This broke changing the opacity on a ClutterText.
Now each node in the display list has a 'color override' flag which
marks whether it should use the base color or not. The base color is
now passed in from _cogl_pango_display_list_render_texture. The alpha
value is always taken from the base color.
The clutter_redraw() function is used by libraries embedding
Clutter inside another toolkit, instead of queueing a redraw
on the embedded stage. This means that clutter_redraw() should
perform the same sequence of actions done by the redraw idle
callback.
Clutter short-circuits painting when an actor's opacity is
zero. However if the actor is being painted from a ClutterClone then
it will be painted using the clone's opacity instead so the test was
broken.
* 1.0-integration: (138 commits)
[x11] Disable XInput by default
[xinput] Invert the XI extension version check
[cogl-primitives] Fix an unused variable warning when building GLES
[clutter-stage-egl] Pass -1,-1 to clutter_stage_x11_fix_window_size
Update the GLES backend to have the layer filters in the material
[gles/cogl-shader] Add a missing semicolon
[cogl] Move the texture filters to be a property of the material layer
[text] Fix Pango unit to pixels conversion
[actor] Force unrealization on destroy only for non-toplevels
[x11] Rework map/unmap and resizing
[xinput] Check for the XInput entry points
[units] Validate units against the ParamSpec
[actor] Add the ::allocation-changed signal
[actor] Use flags to control allocations
[units] Rework Units into logical distance value
Remove a stray g_value_get_int()
Remove usage of Units and macros
[cogl-material] Allow setting a layer with an invalid texture handle
[timeline] Remove the concept of frames from timelines
[gles/cogl-shader] Fix parameter spec for cogl_shader_get_info_log
...
Conflicts:
configure.ac
The XInput support in Clutter is still using XI 1.x. This will never
work correctly, and we are all waiting for XInput 2 anyway. The changes
internally should be minimal, so we can leave everything in place, but
it's better to disable XInput support by default -- at least for the
time being.
The texture filters are now a property of the material layer rather
than the texture object. Whenever a texture is painted with a material
it sets the filters on all of the GL textures in the Cogl texture. The
filter is cached so that it won't be changed unnecessarily.
The automatic mipmap generation has changed so that the mipmaps are
only generated when the texture is painted instead of every time the
data changes. Changing the texture sets a flag to mark that the
mipmaps are dirty. This works better if the FBO extension is available
because we can use glGenerateMipmap. If the extension is not available
it will temporarily enable automatic mipmap generation and reupload
the first pixel of each slice. This requires tracking the data for the
first pixel.
The COGL_TEXTURE_AUTO_MIPMAP flag has been replaced with
COGL_TEXTURE_NO_AUTO_MIPMAP so that it will default to
auto-mipmapping. The mipmap generation is now effectively free if you
are not using a mipmap filter mode so you would only want to disable
it if you had some special reason to generate your own mipmaps.
ClutterTexture no longer has to store its own copy of the filter
mode. Instead it stores it in the material and the property is
directly set and read from that. This fixes problems with the filters
getting out of sync when a cogl handle is set on the texture
directly. It also avoids the mess of having to rerealize the texture
if the filter quality changes to HIGH because Cogl will take care of
generating the mipmaps if needed.
The mapping and unmapping of the X11 stage implementation is
a bit bong. It's asynchronous, for starters, when it really
can avoid it by tracking the state internally.
The ordering of the map/unmap sequence is also broken with
respect to the resizing.
By tracking the state internally into StageX11 we can safely
remove the MapNotify and UnmapNotify X event handling.
In theory, we should use _NET_WM_STATE a lot more, and reuse
the X11 state flags for fullscreening as well.
Apparently, the XInput extension is using the same pkg-config
file ('xi') for both the 1.x and the 2.x API, so we need to
check for both the 1.x XGetExtensionVersion and the 2.x
XQueryInputVersion.
When declaring a property using ClutterParamSpecUnits we pass a
default type to limit the type of units we accept as valid values
for the property.
This means that we need to add the unit type check as part of the
validation process.
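For example, a property declared with a pixel default type should now
reject values expressed in another unit type during validation; a
sketch, assuming the clutter_param_spec_units() signature:

  pspec = clutter_param_spec_units ("width", "Width",
                                    "The width of the actor",
                                    CLUTTER_UNIT_PIXEL,
                                    0.0, G_MAXFLOAT, 0.0,
                                    G_PARAM_READWRITE);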
Sometimes it is useful to be able to track changes in the allocation
flags, like the absolute origin, inside children of a container.
Using the notify::allocation signal is not enough, in these cases, so
we need a specific signal that gives us both the allocation box and the
allocation flags.
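A sketch of a container child tracking its absolute origin through the
new signal (the child variable is assumed):

  static void
  on_allocation_changed (ClutterActor           *actor,
                         const ClutterActorBox  *box,
                         ClutterAllocationFlags  flags,
                         gpointer                user_data)
  {
    if (flags & CLUTTER_ABSOLUTE_ORIGIN_CHANGED)
      g_print ("absolute origin changed\n");
  }

  /* e.g. when the child is added to the container */
  g_signal_connect (child, "allocation-changed",
                    G_CALLBACK (on_allocation_changed), NULL);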
Instead of passing a boolean value, the ::allocate virtual function
should use a bitmask and flags. This gives us room for expansion
without breaking API/ABI, and allows to encode more information to
the allocation process instead of just changes of absolute origin.
Units as they have been implemented since Clutter 0.4 have always been
misdefined as "logical distance unit", while they were just pixels with
fractional bits.
Units should be reworked to be opaque structures to hold a value and
its unit type, that can be then converted into pixels when Clutter needs
to paint or compute size requisitions and perform allocations.
The previous API should be completely removed to avoid collisions, and
a new type:
ClutterUnits
should be added; the ability to install GObject properties using
ClutterUnits should be maintained.
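A sketch of the reworked type in use (the function names follow the
ClutterUnits naming above and are assumptions at this stage):

  ClutterUnits units;
  gfloat pixels;

  clutter_units_from_em (&units, 2.0);
  pixels = clutter_units_to_pixels (&units);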
It was previously possible to create a material layer with no texture
by setting some property on it such as the matrix. However it was not
possible to get back to that state without removing the layer and
recreating it. It is useful to be able to remove the texture to free
resources without forgetting the state of the layer so we can put a
different texture in later.
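A sketch of the use case: dropping the texture while keeping the
layer's matrix and other state (material and new_texture are assumed):

  /* free the texture resources... */
  cogl_material_set_layer (material, 0, COGL_INVALID_HANDLE);

  /* ...and put a different texture back in later */
  cogl_material_set_layer (material, 0, new_texture);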
Timelines no longer work in terms of a frame rate and a number of
frames but instead just have a duration in milliseconds. This better
matches the working of the master clock where if any timelines are
running it will redraw as fast as possible rather than limiting to the
lowest rated timeline.
Most applications will just create animations and expect them to
finish in a certain amount of time without caring about how many
frames are drawn. If a frame is going to be drawn it might as well
update all of the animations to some fraction of the total animation
rather than rounding to the nearest whole frame.
The 'frame_num' parameter of the new-frame signal is now 'msecs' which
is a number of milliseconds progressed along the
timeline. Applications should use clutter_timeline_get_progress
instead of the frame number.
Markers can now only be attached at a time value. The position is
stored in milliseconds rather than at a frame number.
test-timeline-smoothness and test-timeline-dup-frames have been
removed because they no longer make sense.
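A sketch of the duration-only API described above (the callback body is
illustrative):

  static void
  on_new_frame (ClutterTimeline *timeline, gint msecs, gpointer user_data)
  {
    gdouble progress = clutter_timeline_get_progress (timeline);

    g_print ("%d ms elapsed, %.0f%% done\n", msecs, progress * 100.0);
  }

  static void
  start_animation (void)
  {
    ClutterTimeline *timeline = clutter_timeline_new (2000); /* 2 seconds */

    clutter_timeline_add_marker_at_time (timeline, "half-way", 1000);
    g_signal_connect (timeline, "new-frame",
                      G_CALLBACK (on_new_frame), NULL);
    clutter_timeline_start (timeline);
  }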
The clutter_actor_map and unmap functions need to be called to
properly update the mapped state. This matches the changes to the X11
stage in 125bded8.
If the application code calls for destruction of an actor we need
to make sure that the actor is unrealized before running the dispose
sequence; otherwise, we might trigger an assertion failure on composite
actors.
The commit 762873e79e is completely
and utterly wrong and I should have never pushed it.
Serves me well for trying to work on three different branches and
on three different things.
Currently, the clock source spins a redraw every time there is at
least one timeline running. If the timelines were not advanced in
the previous frame, though, because their interval is larger than
the vblanking interval then this will lead to excessive redraws of
the scenegraph even if nothing has changed.
To avoid this a simple guard should be set by the MasterClock::advance
method in case no timeline was effectively advanced, and checked
before dispatching the stage redraws.
When creating a Cogl texture from a Cogl bitmap it would steal the
data by setting the bitmap_owner flag and clearing the data pointer
from the bitmap. The data would be freed by the time the
new_from_bitmap is finished. There is no reason to do this because the
data will be freed when the Cogl bitmap is unref'd and it is confusing
not to be able to reuse the bitmap for creating multiple textures.
clutter_color_from_string() only supported the "#rrggbbaa" format with
alpha channel, this patch adds support for "#rgba".
Colors in "#rrggbb" format were parsed manually, this is now left to
the pango color parsing fallback, since that's handling it just fine.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
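Both forms now parse; a quick sketch (the color values are arbitrary):

  ClutterColor color;

  clutter_color_from_string (&color, "#ff00aa88"); /* #rrggbbaa */
  clutter_color_from_string (&color, "#f0a8");     /* new #rgba form */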
The cogl_shader_get_info_log() function is very inconvenient for
language bindings and for regular use, as it requires a static
buffer to be filled -- basically just providing a wrapper around
glGetInfoLogARB().
Since COGL aims to be a more convenient API than raw GL we should
just make cogl_shader_get_info_log() return an allocated string
with the GLSL compiler log.
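A sketch of the resulting usage, assuming the string is returned newly
allocated and owned by the caller (the shader handle is assumed):

  char *log = cogl_shader_get_info_log (shader);

  if (!cogl_shader_is_compiled (shader))
    g_warning ("GLSL compilation failed: %s", log);

  g_free (log);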
Instead of using GL_TRIANGLES and uploading the indices every time, it
now uses GL_QUADS instead on OpenGL. Under GLES it still uses indices
but it uses the new cogl_vertex_buffer_indices_get_for_quads function
to avoid uploading the vertices every time.
This requires the _cogl_vertex_buffer_indices_pointer_from_handle
function to be exposed privately to the rest of Cogl.
The static_indices array has been removed from the Cogl context.
The GIR file for Clutter still contains symbols from COGL, even
though we provide a Cogl GIR as well. The Clutter GIR should
depend on the Cogl GIR instead.
All the underlying implementation and the public entry points have
been switched to floats; the only missing bits are the Actor properties
that deal with positioning and sizing.
This usually means a major pain when dealing with GValues and varargs
functions. While GValue will warn you when dealing with the wrong
conversions, varargs will simply die a horrible (and hard to debug)
death via segfault. Nothing much to do here, except warn people in the
release notes and hope for the best.
The documentation for ClutterTexture's set_from_rgb_data() and
set_from_yuv_data() says:
Note: This function is likely to change in future versions.
This is not true, since they'll remain for the whole 1.x API cycle.
Now that CoglVertexBuffers support indices we can use them with GLES
to avoid duplicating vertices. Regular GL still uses GL_QUADS because
it is shown to still have a performance benefit over indices with the
Intel drivers.
This function can be used as an efficient way of drawing groups of
quads without using GL_QUADS. It generates a VBO containing the
indices needed to render using pairs of GL_TRIANGLES. The VBO is
globally cached so that it only needs to be uploaded whenever more
indices are requested than ever before.
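A sketch of how a caller would use the shared indices (the vertex
buffer handle and n_quads are assumed to exist):

  CoglHandle indices =
    cogl_vertex_buffer_indices_get_for_quads (n_quads * 6);

  cogl_vertex_buffer_draw_elements (buffer, COGL_VERTICES_MODE_TRIANGLES,
                                    indices,
                                    0, n_quads * 4 - 1, /* min/max index */
                                    0, n_quads * 6);    /* offset, count */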
The allocate_available_size() method is a convenience method in
the same spirit as allocate_preferred_size(). While the latter
will allocate the preferred size of an actor regardless of the
available size provided by the actor's parent -- and thus it's
suitable for simple fixed layout managers like ClutterGroup -- the
former will take into account the available size provided by the
parent and never allocate more than that; it is, thus, suitable
for simple fluid layout managers.
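A sketch of a fluid container's allocate() implementation using the new
method (the variable names are illustrative):

  /* never hands the child more than the parent made available */
  clutter_actor_allocate_available_size (child,
                                         child_x, child_y,
                                         available_width,
                                         available_height,
                                         flags);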
The cogl-enum-types.h file is created by glib-mkenums under
/clutter/cogl/common, and then copied in /clutter/cogl in order
to make the inclusion of that file work inside cogl.h.
Since we're copying it in a different location, the Makefile
for that location has to clean up the copy.
Notifications should be fired off from both the internal timeline and
the wrapping animation here, so notifiers should be frozen around these
property setters.
Signed-off-by: Jonas Bonn <jonas@southpole.se>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Just a couple of final cleanups after the reimplementation of the
Animation model.
i) _set_mode does not need to set the timeline on the alpha
ii) freeze notifications around the setting of a new alpha
Signed-off-by: Jonas Bonn <jonas@southpole.se>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The "started" signal is sent first after the timeline has been set to the
'running' state. For this reason, checking if the clock has any
timelines running will always return true in the "started" signal handler:
the timeline that sent the signal is running.
What needs to be checked in the signal handler is if there are any
timelines running other than the one that emitted the ::started signal,
which we know is running anyway.
This prevents frames from being lost at the beginning of an animation when
a timeline is started after a quiescent period.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1617
Signed-off-by: Jonas Bonn <jonas@southpole.se>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
We avoid rebuilding cogl-enum-types.h and cogl-enum-types.c by
using a "guard" -- a stamp file that keeps make from re-running the
rule. Since we need cogl-enum-types.h in /clutter/cogl as well for the
cogl.h include to work, copying cogl-enum-types.h unconditionally
would trigger a rebuild of the whole of COGL -- and thus a full
rebuild.
To solve this, we can copy the header file when generating it
under the stamp file.
The libclutter-cogl internal object should be the only dependency
for Clutter, since we are already copying it inside clutter/cogl
for the introspection scanner. For this reason, the backend-specific,
real internal object should be built with the backend encoded into
the file name, like libclutter-common. This makes the build output
a little bit more clear: instead of having two:
LINK libclutter-cogl-common.la
...
LINK libclutter-cogl.la
LINK libclutter-cogl.la
We'll have:
LINK libclutter-cogl-common.la
...
LINK libclutter-cogl-gl.la
LINK libclutter-cogl.la
Same applies for the GLES backend.
Just like we do with GObject types and G_DEFINE_TYPE, we should
use the g_once_init_enter/g_once_init_leave mechanism to make the
GType registration of enumeration types thread safe.
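A minimal sketch of the pattern, using one of the COGL flag types as an
example (the generated code is normally produced by glib-mkenums):

  GType
  cogl_texture_flags_get_type (void)
  {
    static volatile gsize type_id = 0;

    if (g_once_init_enter (&type_id))
      {
        static const GFlagsValue values[] = {
          { COGL_TEXTURE_NONE, "COGL_TEXTURE_NONE", "none" },
          { COGL_TEXTURE_NO_AUTO_MIPMAP, "COGL_TEXTURE_NO_AUTO_MIPMAP",
            "no-auto-mipmap" },
          { 0, NULL, NULL }
        };
        GType id = g_flags_register_static ("CoglTextureFlags", values);

        /* only the first thread to get here publishes the GType */
        g_once_init_leave (&type_id, id);
      }

    return type_id;
  }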
The setup_viewport() function should only be used by Clutter and
not by application code.
It can be emulated by changing the Stage size and perspective and
requeueing a redraw after calling clutter_stage_ensure_viewport().
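A sketch of the suggested emulation (the stage, width and height come
from the caller, and the perspective values are arbitrary):

  ClutterPerspective perspective = { 60.0f, 1.0f, 0.1f, 100.0f };

  clutter_actor_set_size (CLUTTER_ACTOR (stage), width, height);
  clutter_stage_set_perspective (CLUTTER_STAGE (stage), &perspective);
  clutter_stage_ensure_viewport (CLUTTER_STAGE (stage));
  clutter_actor_queue_redraw (CLUTTER_ACTOR (stage));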
The backface culling enabling function was split and renamed, just
like the depth testing one, so we need to add the macro to the
cogl-deprecated.h header.
Previously indices were tightly bound to a particular Cogl vertex buffer
but we would like to be able to share indices so now we have
cogl_vertex_buffer_indices_new () which returns a CoglHandle.
In particular we would like to have a shared set of indices for drawing
lists of quads that can be shared between the pango renderer and the
Cogl journal.
At the moment Cogl doesn't do much batching of quads so most of the time we
are flushing a single quad at a time. This patch simplifies how we submit
those quads to OpenGL by using glDrawArrays with GL_TRIANGLE_FAN mode
instead of sending indexed vertices using GL_TRIANGLES mode.
Note: I hope to follow up soon with changes that improve our batching and
also move the indices into a VBO so they don't need to be re-validated every
time we call glDrawElements.
To assist people porting code from 0.8, the cogl_texture_* functions that
have been replaced now have defines that give some hint as to how they
should be replaced.
cogl_enable_depth_test and cogl_enable_backface_culling have been renamed
and now have corresponding getters, the new functions are:
cogl_set_depth_test_enabled
cogl_get_depth_test_enabled
cogl_set_backface_culling_enabled
cogl_get_backface_culling_enabled
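A porting sketch for the rename:

  /* before: cogl_enable_depth_test (TRUE);
   *         cogl_enable_backface_culling (TRUE); */
  cogl_set_depth_test_enabled (TRUE);
  cogl_set_backface_culling_enabled (TRUE);

  /* the new getters make the state queryable */
  g_assert (cogl_get_depth_test_enabled ());
  g_assert (cogl_get_backface_culling_enabled ());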
This adds cogl_matrix api for multiplying matrices either by a perspective
or ortho projective transform. The internal matrix stack and current-matrix
APIs also have corresponding support added.
New public API:
cogl_matrix_perspective
cogl_matrix_ortho
cogl_ortho
cogl_set_modelview_matrix
cogl_set_projection_matrix
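A sketch using the new calls (the projection values are arbitrary):

  CoglMatrix projection;

  cogl_matrix_init_identity (&projection);
  cogl_matrix_perspective (&projection,
                           60.0f,           /* field of view */
                           640.0f / 480.0f, /* aspect ratio */
                           0.1f, 100.0f);   /* near and far planes */
  cogl_set_projection_matrix (&projection);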
cogl_create_context is dealt with internally when _cogl_get_default_context
is called, and cogl_destroy_context is currently never called.
It might be nicer later to get an object back when creating a context so
Cogl can support multiple contexts, so these functions are being removed
from the API until we get a chance to address context management properly.
For now cogl_destroy_context is still exported as _cogl_destroy_context so
Clutter could at least install a library deinit handler to call it.
Originally cogl_vertex_buffer_add_indices let the user pass in their own unique
ID for the indices; now the ID is generated internally and returned to the
caller.
It's now possible to add arrays of indices to a Cogl vertex buffer and
they will be put into an OpenGL vertex buffer object. Since it's quite
common for index arrays to be static it saves the OpenGL driver from
having to validate them repeatedly.
This changes the cogl_vertex_buffer_draw_elements API: It's no longer
possible to provide a pointer to an index array at draw time. So
cogl_vertex_buffer_draw_elements now takes an indices identifier that
should correspond to an identifier returned when calling
cogl_vertex_buffer_add_indices ()
This is being removed before we release Clutter 1.0 since the implementation
wasn't complete, and so we assume no one is using this yet. Until we have
someone with a good use case, we can't pretend to support breaking out into
raw OpenGL.
There were a number of functions intended to support creating of new
primitives using materials, but at this point they aren't used outside of
Cogl so until someone has a use case and we can get feedback on this
API, it's being removed before we release Clutter 1.0.
This removes the following API:
cogl_material_set_blend_factors
cogl_material_set_layer_combine_function
cogl_material_set_layer_combine_arg_src
cogl_material_set_layer_combine_arg_op
These were rather awkward to use, so since it's expected very few people are
using them at this point and it should be straight forward to switch over
to blend strings, the API is being removed before we release Clutter 1.0.
Setting up layer combine functions and blend modes is very awkward to do
programmatically. This adds a parser for string-based descriptions which are
more concise and readable.
E.g. a material layer combine function could now be given as:
"RGBA = ADD (TEXTURE[A], PREVIOUS[RGB])"
or
"RGB = REPLACE (PREVIOUS)"
"A = MODULATE (PREVIOUS, TEXTURE)"
The simple syntax and grammar are only designed to expose standard fixed
function hardware, more advanced combining must be done with shaders.
This includes standalone documentation of blend strings covering the aspects
that are common to blending and texture combining, and adds documentation
with examples specific to the new cogl_material_set_blend() and
cogl_material_layer_set_combine() functions.
Note: The hope is to remove the now redundant bits of the material API
before 1.0
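As an illustrative sketch of setting the documented example string on a
material layer (using the cogl_material_set_layer_combine() spelling;
the material handle, layer index and NULL error are assumptions):

  cogl_material_set_layer_combine (material, 0,
                                   "RGBA = ADD (TEXTURE[A], PREVIOUS[RGB])",
                                   NULL);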
After long deliberation, the Animation class handling of the
:mode, :duration and :loop properties, as well as the conditions
for creating the Alpha and Timeline instances, came out as far too
complicated for their own good.
This is a rework of the API/parameters matrix and behaviour:
- :mode accessors will create an Alpha, if needed
- :duration and :loop accessors will create an Alpha and a Timeline
if needed
- :alpha will set or unset the Alpha
- :timeline will set or unset the Timeline
Plus, more documentation on the Animation class itself.
Many thanks to Jonas Bonn <jonas@southpole.se> for the feedback
and the ideas.
The Animatable interface implementation will always have the computed
value applied, whilst the non-Animatable objects go through the
interval validation first to avoid triggering assertions and
warnings.
The Animatable::animate_property() should also be able to validate the
property it's supposed to interpolate, and eventually discard it. This
requires adding a return value to the virtual function (and its wrapper
function).
The Animation code will then apply the computed value only if the
animate_property() returns TRUE -- unifying the code path with the
non-Animatable objects.
The Animation class should proxy the :mode, :duration and :loop
properties whenever possible, to avoid them going out of sync when
changed using the Alpha and Timeline instances directly.
Currently, if Timeline:duration is changed, querying Animation:duration
will yield the old value, but the animation itself (being driven by
the Timeline) will use the Timeline's :duration new value. This holds
for the :loop and :mode properties as well.
Instead, the getters for the Animation's :duration, :loop and
:mode properties should ask the relevant object -- if any. The
loop, duration and mode values inside AnimationPrivate should only
be used if no Timeline or no Alpha instances are available, or
when creating new instances.
The Animation should not directly manipulate a Timeline instance,
but it should defer to the Alpha all handling of the timeline.
This means that:
- set_duration() and set_loop() will either create a Timeline or
will set the :duration and :loop properties on the Timeline; if
the Timeline must be created, and no Alpha instance is available,
then a new Alpha instance will be created as well and the newly
created Timeline will be assigned to the Alpha
- if set_mode() on an Animation instance without an Alpha, the
Alpha will be created; a Timeline will also be created
- set_alpha() will replace the Alpha; if the new Alpha does not
have a Timeline associated then a Timeline will be created using
the current :duration and :loop properties of Animation; otherwise,
if the replaced Alpha had a timeline, the timeline will be
transferred to the new one
The CoglTexture constructors expose the "max-waste" argument for
controlling the maximum amount of wasted areas for slicing or,
if set to -1, disables slicing.
Slicing is really relevant only for large images that are never
repeated, so it's a useful feature only in controlled use cases.
Specifying the amount of wasted area is, on the other hand, just
a way to mess up this feature; 99% of the time, you either pull this
number out of thin air, hoping it's right, or you try to do the
right thing and you choose the wrong number anyway.
Instead, we can use the CoglTextureFlags to control whether the
texture should not be sliced (useful for Clutter-GST and for the
texture-from-pixmap actors) and provide a reasonable value for
enabling the slicing ourselves. At some point, we might even
provide a way to change the default at compile time or at run time,
for particular platforms.
Since max_waste is gone, the :tile-waste property of ClutterTexture
becomes read-only, and it proxies the cogl_texture_get_max_waste()
function.
Inside Clutter, the only cases where the max_waste argument was
not set to -1 are in the Pango glyph cache (which is a POT texture
anyway) and inside the test cases where we want to force slicing;
for the latter we can create larger textures that will be bigger than
the threshold we set.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Signed-off-by: Robert Bragg <robert@linux.intel.com>
Signed-off-by: Neil Roberts <neil@linux.intel.com>
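A sketch of the resulting constructor usage: slicing is controlled by a
flag rather than a max-waste number (the file name and format are
illustrative):

  CoglHandle tex = cogl_texture_new_from_file ("image.png",
                                               COGL_TEXTURE_NO_SLICING,
                                               COGL_PIXEL_FORMAT_ANY,
                                               NULL);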
Sometimes it is necessary for third party code to have a
function called during the redraw process, so that you can
update the scenegraph before it is painted.
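A sketch of such a hook, assuming the clutter_threads_add_repaint_func()
entry point added by this work:

  static gboolean
  update_scene (gpointer user_data)
  {
    /* tweak the scenegraph just before the stages are painted */
    return TRUE; /* keep the function installed */
  }

  /* e.g. at initialization time */
  guint repaint_id = clutter_threads_add_repaint_func (update_scene,
                                                       NULL, NULL);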
If we are short-circuiting the paint when the opacity is zero we still
need to clear the queued_redraw flag otherwise it won't be possible to
queue another redraw of the actor until something else has caused a
paint first.
* master:
[cogl-vertex-buffer] Ensure the clip state before rendering
[test-text-perf] Small fix-ups
Add a test for text performance
[build] Ensure that cogl-debug is disabled by default
[build] The cogl GE macro wasn't passing an int according to the format string
Use the right internal format for GL_ARB_texture_rectangle
[actor_paint] Ensure painting is a NOP for actors with opacity = 0
Make backface culling work with vertex buffers
Before any rendering is done by Cogl it needs to ensure the clip stack
is set up correctly by calling cogl_clip_ensure. This was not being
done for the Cogl vertex buffer so it would still use the clip from
the previous render.
Now that everything is float, the marshalling function of the
size-change signal should reflect that fact.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
GLES doesn't support GL_QUADS. This patch makes it use GL_TRIANGLES
instead in that case. Unfortunately this means submitting two extra
vertices per quad. It could be better to use indexed elements once
CoglVertexBuffers gains support for that.
When ClutterGLXTexturePixmap uses GL_ARB_texture_rectangle,
it needs to pass the right internal format (GL_RGB or GL_RGBA)
when it initializes the texture with glTexImage2D() or later
handling won't recognize the alpha channel.
http://bugzilla.openedhand.com/show_bug.cgi?id=1586
Signed-off-by: Robert Bragg <robert@linux.intel.com>
Since it is convenient to use geometry with an opacity of 0 for input only
purposes it's a worthwhile optimization to avoid submitting anything
for such actors while painting.
Backface culling is enabled as part of cogl_enable so the different
rendering functions in Cogl need to explicitly opt-in to have backface
culling enabled. Cogl vertex buffers should allow backface culling so
they should check whether it is enabled and then set the appropriate
cogl_enable flag.
In order to cope with the situation where an application renders with
a PangoLayout, makes some changes and then renders again with the same
layout, CoglPangoRenderer needs to detect that the changes have
occurred so that it can recreate the display list. This is achieved by
keeping a reference to the first line of the layout. When the layout
is changed Pango will clear the layout pointer in the first line and
create a new line. So if the layout pointer in the line becomes NULL
then we know the layout has changed. This trick was suggested by
Behdad Esfahbod in this email:
http://mail.gnome.org/archives/gtk-i18n-list/2009-May/msg00019.html
When a position is given to cogl_pango_render_layout_subpixel it
translates the GL matrix by the coordinates. However it was not
dividing by PANGO_SCALE so the coordinates were completely wrong.
Most of the operations involving the texture's allocated area require
floats -- either for computations or for setting the geometry into
COGL. So it doesn't make any sense to use get_allocation_coords() and
cast everything to floats.
Currently, COGL depends on defining debug symbols by manually
modifying the source code. When it's done, it will forcefully
print stuff to the console.
Since COGL also has a pretty, runtime-selectable debugging API
we might as well switch everything to it.
In order for this to happen, configure needs a new:
--enable-cogl-debug
command line switch; this will enable COGL debugging, the
CoglHandle debugging and will also turn on the error checking
for each GL operation.
The default setting for the COGL debug defines is off, since
it slows down the GL operations; enabling it for a particular
debug build is trivial, though.
COGL has a debug message system like Clutter's own. In parallel,
it also uses a couple of #defines. Spread around there are also
calls to printf() instead to the more correct g_log* wrappers.
This commit tries to unify and clean up the macros and the
debug message handling inside COGL to be more consistent.
ClutterTexture has many properties that can only be accessed using
the GObject API. This is fairly inefficient and makes binding the
class overly complicated.
The Texture class should have accessor methods for all its properties,
properly documented.
The code for the conversion of the GL error enumeration code
into a string is not following the code style and conventions
we follow in Clutter and COGL.
The GE() macro is also using fprintf(stderr) directly instead
of using g_warning() -- which is redirectable to an alternative
logging system using the g_log* API.
We use math routines inside Cogl, so it's correct to have it in
the LIBADD line. In normal usage something else was pulling in
-lm, but the introspection is relying on linking against the
convenience library.
Based on a patch by: Colin Walters <walters@verbum.org>
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
The timeline created when calling set_timeline(NULL) is referenced
even though we implicitly own it. When the Animation is destroyed,
the timeline is then leaked.
Thanks to: Richard Heatley <richard.heatley@starleaf.com>
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1548
Add a method for deleting the current selection inside a Text actor.
This is useful for subclasses.
See bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1521
Based on a patch by: Raymond Liu <raymond.liu@intel.com>
ClutterAnimation currently inherits the initial floating reference
semantics from GInitiallyUnowned. An Animation is, though, meant to
be used as a top-level object, like a Timeline or a Behaviour, and
not "owned" by another object. For this reason, the initial floating
reference does not make any sense.
Document that repeated calls to clutter_cairo_texture_create()
continue drawing on the same cairo_surface_t. Add
clutter_cairo_texture_clear() for when you don't want that behavior.
http://bugzilla.openedhand.com/show_bug.cgi?id=1599
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
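A sketch of both behaviours (the drawing code is omitted):

  ClutterCairoTexture *canvas = CLUTTER_CAIRO_TEXTURE (texture);
  cairo_t *cr;

  /* start from a blank surface instead of the previous contents */
  clutter_cairo_texture_clear (canvas);

  cr = clutter_cairo_texture_create (canvas);
  /* ... cairo drawing ... */
  cairo_destroy (cr);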
The cursor x position is already translated, so we do not need to take the
actor's allocation into account when calculating scrolling.
Additionally, we need to update the text_x value before running
clutter_text_ensure_cursor_position.
Add any scrolling offset to the x value when in single line mode.
Now that the offset is taken into account in the position_to_coords
function, we do not need to adjust the cursor x manually in
clutter_text_paint.
If the cursor is already at the end of the Text contents then we
need to maintain its position when deleting the previous character
using the relative key binding.
The required "fake" libclutter-cogl.la upon with the main clutter
shared object depends is only built with introspection enabled
instead of being built unconditionally.
Passing:
--library=clutter-@CLUTTER_FLAVOUR@-@CLUTTER_API_VERSION@
to g-ir-scanner, when building Cogl was causing g-ir-scanner to
link the introspection program against the installed clutter library,
if it existed or fail otherwise. Instead copy the handling from
the json/ directory where we link against the convenience library
to scan, and do the generation of the typelib later in the
main clutter/directory.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1594
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
If text is set, ClutterText should never return less than the layout
height for minimum and preferred heights.
This holds unless ellipsize and wrap are enabled, in which case the
minimum height should be the height of the first line -- which is
the height needed to show, at the very least, the ellipsization.
Based on a patch by: Thomas Wood <thomas@openedhand.com>
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1598
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
This is another step in abstracting the backend operations
that are currently spread all across the board back into the
backend implementations where they belong.
The GL context creation, for instance, is delegated to the stage
realization, which makes it a critical path for every operation
that is GL-context bound. This usually does not make any difference
since we realize the default stage, but at some point we might
start looking into avoiding the default stage realization in order
to make the Clutter startup faster.
It also makes the code maintainable because every part is self
contained and can be reworked with the minimum amount of pain.
The master clock is using the redraw priority to create the source
that will be used to spin the paint sequence if something is being
animated using a timeline.
Unfortunately, the priority is too high and this causes starvation
when embedding into other toolkits -- like gtk+.
Thanks to Havoc Pennington for catching this.
The XVisualInfo for GL is created when a stage is being realized.
When embedding Clutter inside another toolkit we might not want to
realize a stage to extract the XVisualInfo, then set the stage
window using a foreign X Window -- which will cause a re-realization.
Instead, we should abstract as much as possible into the X11 backend.
Unfortunately, the XVisualInfo for GL is requested using GLX API; for
this reason we have to create a ClutterBackendX11 method that we
override inside the ClutterBackendGLX implementation.
This also allows us to move a little bit of complexity out of
the stage realization, which is currently a very delicate and hard
to debug section.
I was seeing clutter_text_get_selection trying to malloc up to 4Gb due to
unexpected negative arithmetic for the start/end offsets which resulted
in a crash.
This just tests for positions of -1 before deciding if the start/end
positions need to be swapped. The conversion from position to byte offset
already works with -1.
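A minimal sketch of the guard, with hypothetical variable names:
  /* only swap when both offsets are real positions; -1 is handled
   * by the position-to-byte-offset conversion as before */
  if (start_pos != -1 && end_pos != -1 && start_pos > end_pos)
    {
      gint tmp = start_pos;

      start_pos = end_pos;
      end_pos = tmp;
    }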
cogl_clip_push_window_rect is implemented using GPU scissoring which allows
the GPU to cull anything that falls outside a given rectangle. Since in the
case of picking we only ever care about a single pixel we can get the GPU to
ignore all geometry that doesn't intersect that pixel and only rasterize for
one pixel.
Previously clipping could only be specified in object coordinates, now
rectangles can also be pushed in window coordinates.
Internally rectangles pushed this way are intersected and then clipped using
scissoring. We also transparently try to convert rectangles pushed in
object coordinates into window coordinates, as we anticipate the scissoring
path will be faster than the clip planes and undoubtedly faster
than using the stencil buffer.
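For example, the picking path can clip to the single pixel being picked,
along the lines of this sketch (the coordinate names are assumptions):
  cogl_clip_push_window_rect (pick_x, pick_y, 1, 1);
  /* ... render the scene in pick mode; everything outside the
   * one pixel rectangle is culled by the GPU scissor ... */
  cogl_clip_pop ();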
The stencil buffer is always cleared the first time a clip is used
that needs it and the stencil test is disabled otherwise so there is
no need to clear before a paint.
COGLenum, COGLint and COGLuint, which were simply typedefs for GL{enum,int,uint},
have been removed from the API and replaced with specialised enum typedefs, int
and unsigned int. They were causing problems for generating bindings and were
also considered poor style.
The cogl texture filter defines CGL_NEAREST and CGL_LINEAR etc are now replaced
by a namespaced typedef 'CoglTextureFilter' so they should be replaced with
COGL_TEXTURE_FILTER_NEAREST and COGL_TEXTURE_FILTER_LINEAR etc.
The shader type defines CGL_VERTEX_SHADER and CGL_FRAGMENT_SHADER are handled by
a CoglShaderType typedef and should be replaced with COGL_SHADER_TYPE_VERTEX and
COGL_SHADER_TYPE_FRAGMENT.
cogl_shader_get_parameteriv has been replaced by cogl_shader_get_type and
cogl_shader_is_compiled. More getters can be added later if desired.
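A porting sketch for the renames (illustrative, not exhaustive):
  /* before */
  cogl_texture_set_filters (tex, CGL_NEAREST, CGL_LINEAR);
  shader = cogl_create_shader (CGL_FRAGMENT_SHADER);

  /* after */
  cogl_texture_set_filters (tex, COGL_TEXTURE_FILTER_NEAREST,
                            COGL_TEXTURE_FILTER_LINEAR);
  shader = cogl_create_shader (COGL_SHADER_TYPE_FRAGMENT);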
Calling glReadPixels is bad enough in forcing us to synchronize the CPU with
the GPU, but glFinish has even stronger synchronization semantics than
glReadPixels, which may negate some driver optimizations possible in
glReadPixels.
Commit 43fa38fcf5 broke out-of-tree builds by removing some of the
builddir directories from the include path. builddir/clutter/cogl and
builddir/clutter are needed because cogl.h and cogl-defines-gl.h are
automatically generated by the configure script. The main clutter
headers are in the srcdir so this needs to be in the path too.
When the stage state changes between activated and deactivated, send out
the key-focus-in/key-focus-out signal for the current key focused
actor on the stage.
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1503
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Setting the stage window using the set_stage_foreign() method will
lead to a re-realization. We need to make sure that the Drawable
currently associated to the GL context is set to None, to avoid a
BadDrawable error or, if we're unlucky, a segfault in the X server.
This commit reverts part of commit 5bcde25c - specifically the
part that forced a realization of the stage if we are ensuring
the GL context with it. This makes Clutter behave like it did
prior to commit 5bcde25c: if we are asked to ensure the GL context
with an unrealized stage we simply pass NULL to the backend
implementation.
When showing a Stage for the first time we end up realizing the stage
implementation before realizing the wrapper. This leads to segmentation
faults or errors coming from the backend because we're fumbling the
state and realization sequence.
Since we are destroying any previously set XVisualInfo we keep, we know
for sure that stage->xvisinfo is going to be None; hence, there is no
reason to check this condition.
The verify_map_state() internal method is conditionally compiled
if we have CLUTTER_ENABLE_DEBUG set; for this reason, all calls to
that method should be made conditional.
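The pattern looks roughly like this sketch, assuming the helper is exposed
as clutter_actor_verify_map_state():
#ifdef CLUTTER_ENABLE_DEBUG
  clutter_actor_verify_map_state (self);
#endif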
When building Clutter with introspection enabled everything stops
at Cogl GIR generation because it depends on the installed library
to work. Since we still require some changes in the API to be able
to build the GIR and the typelib for Cogl we should disable the
generation of the GIR as well.
The fix for bug 1138 broke multi-stage support on GLX, causing
X11 to segfault with the following stack trace:
Backtrace:
0: /usr/X11R6/bin/X(xf86SigHandler+0x7e) [0x80c91fe]
1: [0xb7eea400]
2: /usr/lib/xorg/modules/extensions//libglx.so [0xb7ae880c]
3: /usr/lib/xorg/modules/extensions//libglx.so [0xb7aec0d6]
4: /usr/X11R6/bin/X [0x8154c24]
5: /usr/X11R6/bin/X(Dispatch+0x314) [0x808de54]
6: /usr/X11R6/bin/X(main+0x4b5) [0x8074795]
7: /lib/i686/cmov/libc.so.6(__libc_start_main+0xe5) [0xb7c75775]
8: /usr/X11R6/bin/X(FontFileCompleteXLFD+0x21d) [0x8073a81]
which I can only track down to clutter_backend_glx_ensure_current()
being passed a NULL stage -- something that happens when a stage
is not correctly realized. That should lead to a glXMakeCurrent(None)
and not to a segmentation fault, though.
When destroying a top-level actor we can actually relax the verification
of the map state, since it might be fully asynchronous and we might not
re-enter inside the mainloop in time to receive the unmap notification.
If the filter means that there should be no rows left in the model,
clutter_model_get_iter_at_row (model, 0) should return NULL.
However, the current implementation misbehaves and returns a bad iterator.
This change resolves the issue by tracking if we actually found any
non-filtered rows in our pass through the sequence.
OH Bugzilla: 1591
The stage is chaining up to the ClutterGroup::paint instead of
the ClutterGroup::pick method. This works anyway because we
detect the stage by default, but it's not a reliable solution
in case we decide to change the picking further on.
The master clock is currently advanced using a frame source driven
by the default frame rate. This breaks the sync to vblank because
the vblanking rate could be different than 60 Hz -- or it might be
completely disabled (e.g. with CLUTTER_VBLANK=none).
We should be using the main loop to check if we have timelines
playing, and if so queue a redraw on the stages we own.
We should also prepare the subsequent frame at the end of the redraw
process, so if there are new redraws we will have the scene already
in place.
This makes Clutter redraw at the maximum frame rate, which is
limited by the vblanking frequency.
Currently, picking in ClutterGroup pollutes the CLUTTER_DEBUG=paint
logs since it just calls the paint function. Reimplementing the pick
doesn't make us lose anything -- it might even be slightly faster
since we don't have to do a (typed) cast and a class dereference.
The timeline created when calling set_timeline(NULL) is referenced
even though we implicitly own it. When the Animation is destroyed,
the timeline is then leaked.
Thanks to: Richard Heatley <richard.heatley@starleaf.com>
Fixes bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1548
Currently, the conversion from em to units is done by using the
default font name inside the backend. For actors using their own
font/text layout we need a way to specify the font name along
with the quantity we wish to transform.
Commit 515350a7 renamed ::focus-in and ::focus-out to ::key-focus-in
and ::key-focus-out respectively. One signal emission for ::focus-out
escaped the renaming in ClutterStage.
Currently, the default screen guard value is 0, which is a valid
screen number on X11, and it might not be the default.
Patch suggested by: Owen W. Taylor <otaylor@redhat.com>
Currently, the introspection data for Cogl is built right into
Clutter's own typelib. This makes functions like:
cogl_path_round_rectangle()
Appear as:
Clutter.cogl_path_round_rectangle()
It should be possible, instead, to have a Cogl namespace and:
Cogl.path_round_rectangle()
This means building introspection data for Cogl alone. Unfortunately,
there are three types defined in Cogl that confuse the introspection
scanner, and make it impossible to build a typelib:
COGLint
COGLuint
COGLenum
These three types should go away before 1.0, substituted by int,
unsigned int and proper enumeration types. For this reason, we can
just set up the GIR build and wait until the last moment to create
the typelib. Once that has been done, we will be able to safely
remove the Cogl API from the Clutter GIR and typelib and let
people import Cogl if they want to use the Cogl API via introspection.
For consistency, and since those signals are key-related, the
::focus-in signal is now ::key-focus-in and the ::focus-out
signal is now ::key-focus-out.
Add a method for deleting the current selection inside a Text actor.
This is useful for subclasses.
See bug:
http://bugzilla.openedhand.com/show_bug.cgi?id=1521
Based on a patch by: Raymond Liu <raymond.liu@intel.com>
ClutterAnimation currently inherits the initial floating reference
semantics from GInitiallyUnowned. An Animation is, though, meant to
be used as a top-level object, like a Timeline or a Behaviour, and
not "owned" by another object. For this reason, the initial floating
reference does not make any sense.
With the recent change to internal floating point values, ClutterUnit
has become a redundant type, defined to be a float. All integer entry
points are being internally converted to floating point values to be
passed to the GL pipeline with the least amount of conversion.
ClutterUnit is thus exposed as just a "pixel with fractional bits",
and not -- as users might think -- as a generic, resolution and device
independent unit. Not that this was ever the case, but a significant
number of people were convinced it did provide this "feature", and
were flummoxed by the mere existence of the type.
So, having ClutterUnit exposed in the public API doubles the entry
points and has the following disadvantages:
- we have to maintain twice the amount of entry points in ClutterActor
- we still do an integer-to-float implicit conversion
- we introduce a weird impedance between pixels and "pixels with
fractional bits"
- language bindings will have to choose what to bind, and resort
to manually overriding the API
+ *except* for language bindings based on GObject-Introspection, as
they cannot do manual overrides, thus will replicate the entire
set of entry points
For these reasons, we should coalesce every Actor entry point for
pixels and for ClutterUnit into a single entry point taking a float,
like:
  void   clutter_actor_set_x (ClutterActor *self,
                              gfloat        x);
  void   clutter_actor_get_size (ClutterActor *self,
                                 gfloat       *width,
                                 gfloat       *height);
  gfloat clutter_actor_get_height (ClutterActor *self);
etc.
The issues I have identified are:
- we'll have two cases of compiler warnings:
- printf() format of the return values from %d to %f
- clutter_actor_get_size() taking floats instead of unsigned ints
- we'll have a problem with varargs when passing an integer instead
of a floating point value, except on 64-bit platforms where the
size of a float is the same as the size of an int (see the sketch
after this list)
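A sketch of the varargs pitfall from the last point above;
clutter_actor_animate() is just one example of an entry point that
collects a float-typed value through varargs:
  /* wrong: the literal 100 is collected as an int */
  clutter_actor_animate (actor, CLUTTER_LINEAR, 500, "x", 100, NULL);

  /* right: pass a floating point literal */
  clutter_actor_animate (actor, CLUTTER_LINEAR, 500, "x", 100.0, NULL);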
To be clear: the *intent* of the API should not change -- we still use
pixels everywhere -- but:
- we remove ambiguity in the API with regard to pixels and units
- we remove entry points we get to maintain for the whole 1.0
version of the API
- we make things simpler to bind for both manual language bindings
and automatic (gobject-introspection based) ones
- we have the simplest API possible while still exposing the
capabilities of the underlying GL implementation
When calling clutter_model_iter_next () / clutter_model_iter_prev () we need
to update the row for the iterator. In order to improve the performance of
iterating, this change adds a private row mutator and switches ClutterListModel
to use it.
In order to carry out various comparisons for e.g. identifying the first
iterator in the model we need a ClutterModelIter. This change switches from
creating a new iterator each time to reusing an existing iterator.
The flags field of ClutterActor should have accessor methods for
language bindings.
Also, the set_flags() and unset_flags() methods should actively
emit notifications for the changed properties.
This is simply a wrapper around cogl_color_set_from_4f and
cogl_material_set_color. We already had a prototype for this, it was
an oversight that it wasn't already implemented.
Several functions that I believe no one is currently using, and that were
only implemented in the GL backend (cogl_offscreen_blit_region and
cogl_offscreen_blit), have simply been removed so we have a chance to
think about the design later with a real use case.
There was one nonsense function (cogl_offscreen_new_multisample) that
sounded exciting but in all cases just returned COGL_INVALID_HANDLE
(though at least for GL it checked for multisampling support first!?);
it has also been removed.
The MASK draw buffer type has been removed. If we want to expose color
masking later then I think it at least would be nicer to have the mask be a
property that can be set on any draw buffer.
The cogl_draw_buffer and cogl_{push,pop}_draw_buffer function prototypes
have been moved up into cogl.h since they are for managing global Cogl state
and not for modifying or creating the actual offscreen buffers.
This also documents the API, so for example deciphering the semantics of
cogl_offscreen_new_to_texture() should be a bit easier now.
These are necessary if nesting redirections to an fbo,
otherwise there's no way to know how to restore
previous state.
glPushAttrib(GL_COLOR_BUFFER_BIT) would save draw buffer
state, but also saves a lot of other stuff, and
cogl_draw_buffer() relies on knowing about all draw
buffer state changes. So we have to implement a
draw buffer stack ourselves.
Signed-off-by: Robert Bragg <robert@linux.intel.com>
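A usage sketch of the draw buffer stack (the texture handle name is an
assumption):
  CoglHandle offscreen = cogl_offscreen_new_to_texture (tex_handle);

  cogl_push_draw_buffer ();
  cogl_set_draw_buffer (COGL_OFFSCREEN_BUFFER, offscreen);
  /* ... draw into the offscreen buffer ... */
  cogl_pop_draw_buffer ();  /* restores whatever buffer was current */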
It is valid in some situations to have a material layer with an invalid texture
handle (e.g. if you set up a texture combine mode before setting the texture)
and so _cogl_material_layer_free needs to check for a valid handle before
attempting to unref it.
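A minimal sketch of the check (the field name is an assumption):
  if (layer->texture != COGL_INVALID_HANDLE)
    cogl_handle_unref (layer->texture);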
Adds missing notices, and ensures all the notices are consistent. The Cogl
blurb also now reads:
* Cogl
*
* An object oriented GL/GLES Abstraction/Utility Layer
Redundant clearing of depth and stencil buffers every render can be very
expensive, so cogl now gives control over which auxiliary buffers are
cleared.
Note: For now clutter continues to clear the color, depth and stencil buffer
each paint.
The clutter frame source tries to average out the frame deltas so that
if one frame takes 1 interval plus a little bit of extra time then the
next frame will be 1 interval minus that little bit of extra
time. Therefore the deltas can sometimes be less than the frame
interval. ClutterTimeline should accumulate these small differences
otherwise it will end up missing out frames so the total duration of
the timeline will be a lot longer.
For example this was causing test-actors to appear to run very slow.
The method of ClutterTimeline that advances the timeline by a
delta (in millisecond) is going to be useful for testing the
timeline's behaviour -- and unbreak the timeline test suite that
was broken by the MasterClock merge.
With the introduction of the map/unmap flags and the split of the
visible state from the mapped state we require that every part of
a scene graph branch is mapped in order to be painted. This breaks
the ability of a ClutterClone to paint a hidden source actor.
In order to fix this we need to introduce an override flag, similar
in spirit to the current modelview and paint opacity overrides that
Clone is already using.
The override flag, when set, will force a temporary map on a
Clone source (and its children).
ClutterContainer provides a foreach_with_internals() vfunc for
iterating over all of a container's children, whether they were added
using the Container API or are internal to the container itself.
We should be using the foreach_with_internals() function instead
of the plain foreach().
When replacing the fbo_handle with a new handle it first unrefs the
old handle. This was previously a call to cogl_offscreen_unref which
silently ignored attempts to unref COGL_INVALID_HANDLE. However the
new cogl_handle_unref does check for this so we should make sure the
handle is valid to avoid the warning.
Bug 1565 - test-fbo case failed on many platforms
clutter_texture_new_from_actor was broken because it created the FBO
texture but then never attached it to the material so it was never
used for rendering. The old behaviour in Clutter 0.8 was to assign the
texture directly to priv->tex. In 0.9 priv->tex is replaced with
priv->material which has a reference to the tex in layer 0. So putting
the FBO texture directly in layer 0 more closely matches the original
behaviour.
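A sketch of the fix along those lines (priv->material and the FBO texture
handle name are assumptions):
  cogl_material_set_layer (priv->material, 0, fbo_texture);
  /* the material takes its own reference on the texture */
  cogl_handle_unref (fbo_texture);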
When rendering a pango layout CoglPangoRenderer now records the
operations into a list called a CoglPangoDisplayList. The entries in
the list are either glyph renderings from a texture, rectangles or
trapezoids. Multiple consecutive glyph renderings from the same glyph
cache texture are combined into a single entry. Note the
CoglPangoDisplayList has nothing to do with a GL display list.
After the display list is built it is attached to the PangoLayout with
g_object_set_qdata so that next time the layout is rendered it can
bypass rebuilding it.
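A sketch of the caching pattern, with hypothetical quark and free-function
names:
  static GQuark display_list_quark = 0;

  if (G_UNLIKELY (display_list_quark == 0))
    display_list_quark =
      g_quark_from_static_string ("cogl-pango-display-list");

  g_object_set_qdata_full (G_OBJECT (layout), display_list_quark,
                           display_list,
                           (GDestroyNotify) _cogl_pango_display_list_free);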
The glyph rendering entries are drawn via a VBO. The VBO is attached
to the display list so it can be used multiple times. This makes the
common case of rendering a PangoLayout contained in a single texture
subsequent times usually boil down to a single call to glDrawArrays
with an already uploaded VBO.
The VBOs are accessed via the cogl_vertex_buffer API so if VBOs are
not available in GL it will resort to a fallback.
Note this will fall apart if the pango layout is altered after the
first render. I can't find a way to detect when the layout is
altered. However this won't affect ClutterText because it creates a
new PangoLayout whenever any properties are changed.
Currently ClutterModel::get_iter_at_row() ignores whether we have
a filter in place. This also extends to the get_n_rows() method.
The more consistent, more intuitive and surely more correct way to
handle a Model with a filter in place is to take into account the
presence of the filter itself -- that is:
- get_n_rows() should take into account the filter and return the
number of *filtered* rows
- get_iter_at_row() should also take the filter into account and
get the first non-filtered row
These two changes make the ClutterModel with a filter function
behave like a subset of the original Model without a filter in
place.
For instance, given a model with four rows:
- [row 0] <does not match filter>
- [row 1] <matches filter>
- [row 2] <matches filter>
- [row 3] <does not match filter>
The get_n_rows() method will return "2", since only two rows will
match the filter; the get_first_iter() method will ask for the
zero-eth row, which will return an iterator pointing to the contents
of row 1 (but the :row property of the iterator will be set to 0);
the get_last_iter() method will ask for the last row, which will
return an iterator pointing to the contents of row 2 (but the :row
property of the iterator will be set to 1).
These changes will hopefully make the Model API more consistent
in its usage whether there is a filter in place or not.
Currently, there is no way for implementations of the ClutterModel
abstract class to know whether there is a filter in place. Since
subclasses might implement some optimization in case there is no
filter present, we need a simple (and public) API to ask the model
itself.
Setting the wrap mode on the PangoLayout seems to have disappeared
during the text-actor-layout-height branch merge so this brings it
back. The test for this in test-text-cache no longer needs to be
disabled.
We also shouldn't set the width on the layout if there is no wrapping
or ellipsizing because otherwise it implicitly enables wrapping. This
only matters if the actor gets allocated smaller than its natural
size.
Bug 1484 - Redraw ClutterClone when the source changes, even for
!visible sources
Connect to ::queue-redraw on the clone source and queue a redraw.
This allows redrawing the Clone when the source changes, even in
case of a non visible source actor.
http://bugzilla.openedhand.com/show_bug.cgi?id=1484
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
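A minimal sketch of the approach (callback and field names are
illustrative):
  static void
  on_source_queue_redraw (ClutterActor *source,
                          ClutterActor *origin,
                          ClutterClone *clone)
  {
    clutter_actor_queue_redraw (CLUTTER_ACTOR (clone));
  }

  /* when the source is set on the clone: */
  priv->redraw_id =
    g_signal_connect (source, "queue-redraw",
                      G_CALLBACK (on_source_queue_redraw), clone);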
Currently, all timelines install a timeout inside the TimeoutPool
they share. Every time the main loop spins, all the timeouts are
updated. This, in turn, will usually lead to redraws being queued
on the stages.
This behaviour leads to the potential starvation of timelines and
to excessive redraws.
One lesson learned from the games developers is that the scenegraph
should be prepared in its entirety before the GL paint sequence is
initiated. This means making sure that every ::new-frame signal
handler is called before clutter_redraw() is invoked.
In order to do so a TimeoutPool is not enough: we need a master
clock. The clock will be responsible for advancing all the active
timelines created inside a scene, but only when the stage is
being redrawn.
The sequence is:
+ queue_redraw() is invoked on an actor and bubbles up
to the stage
+ if no redraw() has already been scheduled, install an
idle handler with a known priority
+ inside the idle handler:
- advance the master clock, which will in turn advance
every playing timeline by the amount of milliseconds
elapsed since the last redraw; this will make every
playing timeline emit the ::new-frame signal
- queue a relayout
- call the redraw() method of the backend
This way we trade multiple timeouts for a single frame source
that only runs if a timeline is playing and queues redraws on
the various stages.
Bug 1547 - when an actor is unmapped while owning the focus, it should
release it
When an actor is unmapped while owning the key focus, it should release it.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Bug 1138 - No trackable "mapped" state
* Add a VISIBLE flag tracking application programmer's
expected showing-state for the actor, allowing us to
always ensure we keep what the app wants while tracking
internal implementation state separately.
* Make MAPPED reflect whether the actor will be painted;
add notification on a ClutterActor::mapped property.
Keep MAPPED state updated as the actor is shown,
ancestors are shown, actor is reparented, etc.
* Require a stage and realized parents to realize; this means
at realization time the correct window system and GL resources
are known. But unparented actors can no longer be realized.
* Allow children to be unrealized even if parent is realized.
Otherwise in effect either all actors or no actors are realized,
i.e. it becomes a stage-global flag.
* Allow clutter_actor_realize() to "fail" if not inside a toplevel
* Rework clutter_actor_unrealize() so internally we have
a flavor that does not mess with visibility flag
* Add _clutter_actor_rerealize() to encapsulate a somewhat
tricky operation we were doing in a couple of places
* Do not realize/unrealize children in ClutterGroup,
ClutterActor already does it
* Do not realize impl by hand in clutter_stage_show(),
since showing impl already does that
* Do not unrealize in various dispose() methods, since
ClutterActor dispose implementation already does it
and chaining up is mandatory
* ClutterTexture uses COGL while unrealizable (before it's
added to a stage). Previously this breakage was affecting
ClutterActor because we had to allow realize outside
a stage. Move the breakage to ClutterTexture, by making
ClutterTexture just use COGL while not realized.
* Unrealize before we set parent to NULL in clutter_actor_unparent().
This means unrealize() implementations can get to the stage.
Because actors need the stage in order to detach from stage.
* Update clutter-actor-invariants.txt to reflect latest changes
* Remove explicit hide/unrealize from ClutterActor::dispose since
unparent already forces those
Instead just assert that unparent() occurred and did the right thing.
* Check whether parent implements unrealize before chaining up
Needed because ClutterGroup no longer has to implement unrealize.
* Perform unrealize in the default handler for the signal.
This allows non-containers that have children to work properly,
and allows containers to override how it's done.
* Add map/unmap virtual methods and set MAPPED flag on self and
children in there. This allows subclasses to hook map/unmap.
These are not signals, because notify::mapped is better for
anything it's legitimate for a non-subclass to do.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Bug 1228 - Unnecessary glColorMask on alpha drops performance
With DRI2, alpha is allowed in the window's framebuffer
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
Bug 1513 - Allow passing in ClutterPickMode to
clutter_stage_get_actor_at_pos()
At the moment, clutter_stage_get_actor_at_pos() uses CLUTTER_PICK_ALL
internally to find an actor. It would be useful to allow passing in
ClutterPickMode to clutter_stage_get_actor_at_pos(), so that the caller
can specify CLUTTER_PICK_REACTIVE as a criteria.
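With the new parameter, a caller interested only in reactive actors can do
something like this (the coordinate names are illustrative):
  ClutterActor *actor =
    clutter_stage_get_actor_at_pos (CLUTTER_STAGE (stage),
                                    CLUTTER_PICK_REACTIVE,
                                    event_x, event_y);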
Bug 1516 - Does not withdraw toplevels correctly
clutter_stage_x11_hide() needs to use XWithdrawWindow(), not
XUnmapWindow().
As it stands now, if the window is already unmapped (say the WM has
minimized it), then the WM will not know that Clutter has closed the
window and will keep the window managed, showing it in the task list
and so forth.
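A sketch of the change (variable names are illustrative):
  /* before: XUnmapWindow (xdisplay, xwindow); */
  XWithdrawWindow (xdisplay, xwindow, screen_number);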
Bug 1517 - clutter_container_foreach_with_internals()
This allows us to iterate over all children (for things like maintaining
map/realize invariants) or only children that apps added and care about.
Signed-off-by: Emmanuele Bassi <ebassi@linux.intel.com>
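A usage sketch, assuming a callback that keeps per-child invariants in
sync:
  static void
  sync_child_state (ClutterActor *child,
                    gpointer      data)
  {
    /* maintain map/realize invariants on every child,
     * internal or application-added alike */
  }

  clutter_container_foreach_with_internals (CLUTTER_CONTAINER (self),
                                            sync_child_state,
                                            NULL);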
Bug 1561 - Bad code in clutter-alpha.c
The implementation of the easing modes equations followed closely
the JavaScript and ActionScript counterparts. Obviously, JS and AS
are not C-compatible, so later versions of gcc (4.4.0 for instance)
would complain about uninitialized variables and such. The code is
also obfuscated and hard to debug/understand.
For these reasons, the implementation should be unobfuscated and
sanitized.
A lockup was reported by fargoilas on #clutter and removing the server grab in
clutter_x11_texture_pixmap_sync_window fixed the problem.
We were doing an X server grab to guarantee that if the XGetWindowAttributes
request reported that our redirected window was viewable then we would also
be able to get a valid pixmap name (the composite protocol says it's an
error to request a name for a window if it's not viewable). Without the grab
there would be a race condition.
Instead we now handle error conditions gracefully.
The documentation for the ::cursor-event says that the signal
is emitted only when the cursor position changes. Right now it
is being emitted every time we ensure the cursor position during
the Text paint sequence.
The clutter_text_ensure_cursor_position() function should check
whether the stored cursor position has changed since the last
paint, and emit the ::cursor-event signal only when there has
been a change.
We only want to eschew the pango_layout_set_width() on editable,
single-line Text actors because they can "scroll" the PangoLayout.
This fixes the ellipsization on entries.
In unifying the {gl,gles}/cogl.c code recently, moving most of the code into
common/cogl.c the gmodule.h include was also mistakenly moved.
Thanks to Felix Rabe for reporting this issue.
Note: I haven't tested this fix myself, as I'm not set up to be able to
build for OS X
Buffer objects aren't currently available for glx indirect contexts, so we
now have a fallback that simply allocates fake client side vbos to store the
attributes.
This makes the #if 0'd debug code that was in _cogl_journal_flush_quad_batch
- which we have repeatedly found useful for debugging various geometry
issues in Clutter apps - a runtime debug option.
The outline colors rotate in order from red to green to blue, which can also
help confirm the order in which your geometry is really drawn.
The outlines are not affected by the current material state, so if you e.g.
have a blending bug where geometry mysteriously disappears this can confirm
if the underlying rectangles are actually being emitted but blending is
causing them to be invisible.
Editable ClutterText will scroll when allocated less width than is
necessary to lay out the entire layout on a single line, so return 1 for
the minimum width in this case.
Approved by Emmanuele Bassi, fixes bug #1555.
Since we have to do (z_far - z_near) and use it in a division we
should check that the user is not passing a value that would
cause a division by zero.
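A minimal sketch of the check, assuming the function returns void:
  g_return_if_fail (z_far - z_near != 0.0);
  /* ... now 1.0 / (z_far - z_near) is safe ... */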
If we need to check that the layout sequence is correct in
terms of order of execution and with respect to caching, then
having a CLUTTER_DEBUG_LAYOUT debug flag would make things
easier.
Ellipsizing was effectively broken for two reasons. There was a typo
in the code to set the width, so it always ended up being some massive
value. And if no height should be set on the layout, it was being set to
G_MAXINT; setting a height greater than 0 enables wrapping, so
ellipsizing is not performed. It should be left at the default of -1
instead.
Bug 1476 - JSON Parser memory leak
Static analysis of the code showed that the in-tree copy of
the JsonParser object leaks objects and arrays on parse errors.
Thanks to Gordon Williams <gordon.williams@collabora.co.uk>
If the stage is unrealized (such as will be the case if the stage was
created with clutter_stage_new) then it would set the size of the
stage display but it was not setting the fullscreen_on_map flag so it
never got the _NET_WM_STATE_FULLSCREEN property. Now it always sets
the flag regardless of whether the window is created yet.
There were also problems with dual-headed displays: in that case
DisplayWidth/Height will return the size of the combined display, but
Metacity (and presumably other WMs) will sensibly try to fit the window
onto only one of the monitors. However, we were setting the size hints so
that the minimum size is that of the combined display. Metacity tries
to honour this by setting the minimum size, but then it no longer
positions the window at the top left of the screen.
The patch makes it avoid setting the minimum size when the stage is
fullscreen by checking the fullscreen_on_map flag. This also means we
can remove the static was_resizable flag which would presumably have
caused problems for multi-stage.
Adds a new property so that the selection color can be different from
the cursor color. If no selection color is specified it will use the
cursor color as before. If no cursor color is specified either it will
use the text color.
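The fallback order described above, as a sketch (field names assumed):
  if (priv->selection_color_set)
    color = &priv->selection_color;
  else if (priv->cursor_color_set)
    color = &priv->cursor_color;
  else
    color = &priv->text_color;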
The debug macros for tracking reference counting of CoglHandles had
some typos introduced in c3d9f0 which meant it failed to compile when
COGL_DEBUG is 1.
Since the Cogl material branch merge when changing the color of a part
using pango attributes (such as using <span color="red" /> markup)
then it wouldn't return to the default color for the rest of the
layout. pango_renderer_get_color returns NULL if there is no color
override in which case it needs to revert to the color specified as
the argument to cogl_pango_render_layout. The 'color' member of
CoglPangoRenderer has been reinstated to store that default color and
now cogl_pango_render_set_color_for_part is the only place that sets
the material color.
The clutter_actor_animate*() family of functions should only connect
to the Animation::completed signal once, during the construction of
the Animation object attached to the Actor. Otherwise, the completed
signal handler will be run multiple times, and will try to unref()
the Animation for each call -- leading to a segmentation fault.
The Animation class is missing a ::started signal matching the
::completed one. A ::started signal is useful for debugging,
initial state set up, and checks.
Bug 1535 - Complete animation always unrefs ClutterAnimation (even
after g_object_ref_sink)
Animations created through clutter_animation_new() should not
automagically unref themselves by default on ::complete. We
only want that behaviour for Animations created by the
clutter_actor_animate* family of functions, since those provide
the automagic memory management.
ClutterGroup still ships with API deprecated since 0.4. We did
promise to keep it around for a minor release cycle -- not for 3.
Since we plan on shipping 1.0 without the extra baggage of the
deprecated entry points, here's the chance to remove the accumulated
cruft.
All the removed methods and signals have a ClutterContainer
counterpart.
Since we're planning to release 1.0 without any of the deprecated
API baggage, we can simply remove the set_uniform_1f() method from
ClutterShader public API and add it to the deprecated header.
The cogl_is_* functions were showing up quite high on profiles due to
iterating through arrays of cogl handles.
This does away with all the handle arrays and implements a simple struct
inheritance scheme. All cogl objects now add a CoglHandleObject _parent;
member to their main structures. The base object currently includes two
members: a ref_count and a klass pointer. The klass in turn gives you a type
and a virtual function for freeing objects of that type.
Each handle type has a _cogl_##handle_type##_get_type () function
automatically defined which returns a GQuark of the handle type, so now
implementing the cogl_is_* funcs is just a case of comparing with
obj->klass->type.
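The rough shape of the scheme, where member and type names approximate the
description above rather than the actual source:
  typedef struct _CoglHandleObject
  {
    unsigned int     ref_count;
    CoglHandleClass *klass;   /* type quark + virtual free() */
  } CoglHandleObject;

  typedef struct _CoglTexture
  {
    CoglHandleObject _parent;
    /* ... texture specific members ... */
  } CoglTexture;

  gboolean
  cogl_is_texture (CoglHandle handle)
  {
    CoglHandleObject *obj = (CoglHandleObject *) handle;

    if (handle == COGL_INVALID_HANDLE)
      return FALSE;

    return obj->klass->type == _cogl_texture_get_type ();
  }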
Another outcome of the re-work is that cogl_handle_{ref,unref} are also much
more efficient, and no longer need extending for each handle type added to
cogl. The cogl_##handle_type##_{ref,unref} functions are now deprecated and
are no longer used internally to Clutter or Cogl. Potentially we can remove
them completely before 1.0.
A layer object may be instantiated when setting a combine mode, but before a
texture is associated (e.g. this is done by the pango renderer). If this is
the case we shouldn't call cogl_texture_get_format() with an invalid cogl handle.
This patch skips over layers without a texture handle when determining if any
textures have an alpha channel.