Intended to be used to pass state from screen cast clients down the
line. The first use case will be a boolean indicating whether a screen
cast is a plain recording or not, e.g. letting the Shell decide whether
to use a
plain recording or not, e.g. letting the Shell decide whether to use a
red dot as the icon, or the generic "sharing" symbol.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1377
The clutter_actor_get_transformed_position function returns the position
of the top-left point of the actor, with the actor's transformations
applied. That means that if the actor is rotated 180°, it will return
the "screen" position of the top-right point.
Using this to calculate whether the actor is on screen causes problems
when the actor is transformed.
This patch adds a new function, clutter_actor_get_transformed_extents,
which returns the bounding rect of the transformed actor.
This new function is used in update_stage_views so the actor gets
updated correctly; this way, rotated actors will be updated if they are
on the screen.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1386
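A minimal usage sketch of the new API described above, assuming the
signature clutter_actor_get_transformed_extents (actor, &rect) with a
graphene_rect_t out parameter; the overlap helper itself is illustrative:
```c
#include <clutter/clutter.h>

/* Illustrative helper: TRUE if the actor's transformed bounding box
 * overlaps the given stage-space rectangle, even when the actor is
 * rotated or otherwise transformed. */
static gboolean
actor_overlaps_area (ClutterActor          *actor,
                     const graphene_rect_t *area)
{
  graphene_rect_t extents;

  clutter_actor_get_transformed_extents (actor, &extents);

  return graphene_rect_intersection (area, &extents, NULL);
}
```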
GLib will now be linking against sysprof-capture-4.a. To support that,
sysprof had to remove the GLib dependency from sysprof-capture-4, which
had the side effect of breaking ABI.
This bumps the dependency and includes a fallback to compile just
libsysprof-capture-4.a using a subproject wrap.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1352
Commit 510cbef15a changed the logic in `handle_update()` for X11 window
actors to return early if the surface is not an X11 surface.
That works fine for plain Xorg, but on Xwayland, the surface is actually
a Wayland surface, therefore the function returns early before updating
the drop shadows of server-side decorations for X11 windows.
Change the test logic to restore drop shadows with Xwayland windows.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1384
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/1358
The memory selection source was only providing the "text/plain" or the
"text/plain;charset=utf-8" mimetype, but not "STRING" or "UTF8_STRING",
which some X11 clients, like Wine, are looking for. This was breaking
pasting from the clipboard in wine applications.
Fix this by adding those targets when they are missing and the selection
source provides the corresponding mimetypes.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1369
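A rough sketch of the idea above, not mutter's actual code; it assumes
the X11 target list is kept as a plain GList of strings owned by a
hypothetical caller:
```c
#include <glib.h>

/* If the selection source offers plain-text mimetypes, also advertise
 * the classic STRING / UTF8_STRING targets that clients such as Wine
 * ask for. */
static void
maybe_add_text_targets (GList **targets)
{
  gboolean has_plain =
    g_list_find_custom (*targets, "text/plain",
                        (GCompareFunc) g_strcmp0) != NULL;
  gboolean has_utf8 =
    g_list_find_custom (*targets, "text/plain;charset=utf-8",
                        (GCompareFunc) g_strcmp0) != NULL;

  if (has_plain &&
      !g_list_find_custom (*targets, "STRING", (GCompareFunc) g_strcmp0))
    *targets = g_list_prepend (*targets, g_strdup ("STRING"));

  if (has_utf8 &&
      !g_list_find_custom (*targets, "UTF8_STRING",
                           (GCompareFunc) g_strcmp0))
    *targets = g_list_prepend (*targets, g_strdup ("UTF8_STRING"));
}
```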
Wine destroys its old selection window immediately before creating a new
selection. This would trigger restoring the clipboard, which would
overwrite the new selection with the old one. The selection window
however can also be destroyed as part of the shutdown process of
applications, such as Chromium for example. In those cases we want the
clipboard to be restored after the selection window has been destroyed.
Solve this by not immediately restoring the clipboard but instead using
a timeout which can be canceled by any new selection owner, such as in
the Wine case.
Fixes https://gitlab.gnome.org/GNOME/mutter/-/issues/1338
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1369
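A sketch of the timeout approach, assuming hypothetical helper names
(restore_clipboard(), the 500 ms delay) that are not taken from mutter:
```c
#include <glib.h>

#define CLIPBOARD_RESTORE_DELAY_MS 500  /* illustrative value */

static void restore_clipboard (gpointer user_data);  /* hypothetical */

static guint restore_timeout_id = 0;

static gboolean
restore_clipboard_cb (gpointer user_data)
{
  restore_timeout_id = 0;
  restore_clipboard (user_data);  /* re-own with the saved contents */
  return G_SOURCE_REMOVE;
}

static void
on_owner_window_destroyed (gpointer user_data)
{
  /* Don't restore immediately; Wine destroys its old selection window
   * right before claiming the selection again. */
  restore_timeout_id = g_timeout_add (CLIPBOARD_RESTORE_DELAY_MS,
                                      restore_clipboard_cb, user_data);
}

static void
on_new_selection_owner (void)
{
  /* A new owner appeared in time, so the saved clipboard is stale. */
  g_clear_handle_id (&restore_timeout_id, g_source_remove);
}
```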
The new "id" properties for the MetaCrtc* and MetaOuput* objects are 64-bit
values, so take care to pass 64-bit values when calling g_object_new.
Fixes https://gitlab.gnome.org/GNOME/mutter/-/issues/1343.
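For context, the varargs pitfall being fixed, shown as a hedged sketch;
META_TYPE_CRTC and the "id" property come from mutter, the surrounding
code is made up:
```c
/* Illustrative only; a real MetaCrtc takes more construct properties. */
static MetaCrtc *
create_crtc (guint64 crtc_id)
{
  /* The "id" property is 64-bit, so the value passed through
   * g_object_new()'s varargs must be 64-bit too; an unadorned integer
   * literal or an int variable would be read incorrectly. */
  return g_object_new (META_TYPE_CRTC,
                       "id", (guint64) crtc_id,
                       NULL);
}
```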
When using its EGLStream-based presentation path with the proprietary NVIDIA
driver, mutter will use a different function to process page flips -
custom_egl_stream_page_flip. If that fails due to an EBUSY error, it will
attempt to retry the flip. However, when retrying, it unconditionally uses the
libdrm-based path. In practice, this causes a segfault when attempting to
access plane_assignments->fb_id, since plane_assignments will be NULL in the
EGLStream case. The issue can be reproduced reliably by VT-switching away from
GNOME and back again while an EGL application is running.
This patch has mutter also use the custom page flip function when retrying the
failed flip.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1375
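A conceptual sketch of the retry dispatch; the FlipRetry struct and its
field names are hypothetical, only drmModePageFlip() is a real libdrm
call:
```c
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Hypothetical bookkeeping kept for a flip that failed with EBUSY. */
typedef struct
{
  int       fd;
  uint32_t  crtc_id;
  uint32_t  fb_id;
  void    (*custom_page_flip) (void *user_data);  /* EGLStream path */
  void     *custom_page_flip_data;
} FlipRetry;

static void
retry_page_flip (FlipRetry *retry)
{
  if (retry->custom_page_flip)
    {
      /* EGLStream-based presentation: there are no plane assignments
       * and no fb_id here, so the libdrm path must not be used. */
      retry->custom_page_flip (retry->custom_page_flip_data);
    }
  else
    {
      drmModePageFlip (retry->fd, retry->crtc_id, retry->fb_id,
                       DRM_MODE_PAGE_FLIP_EVENT, retry);
    }
}
```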
Instead of blindly hoping that `$INCLUDE` contains the parent directory
of `gsettings-desktop-schemas`.
Because `gsettings-desktop-schemas.pc` says:
```
Cflags: -I/SOME/DIRECTORY/gsettings-desktop-schemas
```
This means that to include the version Meson has configured, you need
to drop the directory prefix and use only `#include <gdesktop-enums.h>`.
This fixes a build failure with local installs triggered by 775ec67a44
but it's also the right thing to do™.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1370
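In other words, with the Cflags above already pointing into the schemas
directory, the include should be unprefixed:
```c
/* Wrong: assumes the parent of the pkg-config include dir is on the
 * search path. */
#include <gsettings-desktop-schemas/gdesktop-enums.h>

/* Right: gsettings-desktop-schemas.pc already adds its own directory
 * via Cflags, so the header is reached without a prefix. */
#include <gdesktop-enums.h>
```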
During animations or other situations that cause multiple frames in a
row to be painted, we might skip recording frames if the max framerate
is reached.
Doing so means we might end up skipping the last frame in a series, so
the last frame we sent is not actually the final one, making things
sometimes appear to get stuck.
Handle this by creating a timeout if we ever throttle, and at the time
the timeout callback is triggered, make sure we eventually send an up to
date frame.
This is handled differently depending on the source type. A monitor
source type reports 1x1 pixel damage on each view its monitor overlaps,
while a window source type simply records a frame from the surface
directly, except without recording a timestamp, so that timestamps
always refer to when damage actually happened.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1361
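A sketch of the throttling follow-up described above, with hypothetical
type and helper names (StreamSrc, record_follow_up_frame()):
```c
#include <glib.h>

typedef struct
{
  guint follow_up_timeout_id;
  /* ... */
} StreamSrc;

static void record_follow_up_frame (StreamSrc *src);  /* hypothetical */

static gboolean
follow_up_cb (gpointer user_data)
{
  StreamSrc *src = user_data;

  src->follow_up_timeout_id = 0;
  /* Monitor sources report 1x1 damage per overlapping view; window
   * sources re-record the surface without a new timestamp. */
  record_follow_up_frame (src);

  return G_SOURCE_REMOVE;
}

/* Called whenever a frame was dropped because the max framerate was
 * reached, so the stream never ends on a stale frame. */
static void
schedule_follow_up_frame (StreamSrc *src,
                          guint      min_interval_ms)
{
  if (src->follow_up_timeout_id != 0)
    return;

  src->follow_up_timeout_id =
    g_timeout_add (min_interval_ms, follow_up_cb, src);
}
```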
Now that we don't use the record function to early out depending on
implicit state (for example, don't record pixels if only the cursor moved),
let it simply report an error when it fails, as we should no longer ever
return without pixels if nothing failed.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1361
Both do more or less the same thing, but with different methods: one puts
pixels into a buffer using the CPU, the other puts pixels into a buffer
using the GPU.
However, they behave slightly differently, which they shouldn't.
Let's first address the misleading disconnect in naming, and later we'll
make them behave more similarly.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1361
That was obviously always the intention, but it didn't work when the
display was scaled. My 3840x2160 monitor with a 3840x2160 texture was
being rendered with LINEAR filtering.
It seems the `force_bilinear` flag was TRUE when it should have been
FALSE, because a texture area that's an integer fraction of the texture
resolution is still a perfect match when that integer is the monitor
scale. We were also getting:
`meta_actor_painting_untransformed (fb, W, H, W, H, NULL, NULL) == FALSE`
when the display was scaled, because the second W,H was not the real
sampling resolution. So with both of those issues fixed we now get
NEAREST filtering when the texture resolution matches the resolution it's
physically being rendered at.
Note: The background texture actually wasn't equal to the physical monitor
resolution prior to January 2020 (76240e24f7). So it wasn't possible to do
this before then. Since then however, the texture resolution is always
equal to the physical monitor resolution.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1346
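A hedged sketch of the corrected condition; the function and parameter
names are made up, only the reasoning above is reflected:
```c
#include <math.h>
#include <glib.h>

/* Sampling is still pixel perfect when the painted area times the
 * monitor scale equals the texture resolution, so bilinear filtering
 * should not be forced in that case. */
static gboolean
should_force_bilinear (int   tex_width,
                       int   tex_height,
                       float area_width,
                       float area_height,
                       int   monitor_scale)
{
  int sampled_width = (int) roundf (area_width * monitor_scale);
  int sampled_height = (int) roundf (area_height * monitor_scale);

  return tex_width != sampled_width || tex_height != sampled_height;
}
```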
Make clutter_actor_allocate_preferred_size() convenient to use from
layout managers by not "automatically" honouring the fixed position of
the actor, but instead allowing a position to be passed to allocate the
actor at.
This way we can move the handling of fixed positions to
ClutterFixedLayout, the layout manager which is responsible for
allocating actors using fixed positions.
This also makes clutter_actor_allocate_preferred_size() more similar to
clutter_actor_allocate_available_size().
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1310
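A small sketch of how a layout manager might call the changed API,
assuming the post-change signature is
clutter_actor_allocate_preferred_size (actor, x, y):
```c
#include <clutter/clutter.h>

/* Allocate every child its preferred size at the container's origin;
 * a real layout manager would compute per-child positions. Fixed
 * positions are no longer honoured here implicitly; that is now
 * ClutterFixedLayout's job. */
static void
allocate_children_at_origin (ClutterActor *container)
{
  ClutterActor *child;

  for (child = clutter_actor_get_first_child (container);
       child != NULL;
       child = clutter_actor_get_next_sibling (child))
    clutter_actor_allocate_preferred_size (child, 0.f, 0.f);
}
```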
It's currently a bit hard to get the fixed position of an actor. It can
be either done by using g_object_get() with the "fixed-x"/"fixed-y"
properties or by calling clutter_actor_get_position().
Calling clutter_actor_get_position() can return the fixed position, but
it might also return the allocated position if the allocation is valid.
The latter is not the best behavior when querying the fixed position
during an allocation, so introduce a new function
clutter_actor_get_fixed_position() which always gets the fixed position
and returns FALSE in case no fixed position is set.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1310
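A usage sketch, assuming the new function has the shape
clutter_actor_get_fixed_position (actor, &x, &y) returning a gboolean:
```c
#include <clutter/clutter.h>

static void
log_fixed_position (ClutterActor *actor)
{
  float x, y;

  /* Unlike clutter_actor_get_position(), this never falls back to the
   * allocated position; it returns FALSE if no fixed position is set. */
  if (clutter_actor_get_fixed_position (actor, &x, &y))
    g_message ("fixed position: %.1f,%.1f", x, y);
  else
    g_message ("no fixed position set");
}
```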
It doesn't take all children - subsurfaces in this case - into
account, thus creating glitches if subsurfaces extend outside
of the toplevel surface.
Furthermore, it doesn't seem to serve any special purpose - it was
added in f7315c9a36, a pretty big commit, and no discussion was
started about the code in question. So it was likely just overlooked
in the review process.
Closes https://gitlab.gnome.org/GNOME/mutter/-/issues/873
Closes https://gitlab.gnome.org/GNOME/mutter/-/issues/1316
gnome-shell displays workspace previews at one tenth scale. That's a
few binary orders of magnitude so even using a LINEAR filter was
resulting in visible jaggies. Now we apply mipmapping so they appear
smooth.
As an added bonus, the mipmaps used occupy roughly 1% the memory of
the original image (0.1 x 0.1 = 0.01) so they actually fit into GPU/CPU
caches now and rendering performance is improved. There's no need to
traverse the original texture, which at 4K resolution occupies 33MB;
only a 331KB mipmap is needed.
In my case this reduces the render time for the overview by ~10%.
Closes: https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/1416
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1347
In the case of indirect rendering like the first frame to use mutter's
background wallpaper:
Texture_A -> FBO_B (Texture_B) -> FBO_C (screen)
we would be trying to render the contents of both FBO_B and FBO_C in
the same flush, before the contents of Texture_A had made it to FBO_B.
So when FBO_C wanted to use mipmaps of Texture_B, they didn't exist yet
and appeared all black. And the blackness would remain for subsequent
frames as cogl has now decided the mipmaps of FBO_B are no longer
"dirty" and don't need refreshing:
FBO_B (Texture_B) (mipmaps_dirty==FALSE but black) -> FBO_C (screen)
We must flush FBO_B before referencing Texture_B for use in rendering
FBO_C. This only happens when Texture_A changes (e.g. when the user
changes their background wallpaper) so there's no ongoing performance
penalty from this flush.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1347
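A purely conceptual sketch of the required ordering; flush_journal()
and generate_mipmaps() are stand-ins, not real cogl API:
```c
#include <cogl/cogl.h>

/* Hypothetical stand-ins for cogl internals: */
static void flush_journal (CoglFramebuffer *framebuffer);
static void generate_mipmaps (CoglTexture *texture);

/* Ensure FBO_B's pending draws reach Texture_B before its mipmaps are
 * generated for the FBO_C (screen) paint. */
static void
prepare_texture_b_for_screen_paint (CoglFramebuffer *fbo_b,
                                    CoglTexture     *texture_b)
{
  /* Texture_A -> FBO_B may still sit in cogl's journal; generating
   * mipmaps now would bake in black and mark them non-dirty forever. */
  flush_journal (fbo_b);

  /* Now the base level of Texture_B holds the real wallpaper, so the
   * mipmaps generated for the screen paint are correct. */
  generate_mipmaps (texture_b);
}
```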
With the introduction of the shallow relayout mechanism another small
but severe regression sneaked into our layout machinery: We might
allocate an actor twice during the same allocation cycle, with one
allocation happening using the wrong parent.
This issue happens when reparenting an actor from a NO_LAYOUT parent to
a non-NO_LAYOUT parent, in particular it triggered a bug in gnome-shell
when DND reparents a child from the NO_LAYOUT uiGroup to the overview's
Workspace actor after a drag ended. The reason the issue happens is the
following chain of events:
1. child of a NO_LAYOUT parent queues a relayout, this child is added to
the priv->pending_relayouts list maintained by ClutterStage
2. child is reparented to a different parent which doesn't have the
NO_LAYOUT flag set, another relayout is queued, this time a different
actor is added to the priv->pending_relayouts list
3. the relayout happens and we go through the pending_relayouts list
backwards; that means the correct relayout queued during 2. happens
first, then the old one happens and we simply call
clutter_actor_allocate_preferred_size() on the actor. That allocation
overrides the other, correct one.
So fix that issue by adding a method to ClutterStage which removes
actors from the pending_relayouts list again and call this method as
soon as an actor with a NO_LAYOUT parent is detached from the stage.
With that in place, we can also remove the check whether an actor is
still on stage while looping through pending_relayouts. In case
something else is going wrong and the actor is not on stage,
clutter_actor_allocate() will warn anyway.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1356
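A sketch of the fix, assuming a dequeue method shaped like the one
described above; its exact name is taken on trust from the change:
```c
#include <clutter/clutter.h>

/* Called when an actor whose parent has the NO_LAYOUT flag is detached
 * from the stage. */
static void
drop_stale_relayout_entry (ClutterActor *actor)
{
  ClutterActor *stage = clutter_actor_get_stage (actor);

  if (stage == NULL)
    return;

  /* Without this, the stale entry would later allocate the actor a
   * second time via clutter_actor_allocate_preferred_size(),
   * overriding the allocation done by the new parent. */
  clutter_stage_dequeue_actor_relayout (CLUTTER_STAGE (stage), actor);
}
```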
When picking which frame clock to use, we traverse up in the actor
hierarchy until a suitable frame clock is found. ClutterTimeline
also listens to the 'stage-views-changed' signal to make sure it's always
attached to the correct frame clock.
However, there is one special situation where neither of them would
work: when the stage doesn't have a frame clock yet, and the actor
of the timeline is outside any stage view. When that happens, the
returned frame clock is NULL, and 'stage-views-changed' is never
emitted by the actor.
Monitor the stage for stage view changes when the frame clock is
NULL.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
An actor may be placed without being on any current stage view; in this
case, to get the ball rolling, walk up the actor tree to find the first
actor from which a frame clock can be picked.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
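A conceptual sketch of the walk-up; frame_clock_from_stage_views() is a
hypothetical helper standing in for looking at the stage views the
actor currently overlaps:
```c
#include <clutter/clutter.h>

/* Hypothetical: returns the frame clock of a stage view the actor is
 * on, or NULL if it is on none. */
static ClutterFrameClock * frame_clock_from_stage_views (ClutterActor *actor);

static ClutterFrameClock *
pick_frame_clock (ClutterActor *actor)
{
  ClutterActor *iter;

  for (iter = actor; iter != NULL; iter = clutter_actor_get_parent (iter))
    {
      ClutterFrameClock *frame_clock = frame_clock_from_stage_views (iter);

      if (frame_clock != NULL)
        return frame_clock;
    }

  /* Nothing found: the actor isn't on any stage view yet, so the
   * caller (e.g. a ClutterTimeline) has to wait for stage view
   * changes. */
  return NULL;
}
```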
The frame clock owner should be able to explicitly destroy (i.e. make
defunct) a frame clock, e.g. when a stage view is destructed. This is so
that other objects can keep a reference to it without it being left
around even after it has stopped being usable.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
Currently there is a point in time between a hot plug and when the stage
view list is up to date. The check also tests for this behaviour; should
this ever change, the test should be adapted to deal with it too.
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285