An input-only grab is a ClutterGrab on the stage that doesn't have an
explicit actor associated with it. This is useful for cases where events
should be captured as if focus was stolen to some mysterious place that
doesn't have anything in the scene graph that represents it.
Internally, it's implemented using a 0x0 sized actor attached directly
to the stage, and a ClutterAction that consumes the events. An
input-only grab takes a handler, user data and a destroy function for
the user data. These are handed to the ClutterAction, which does the
actual event handling.
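A minimal sketch of how such a grab might be taken from compositor code;
the entry point name and exact handler signature here are assumptions for
illustration, not necessarily the actual API:
```c
/* Hypothetical usage sketch; names are illustrative only. */
static gboolean
on_captured_event (const ClutterEvent *event,
                   gpointer            user_data)
{
  /* Handle the event as if it were delivered to an invisible,
   * input-only focus target. */
  return CLUTTER_EVENT_STOP;
}

static void
take_input_only_grab (ClutterStage *stage)
{
  /* Assumed entry point: the handler, user data and its destroy notify
   * are forwarded to the internal ClutterAction. */
  clutter_stage_grab_input_only (stage,
                                 on_captured_event,
                                 NULL,   /* user data */
                                 NULL);  /* GDestroyNotify */
}
```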
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2628>
Adding a barrier and later enabling the input capture session will
create a MetaBarrier instance for each added input capture barrier.
The barriers are created as "sticky", which means that when a pointer
hits a barrier, it'll stick to the point of entry until it's
released.
The input capture session is also turned into a state machine with
explicit state, to more easily track things.
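A minimal sketch of what such explicit session state could look like;
the enum name and states are assumptions for illustration:
```c
/* Hypothetical state set for the input capture session. */
typedef enum _InputCaptureSessionState
{
  INPUT_CAPTURE_SESSION_STATE_INIT,     /* created, not yet enabled */
  INPUT_CAPTURE_SESSION_STATE_ENABLED,  /* barriers instantiated as MetaBarriers */
  INPUT_CAPTURE_SESSION_STATE_ACTIVE,   /* input is currently being captured */
  INPUT_CAPTURE_SESSION_STATE_CLOSED,   /* session torn down */
} InputCaptureSessionState;
```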
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2628>
This allows a sticky barrier to hold the pointer until it is
released, without the owner of the barrier needing a barrier event to
release it. It will be used to implement input capturing.
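A minimal sketch of how a barrier owner might release a stuck pointer
without having a barrier event in hand; the function shown is a
hypothetical assumption for illustration:
```c
/* Hypothetical release path: no MetaBarrierEvent is required. */
static void
on_client_acknowledged_capture (MetaBarrier *barrier)
{
  /* Assumed event-less release variant, letting the barrier owner
   * free the pointer on its own terms. */
  meta_barrier_release_pointer (barrier);
}
```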
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2628>
A sticky barrier means that a pointer in motion intersecting a barrier
doesn't move further once it has hit it. The intention with this is to
allow input capture clients to continue a motion once a barrier is hit.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2628>
This API aims to provide a way for users to capture input devices under
certain conditions, for example when a pointer crosses a specified
barrier.
So far only part of the API is implemented, specifically the session
management as well as zone advertisement, where a zone refers to a
region in the compositor whose edges will eventually be made available
for barrier placement.
So far the remote access handle is created when the session is enabled,
even though input capturing isn't actually active yet. This will change
in the future once it can actually become active.
v2: Remove absolute/relative pointer, keep only pointer (ofourdan)
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2628>
A 2D actorless paint volume can't ever need `enlarge_for_effects` because
it has no depth. Clamping to the pixel boundary is sufficient in this case
and avoids extending volumes on the edge of the view into the next view,
which in turn avoids unnecessary secondary monitor updates.
Paint volumes correctly become actorless where `clutter_actor_finish_layout`
calls `_clutter_paint_volume_transform_relative`.
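A minimal sketch of the condition this implies in the paint volume
completion path; the field and helper names are assumptions for
illustration:
```c
/* Hypothetical sketch; field and helper names are illustrative. */
if (pv->is_2d && !pv->actor)
  {
    /* Actorless and flat: effects can never enlarge it, so clamping
     * to the pixel boundary is enough and the volume stays inside
     * the current view. */
    clamp_to_pixel_boundary (pv);
  }
else
  {
    enlarge_for_effects (pv);
    clamp_to_pixel_boundary (pv);
  }
```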
Relates to: https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/6819
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3112>
This was a temporary fix until there was a better crossing event
delivery mechanism that accounted for actor changes beneath the pointer.
We nowadays have that, and don't seem to need this extra kick to get
crossing events (and cursor changes, etc.) triggered when windows appear
or disappear under the pointer.
This commit is effectively a revert of commit
a64dba4d7a.
Closes: https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/6808
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3104>
With window_is_terminal gone, "strict" and "smart" focus mode have no
behavioural difference. Let's broaden the scope of strict focus mode,
such that windows never automatically receive focus unless they are an
ancestor of the transient.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3063>
As noted in the comments of window_is_terminal, this is a hack. This
code has not been touched for the better part of a decade. App res_class
tends to differ between Wayland and X11, so it is likely that none of
these apps have ever been recognised as terminals under Wayland. Also,
there are reports that strict focus mode does not work under X11 either,
likely due to changes in these terminal apps over the years resulting
in res_class values different from those manually specified here. Let's
remove this hack and change strict focus mode accordingly.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3063>
Instead of using `clutter_actor_get_resource_scale()`, we now deduce the
intended buffer scale from the window by dividing the unscaled size by
the final actor size. This is more correct because, while the return value
of `clutter_actor_get_resource_scale()` depends only on the monitor where
the surface resides, the actual scale of the surface is determined
solely by the application itself. `get_resource_scale` will differ from
the actual buffer scale if the application only supports 100% scaling
(Xwayland), or is performing scaling with wp_viewporter (clients using
fractional_scale_v1).
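A minimal sketch of the deduction described above; the unscaled-size
helper is an assumption for illustration:
```c
/* Hypothetical sketch: derive the effective buffer scale from the
 * ratio between the unscaled window size and the final actor size. */
static double
deduce_buffer_scale (MetaWindowActor *window_actor)
{
  MetaWindow *window = meta_window_actor_get_meta_window (window_actor);
  MetaRectangle unscaled;   /* size of the attached buffer content */
  float actor_width, actor_height;

  get_unscaled_size (window, &unscaled);  /* assumed helper */
  clutter_actor_get_size (CLUTTER_ACTOR (window_actor),
                          &actor_width, &actor_height);

  /* E.g. a 3000px-wide buffer drawn into a 2000 logical-pixel actor
   * implies a 1.5x (150%) buffer scale. */
  return unscaled.width / actor_width;
}
```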
This also fixes a mismatch between the calculated buffer sizes between
`meta_window_actor_get_buffer_bounds` and
`meta_window_actor_blit_to_framebuffer` which causes broken
screencasting for Chromium 114 and later when using the native Ozone
Wayland backend.
Additionally, this commit changes
`meta_window_actor_blit_to_framebuffer` from using a simple translation
to using the inverse of the transformation matrix between the parent of
the window actor and the surface actor, to ensure maximum sharpness for
fractionally scaled windows.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3053>
Previously, restarting mutter in an X11 session resulted in
the previously set color temperature not being applied.
Fix that by applying the color temperature right after
the org.gnome.SettingsDaemon.Color proxy has been created.
Furthermore, only call `update_all_gamma()` from `on_gsd_color_ready()`
when the temperature has actually changed. Otherwise there is no need
since the current temperature has already been (or will soon be) applied
to all ready color devices.
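A minimal sketch of the adjusted ready handler; the struct fields and
helper names are assumptions for illustration:
```c
/* Hypothetical sketch of the proxy-ready handler. */
static void
on_gsd_color_ready (MetaColorManager *color_manager)
{
  unsigned int temperature;

  /* Read the previously set color temperature from the proxy right
   * away, so a restarted mutter in an X11 session picks it up. */
  temperature = get_gsd_color_temperature (color_manager); /* assumed helper */

  if (temperature != color_manager->temperature)
    {
      color_manager->temperature = temperature;
      /* Only resync gamma when the temperature actually changed;
       * otherwise it has been (or will soon be) applied already. */
      update_all_gamma (color_manager);
    }
}
```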
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3008>
We'd get a re-entry-like scenario when destroying the PipeWire stream
object, where PipeWire would call the stream process vfunc. When this
happened, we had already destroyed the stream, so don't try to dequeue
anything; just do an early exit. Fixes the following crash in the
test case client:
#0 pw_stream_dequeue_buffer() in /usr/lib64/libpipewire-0.3.so.0.367.0
#1 on_stream_process() at ../src/tests/screen-cast-client.c:348
#2 do_call_process() in /usr/lib64/libpipewire-0.3.so.0.367.0
#3 flush_items() in /usr/lib64/spa-0.2/support/libspa-support.so
#4 loop_invoke() in /usr/lib64/spa-0.2/support/libspa-support.so
#5 impl_send_command.lto_priv.0() in /usr/lib64/libpipewire-0.3.so.0.367.0
#6 suspend_node.lto_priv.0() in /usr/lib64/libpipewire-0.3.so.0.367.0
#7 pw_impl_node_set_state() in /usr/lib64/libpipewire-0.3.so.0.367.0
#8 client_node_removed() in /usr/lib64/pipewire-0.3/libpipewire-module-client-node.so
#9 pw_proxy_destroy() in /usr/lib64/libpipewire-0.3.so.0.367.0
#10 pw_stream_disconnect() in /usr/lib64/libpipewire-0.3.so.0.367.0
#11 pw_stream_destroy() in /usr/lib64/libpipewire-0.3.so.0.367.0
#12 stream_free() at ../src/tests/screen-cast-client.c:530
#13 main() at ../src/tests/screen-cast-client.c:803
#14 __libc_start_call_main() at ../sysdeps/nptl/libc_start_call_main.h:58
#15 __libc_start_main() at ../csu/libc-start.c:360
#16 _start() in /home/jonas/Dev/gnome/mutter/build/src/tests/mutter-screen-cast-client
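A minimal sketch of the kind of early-exit guard described above, placed
at the top of the process callback; the struct layout and field name are
assumptions for illustration:
```c
/* Hypothetical sketch of the process vfunc guard. */
static void
on_stream_process (void *user_data)
{
  Stream *stream = user_data;

  /* The stream is being torn down; PipeWire may still invoke the
   * process vfunc from pw_stream_destroy(), so bail out early
   * instead of dequeuing from a half-destroyed stream. */
  if (!stream->pipewire_stream)
    return;

  /* ... normal dequeue / process / queue-back path ... */
}
```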
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3095>
Setting up the image with a custom default user broke gnome-shell's
toolbox images. While running tests as non-root user seems like a
good idea, keeping people's development environment working should
be figured out first.
This partially reverts commit 69cc65d15f.
Keep the image to have a local user and use it to run tests so that
we can ensure that permissions are respected.
Co-authored-by: Florian Müllner <fmuellner@gnome.org>
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3083>
If an actor's expand properties haven't been set explicitly, its
expand flags are computed by traversing its children.
However, we currently also traverse into children when explicitly
setting "expand" to FALSE, because that is the default value and
the properties are only marked as explicitly-set when the value
actually changed.
Fix this, so propagating expand flags can be stopped without
hacks like
```c
g_object_set (actor, "x-expand", TRUE, NULL);
g_object_set (actor, "x-expand", FALSE, NULL);
```
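A minimal sketch of what the fix amounts to in the setter; the private
field name is an assumption for illustration:
```c
/* Hypothetical sketch of the expand setter after the fix. */
void
clutter_actor_set_x_expand (ClutterActor *actor,
                            gboolean      expand)
{
  ClutterActorPrivate *priv = actor->priv;

  /* Mark the property as explicitly set even when the new value
   * equals the default (FALSE), so computing expand flags no longer
   * traverses into this actor's children. */
  priv->x_expand_set = TRUE;

  if (priv->x_expand != expand)
    {
      priv->x_expand = expand;
      clutter_actor_queue_relayout (actor);
      g_object_notify_by_pspec (G_OBJECT (actor),
                                obj_props[PROP_X_EXPAND]);
    }
}
```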
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3088>
If the timelines don't get destroyed they keep references to frame
clocks. Later tests check for the destruction of those frame clocks and
then can fail if the frame clock is implemented slightly differently.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3084>
In remote desktop sessions, streams can be created and destroyed
on-the-fly.
A stream being gone is not necessarily an error, so don't treat that
situation as one.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2911>
As it is the only place where cogl depends directly on cairo minus
the whole cairo_region_t.
The motivation behind the removal of this helper is to reduce the usage
of cairo in libmutter, with the goal of potentially dropping it completely
in certain places or replacing it with pixman.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3079>
The previous logic didn't work correctly at least when priority-based
preemption wasn't supported by the DRM driver, such as in the case
of amdgpu. The call to glGetQueryObjecti64v would block on client
work which is already in progress (most likely for the next frame)
and delay notifying the ClutterFrameClock about presentation.
Conveniently, the Wayland transactions mechanism guarantees that all
fences of a dma-buf buffer are signalled before the buffer is
included in a frame, which means that dma-buf buffers are ready for
presentation when being directly scanned-out.
Direct scanout is only supported for dma-buf buffers too, which means
that all buffers going through direct scanout are effectively ready
and require no GPU rendering before presentation.
Assuming zero rendering time for dma-buf buffers going through direct
scanout simplifies the code and removes the need for
glGetQueryObjecti64v, thus avoiding the aforementioned issue where it
could block for longer than expected.
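A minimal sketch of the simplification this describes; the flag and
helper names are assumptions for illustration:
```c
/* Hypothetical sketch: skip the GPU timestamp query for direct scanout. */
if (frame_info->flags & FRAME_INFO_FLAG_DIRECT_SCANOUT)
  {
    /* dma-buf fences are guaranteed signalled before the buffer is
     * included in a frame, so treat the rendering time as zero. */
    frame_info->rendering_duration_ns = 0;
  }
else
  {
    /* Composited frames still measure GPU work via a timestamp query. */
    frame_info->rendering_duration_ns = query_gpu_rendering_duration ();
  }
```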
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/2766
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3080>
This is expected for the common case of direct scanout of Wayland
buffers where transactions guarantee that all buffer fences are
signalled before a buffer is included in a frame.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3080>
Dispatch jitter is how much the dispatch interval has changed between
frames. It's a measure of sampling smoothness for events that are occurring
at a higher rate than the screen is refreshing:
* Mouse movement
* Clients rendering at swap interval zero
* Keyframe animation position
Zero jitter is ideal but will practically never happen, and a jitter value
of several thousand microseconds will be visible to the naked eye as stutter
even if you're maintaining a perfect frame rate.
To make the numbers easier to interpret we also log the jitter as a
percentage of the refresh interval.
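A minimal sketch of how the logged value could be computed; the function
and variable names are assumptions for illustration:
```c
#include <glib.h>

/* Hypothetical sketch of the jitter computation being logged. */
static void
log_dispatch_jitter (gint64 dispatch_interval_us,
                     gint64 last_dispatch_interval_us,
                     gint64 refresh_interval_us)
{
  /* Jitter is how much the dispatch interval changed since last frame. */
  gint64 jitter_us = ABS (dispatch_interval_us - last_dispatch_interval_us);
  double jitter_percentage = 100.0 * jitter_us / refresh_interval_us;

  g_message ("Dispatch jitter: %" G_GINT64_FORMAT " µs (%.1f%% of refresh interval)",
             jitter_us, jitter_percentage);
}
```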
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3082>
This could happen when moving the cursor over GUIs that only redraw
in response to cursor movement. Mutter would experience alternating
cursor-only updates and page flips, and so the `max_render_time_allowed_us`
would jump between pessimised and optimised, resulting in inconsistent
frame pacing.
Aside from fixing the smoothness problem this should also provide
lower latency cursor movement.
Fixes: https://launchpad.net/bugs/2023766
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3074>
Depending on the ordering of the surface-associated resources
being destroyed, we may fall into the following situation:
- wl_surface is destroyed
- destruction notifications for the surface runs
- The MetaWaylandKeyboard attempts to synchronize the window
focus
- The MetaWindow is not destroyed yet, so the focused window
remains the same, and the MetaWaylandKeyboard keeps the same
focus MetaWaylandSurface.
- wl_surface finalizes destruction, MetaWaylandSurface now has
a NULL resource
- xdg_toplevel destructor kicks in, it unmanages the window
- The current focus window is again looked up, forced to pick
  a different window
- The MetaWaylandKeyboard focus now changes, tries to leave the
old surface, but it has a NULL resource already, and raises
a protocol error.
If the order is inverted, the window being unmanaged triggers a
focus change into a different window, the MetaWaylandKeyboard
triggers a focus change while the MetaWaylandSurface is still
intact, it succeeds, and the window gets properly destroyed.
In order to make this independent of the order, it makes sense
to make MetaWaylandKeyboard behave like the other objects tracking
focus surfaces, and have it take care of its own little parcel. The
surface destructor is changed to simply unset the keyboard focus
(guaranteeing that the old focus is left while the surface
resource is still up), leaving potential focus changes to
the xdg_toplevel_destructor->unmanage->update_focus paths.
Doing that alone is basically a revert of commit 228d681b, and thus
still subject to keyboard focus being lost after a popup is
destroyed. Change the approach so that the focus sync (and new focus
surface lookup) is triggered from xdg_popup_destructor, specifically
for popups, similarly to xdg_toplevel.
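A minimal sketch of the surface-destructor side of this; the callback
name is an assumption for illustration:
```c
/* Hypothetical sketch of the surface destruction notification. */
static void
on_keyboard_focus_surface_destroyed (MetaWaylandSurface  *surface,
                                     MetaWaylandKeyboard *keyboard)
{
  /* Unset the focus while the surface resource is still alive, so the
   * leave event can be sent without raising a protocol error. Any new
   * focus is resolved later via the
   * xdg_toplevel_destructor->unmanage->update_focus paths. */
  meta_wayland_keyboard_set_focus (keyboard, NULL);
}
```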
Fixes: 228d681b ("wayland: Trigger full focus sync after keyboard focus surface is destroyed")
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/2853
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3077>