Grab operations are now always taken on the backend connection, and
this breaks GTK+'s event handling.
Instead of taking a grab op, just do the handling ourselves. The
GTK+ connection will get an implicit grab, which means pointer /
keyboard events won't be sent to the rest of mutter, which is good.
Now that we grab devices on the X11 connection, we can run into
cross-connection issues. Since GTK+ frames are on the UI connection,
they'll get the passive grab when we click on them. Forcibly ungrab
on GTK+'s connection before attempting to take a grab on the backend
connection ourselves.
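Roughly, the fix looks like the following sketch; the display handles,
device id and function name here are illustrative, not mutter's actual
helper:

    #include <gdk/gdkx.h>
    #include <X11/extensions/XInput2.h>

    static void
    take_backend_pointer_grab (Display *backend_xdisplay,
                               Window   grab_xwindow,
                               int      pointer_device_id,
                               Time     timestamp)
    {
      Display *gtk_xdisplay =
        GDK_DISPLAY_XDISPLAY (gdk_display_get_default ());
      unsigned char mask_bits[XIMaskLen (XI_LASTEVENT)] = { 0 };
      XIEventMask mask = { pointer_device_id, sizeof (mask_bits), mask_bits };

      /* Drop the passive grab held on GTK+'s connection... */
      XIUngrabDevice (gtk_xdisplay, pointer_device_id, timestamp);
      XFlush (gtk_xdisplay);

      /* ...then take the active grab on the backend connection. */
      XISetMask (mask_bits, XI_ButtonPress);
      XISetMask (mask_bits, XI_ButtonRelease);
      XISetMask (mask_bits, XI_Motion);

      XIGrabDevice (backend_xdisplay, pointer_device_id, grab_xwindow,
                    timestamp, None, XIGrabModeAsync, XIGrabModeAsync,
                    False, &mask);
    }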
It's been long enough. We can mandate support for these, at least
at build-time. The code doesn't actually compile without either
of these, so just consider that unsupported.
Looking at the code paths where is_mouse / is_keyboard are used, none
of them should ever run when dealing with a COMPOSITOR grab op, since
those ops are either filtered out above or the method is simply never
called during that time.
Having COMPOSITOR in here is confusing and forces workarounds elsewhere
in the code, so just take it out.
The idea here is that while we hold a WM-side grab, such as a
compositor grab or a resize grab, we need to remove the focus from the
Wayland client.
We make a special exception for CLICKING operations, because these are
really an internal state machine that runs while you're pressing a
button inside a frame, and in that case we must not clear the focus.
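A minimal sketch of that rule (the helper and field names here are
assumptions, not mutter's exact code):

    static void
    sync_wayland_input_focus (MetaDisplay           *display,
                              MetaWaylandCompositor *compositor)
    {
      MetaWindow *focus = display->focus_window;

      /* Any WM-side grab -- compositor grab, move/resize, keybinding --
       * pulls focus away from Wayland clients, except the CLICKING ops,
       * which are just the state machine for a pressed frame button. */
      if (display->grab_op != META_GRAB_OP_NONE &&
          !meta_grab_op_is_clicking (display->grab_op))
        focus = NULL;

      meta_wayland_compositor_set_input_focus (compositor, focus);
    }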
A careful analysis of mutter's codebase shows that nothing actually
passes anything but 0 to this. gnome-shell has one instance, but it's
most likely a mistake.
Remove the grab_mask field and the one place in keybindings.c that uses it.
The parameter to begin_grab_op is left in for API compatibility reasons.
Compositors haven't been able to manage more than one screen for
quite a while. Merge MetaCompScreen into MetaCompositor, and update
the API to match.
We still keep MetaScreen in the public compositor API for compatibility
purposes.
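Roughly, the merged object ends up looking like this (the field list is
a sketch, not MetaCompositor's actual layout):

    struct _MetaCompositor
    {
      MetaDisplay  *display;

      /* Formerly per-screen state in MetaCompScreen: */
      ClutterActor *stage;
      ClutterActor *window_group;
      ClutterActor *top_window_group;
      GList        *windows;
      Window        output;
    };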
We previously separated out MetaDisplay and MetaScreen. mutter
would only manage one screen, but we still kept a list of screens
for simplicity.
With Wayland support, we no longer care about the ability to
manage more than one screen at a time. Remove this by killing
the list of screens, in favor of having just one MetaScreen
in MetaDisplay.
We also kill off active_screen at the same time, since it's
not necessary anymore.
A future cleanup should merge MetaDisplay and MetaScreen. To avoid
breaking API, we should probably keep MetaScreen around as a dummy
type.
display.c is getting a bit crowded. Move most of the event handling
out to a separate file, events.c.
The long-term goal is to have generic event handling here, with
backend-specific handling for the types of windows and such.
If we have a CLICKING grab op, we still need to send events to
Xwayland so that we get them back for GTK+ to process; thus, we can't
steer the Wayland input focus away from it.
https://bugzilla.gnome.org/show_bug.cgi?id=726123
This ensures that we send the proper leave and enter events to wayland
clients.
In particular, this fixes a bug with server-side-decorated Xwayland
windows where clicking and dragging on the title bar to move the window
only works every other try (unless the pointer moves away from the
title bar between tries). This happens because Xwayland gets a button
press but never sees the release, so when the next button press arrives
its pointer-button tracking logic decides the button is already pressed
and discards the event. Sending the proper Wayland pointer leave event
fixes this, since Wayland clients must forget about button state at
that point.
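For illustration, a simplified sketch of the repick (one pointer
resource per client assumed; not mutter's actual code) -- the leave
event is what tells the client, Xwayland included, to drop its
pressed-button state:

    #include <wayland-server.h>

    static void
    pointer_set_focus (struct wl_display  *display,
                       struct wl_resource *pointer_resource,
                       struct wl_resource *old_surface,
                       struct wl_resource *new_surface,
                       wl_fixed_t          sx,
                       wl_fixed_t          sy)
    {
      uint32_t serial = wl_display_next_serial (display);

      if (old_surface != NULL)
        /* The client must now forget about button state. */
        wl_pointer_send_leave (pointer_resource, serial, old_surface);

      if (new_surface != NULL)
        wl_pointer_send_enter (pointer_resource, serial, new_surface, sx, sy);
    }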
https://bugzilla.gnome.org/show_bug.cgi?id=726123
At one point, it was supported to run mutter without a compositor,
but we don't allow that any longer. A lot of code already assumes
display->compositor exists and doesn't check for a NULL pointer,
so just kill the rest of the checks.
Make it a compile-time flag rather than a run-time flag, because
practically any time you're going to be debugging event spewing,
you're going to have to recompile anyway. Remove the WITH_VERBOSE_MODE
checks, too.
This is used for Wayland popup grabs.
The issue here is that we don't want the code that raises or focuses
windows based on mouse ops to run while a client has a grab.
We still keep the "old" grab infrastructure in place for now, but
ideally we'd replace it eventually with a better grab-op infrastructure.
The only events we handle as XIEvents are FocusIn/Out, Enter and
Leave. Motion, ButtonPress/Release, KeyPress/Release are handled
through clutter instead.
Among other things, this means we no longer need to fake motion
compression by peeking at the GDK event queue.
Mouse event handling was duplicated, resulting in weird interactions
if clutter was allowed to see certain events (for example under
wayland, where it gets all events). Because clutter now sees all X
events, even when running as an x11 compositor, we can handle
everything using the clutter variants.
At the same time, rewrite the passive button grab code a little, to
make it clear what is being matched against what, and why.
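For reference, a rough sketch of such a passive button grab (XI2
assumed; the modifier handling is simplified compared to the real code,
which also covers NumLock combinations):

    #include <X11/Xlib.h>
    #include <X11/extensions/XInput2.h>

    static void
    grab_window_button (Display      *xdisplay,
                        Window        xwindow,
                        unsigned int  super_mask)
    {
      unsigned char mask_bits[XIMaskLen (XI_LASTEVENT)] = { 0 };
      XIEventMask mask = { XIAllMasterDevices, sizeof (mask_bits), mask_bits };
      XIGrabModifiers mods[] = {
        { super_mask,            0 },   /* <Super>+Button1 */
        { super_mask | LockMask, 0 },   /* ...also with CapsLock down */
      };

      /* We want the press, the release and the motion in between. */
      XISetMask (mask_bits, XI_ButtonPress);
      XISetMask (mask_bits, XI_ButtonRelease);
      XISetMask (mask_bits, XI_Motion);

      XIGrabButton (xdisplay, XIAllMasterDevices, 1, xwindow, None,
                    XIGrabModeSync, XIGrabModeAsync, False, &mask,
                    sizeof (mods) / sizeof (mods[0]), mods);
    }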
The rendering logic before was somewhat complex. We had three independent
cases to take into account when doing rendering:
* X11 compositor. In this case, we're a traditional X11 compositor,
not a Wayland compositor. We use XCompositeNameWindowPixmap to get
the backing pixmap for the window, and deal with the COMPOSITE
extension messiness.
In this case, meta_is_wayland_compositor() is FALSE.
* Wayland clients. In this case, we're a Wayland compositor managing
Wayland surfaces. The rendering for this is fairly straightforward,
as Cogl handles most of the complexity with EGL and SHM buffers...
Wayland clients give us the input and opaque regions through
wl_surface.
In this case, meta_is_wayland_compositor() is TRUE and
priv->window->client_type == META_WINDOW_CLIENT_TYPE_WAYLAND.
* XWayland clients. In this case, we're a Wayland compositor, like
above, and XWayland hands us Wayland surfaces. XWayland handles
the COMPOSITE extension messiness for us, and hands us a buffer
like any other Wayland client. We have to fetch the input and
opaque regions from the X11 window ourselves.
In this case, meta_is_wayland_compositor() is TRUE and
priv->window->client_type == META_WINDOW_CLIENT_TYPE_X11.
We now split the rendering logic into two subclasses, which are:
* MetaSurfaceActorX11, which handles the X11 compositor case, in that
it uses XCompositeNameWindowPixmap to get the backing pixmap, and
deals with all the COMPOSITE extension messiness.
* MetaSurfaceActorWayland, which handles the Wayland compositor case
for both native Wayland clients and XWayland clients. XWayland handles
COMPOSITE for us, and handles pushing a surface over through the
xf86-video-wayland DDX.
Frame sync is still in MetaWindowActor, as it needs to work for both
the X11 compositor and XWayland client cases. When Wayland's video
display protocol lands, this will need to be significantly overhauled,
as it will have to work for any wl_surface, including subsurfaces, so
frame sync will need to be handled at the surface level.
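Choosing the subclass then becomes straightforward; a minimal sketch
(the constructor names and the window->surface field are assumptions
for illustration):

    static MetaSurfaceActor *
    create_surface_actor (MetaWindow *window)
    {
      if (!meta_is_wayland_compositor ())
        /* Plain X11 compositor: redirect the window, name its pixmap. */
        return meta_surface_actor_x11_new (window);

      /* Wayland compositor: both native clients and XWayland hand us a
       * wl_surface, so one actor type covers both. */
      return meta_surface_actor_wayland_new (window->surface);
    }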
https://bugzilla.gnome.org/show_bug.cgi?id=720631
The input region was set on the shaped texture, but the shaped texture
was never picked properly, as it was never set to be reactive. Move the
pick implementation and reactivity to the MetaSurfaceActor, and update
the code everywhere else to expect a MetaSurfaceActor.
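A simplified sketch of the pick path (Clutter 1.x-era pick API; the
private input_region field and the batching details are assumptions):

    static void
    meta_surface_actor_pick (ClutterActor       *actor,
                             const ClutterColor *color)
    {
      MetaSurfaceActor *self = META_SURFACE_ACTOR (actor);
      cairo_region_t *input_region = self->priv->input_region;
      int i, n_rects;

      if (!clutter_actor_should_pick_paint (actor))
        return;

      /* No input region set: pick the whole actor as usual. */
      if (input_region == NULL)
        {
          CLUTTER_ACTOR_CLASS (meta_surface_actor_parent_class)->pick (actor, color);
          return;
        }

      /* Only the input region is pickable. */
      cogl_set_source_color4ub (color->red, color->green,
                                color->blue, color->alpha);
      n_rects = cairo_region_num_rectangles (input_region);
      for (i = 0; i < n_rects; i++)
        {
          cairo_rectangle_int_t rect;
          cairo_region_get_rectangle (input_region, i, &rect);
          cogl_rectangle (rect.x, rect.y,
                          rect.x + rect.width, rect.y + rect.height);
        }
    }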
I implemented pinging, but never actually enabled the feature properly
on Wayland surfaces: the net_wm_ping hint was never set to TRUE, so the
fallback path was always hit.
Rename net_wm_ping to can_ping so it doesn't take on an
implementation-specific meaning, and set it for all Wayland windows.
This was a bad idea, as ping/pong has moved to a client-specific
request/event pair, rather than a surface-specific one. Revert
the changes we made here and correct the code to make up for it.
This reverts commit aa3643cdde.
Traditionally, WMs unmap windows when minimizing them, and map them
when restoring them or wanting to show them for other reasons, like
upon creation.
However, as metacity morphed into mutter, we optionally chose to keep
windows mapped for the lifetime of the window under the user option
"live-window-previews", which makes the code keep windows mapped so it
can show window previews for minimized windows in other places, like
Alt-Tab and Expose.
I removed this preference two years ago mechanically, by removing all
the if statements, but never went through and cleaned up the code so
that windows are simply mapped for the lifetime of the window -- the
"architecture" of the old code that maps and unmaps on show/hide was
still there.
Remove this now.
The one case we still need to be careful of is shaded windows, in which
we do still unmap the client window. In the future, we might want to
show previews of shaded windows in the overview and Alt-Tab. In that
case we'd also keep shaded windows mapped and could remove all unmap
logic,
but we'd need a more complex method of showing the shaded titlebar, such
as using a different actor.
At the same time, simplify the compositor interface by removing the
meta_compositor_window_[un]mapped API and instead adding/removing the
window on demand.
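In other words, roughly (call sites approximate; the add/remove entry
points are named here for illustration):

    static void
    on_window_managed (MetaDisplay *display, MetaWindow *window)
    {
      /* The compositor learns about the window when it is managed... */
      meta_compositor_add_window (display->compositor, window);
    }

    static void
    on_window_unmanaged (MetaDisplay *display, MetaWindow *window)
    {
      /* ...and forgets it when it is unmanaged; there is no map/unmap
       * tracking in between. */
      meta_compositor_remove_window (display->compositor, window);
    }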
https://bugzilla.gnome.org/show_bug.cgi?id=720631
The goal here is to make MetaWindow represent a toplevel, managed
window, regardless of whether it's X11 or Wayland, and build an
abstraction layer on top.
Right now, most of the X11 code is in core/ and the Wayland code in
wayland/, but in the future, I want to move a lot of the X11 code to a
new toplevel directory, x11/.
X11 window frames use special UI grab ops, like
META_GRAB_OP_CLICKING_MAXIMIZE, in order to work properly. As the
frames are X11 clients, we need to pass X events through to them. So,
similar to how handle_xevent works, use two variables, bypass_clutter
and bypass_wayland, and set them when we handle specific events.
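A sketch of the two-flag pattern (cases trimmed; the helper names, the
enum values other than the CLICKING ops, and the surrounding logic are
assumptions):

    static gboolean
    handle_event (MetaDisplay *display, const ClutterEvent *event)
    {
      gboolean bypass_clutter = FALSE;
      gboolean bypass_wayland = FALSE;

      if (display->grab_op == META_GRAB_OP_COMPOSITOR)
        {
          /* A compositor grab swallows the event: clutter (the shell)
           * sees it, Wayland clients do not. */
          bypass_wayland = TRUE;
        }
      else if (meta_grab_op_is_clicking (display->grab_op))
        {
          /* Frame UI ops: the frame is an X11 client, so the event must
           * still be forwarded through Xwayland for GTK+ to process. */
          bypass_wayland = FALSE;
        }

      if (!bypass_wayland)
        meta_wayland_compositor_handle_event (display->wayland_compositor,
                                              event);

      /* Returning TRUE stops clutter from processing the event further. */
      return bypass_clutter;
    }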
Traditionally, WMs unmap windows when minimizing them, and map them
when restoring them or wanting to show them for other reasons, like
upon creation.
However, as metacity morphed into mutter, we optionally chose to keep
windows mapped for the lifetime of the window under the user option
"live-window-previews", which makes the code keep windows mapped so it
can show window previews for minimized windows in other places, like
Alt-Tab and Expose.
I removed this preference two years ago mechanically, by removing all
the if statements, but never went through and cleaned up the code so
that windows are simply mapped for the lifetime of the window -- the
"architecture" of the old code that maps and unmaps on show/hide was
still there.
Remove this now.
The one case we still need to be careful of is shaded windows, in which
we do still unmap the client window. Theoretically, we might want to
show previews of shaded windows in the overview and Alt-Tab, so
removing the complex unmap tracking for this is left for later.
Remove some obvious server grabs from the window creation codepath, as
well as ones that are taken at startup.
During startup, there is no need to grab: we install the event handlers
before querying for the already-existing windows, so there is no danger
that we will 'lose' some window. We might try to create a window twice
(if it comes back in the original query and then we get an event for it)
but the code is already protected against such conditions.
When windows are created later, we also do not need grabs; we just
need appropriate error checking, as the window may be destroyed at any
time (or may have already been destroyed).
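For example, the attribute query can simply be wrapped in an error trap
instead of a server grab (a sketch assuming mutter's error-trap helpers
of the time):

    static gboolean
    query_initial_attributes (MetaDisplay       *display,
                              Window             xwindow,
                              XWindowAttributes *attrs)
    {
      Status ok;

      meta_error_trap_push (display);
      ok = XGetWindowAttributes (display->xdisplay, xwindow, attrs);
      if (meta_error_trap_pop_with_return (display) != Success || !ok)
        return FALSE;   /* the window was destroyed before we got here */

      return TRUE;
    }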
The stack tracker is unaffected here - as it listens to CreateNotify and
DestroyNotify events and responds directly, the internal stack
representation will always be consistent even if the window goes away while
we are processing MapRequest or similar.
Now that there are no grabs we don't have to worry about explicitly calling
display_notify_window after grabs have been dropped. Fold that into
meta_window_new_shared().
https://bugzilla.gnome.org/show_bug.cgi?id=721345
Server grabs are not as evil as you might expect, but there is
agreement that their usage should be limited.
Server grabs can cause things to go rather wrong when mutter emits
a signal while it has grabbed the server. If the receiver of that signal
waits for a synchronous action performed by another client, then you
have a deadlock. This happens with Mali binary GLESv2 drivers :(
https://bugzilla.gnome.org/show_bug.cgi?id=721345
The compositor code used to handle X windows that didn't have a
corresponding MetaWindow (see commit d538690b), which is why the
attribute query is separated.
As that doesn't happen any more, we can clean up. No functional changes.
Suggested by Owen Taylor.
https://bugzilla.gnome.org/show_bug.cgi?id=721345
When a client spontaneously focuses its window, perhaps in response to
WM_TAKE_FOCUS, we'll get a FocusOut/FocusIn pair with the same serial.
Updating display->focus_serial in response to the FocusOut was then
causing us to ignore the FocusIn and conclude that the focus was not on
any window.
We need to distinguish this spontaneous case from the case where we
set the focus ourselves - when we set the focus ourselves, we're careful
to combine the SetFocus with a property change so that we know definitively
what focus events we have already accounted for.
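Roughly, the "set the focus ourselves" path looks like this sketch (the
function, parameters and atom name are made up for illustration; only
focus_serial comes from the description above):

    #include <X11/Xlib.h>
    #include <X11/Xatom.h>

    static void
    request_input_focus (MetaDisplay *display,
                         Window       focus_xwindow,
                         Window       mutter_owned_xwindow,
                         Time         timestamp)
    {
      /* Serial that the SetFocus request below will be assigned. */
      unsigned long serial = XNextRequest (display->xdisplay);

      XSetInputFocus (display->xdisplay, focus_xwindow,
                      RevertToPointerRoot, timestamp);

      /* Pair the SetFocus with a harmless zero-length property append on
       * a window we own, so the event stream contains a request of ours
       * right after the focus change; focus events up to that serial are
       * ones we have already accounted for. */
      XChangeProperty (display->xdisplay, mutter_owned_xwindow,
                       XInternAtom (display->xdisplay,
                                    "_MUTTER_FOCUS_SET", False),
                       XA_STRING, 8, PropModeAppend,
                       (const unsigned char *) "", 0);

      display->focus_serial = serial;
    }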
https://bugzilla.gnome.org/show_bug.cgi?id=720558