Using sized internal formats is required to make sure we actually get
the precision that we want.
The formats should also be renderable, otherwise they cannot be used as
a framebuffer attachment. For GLES we have to check for a bunch of
extensions and fall back to internal formats with more bits when they
are not available.
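As a rough illustration of the kind of probing this needs on GLES (Cogl has its
own extension helpers; the sketch below matches substrings rather than whole
extension tokens and is not the actual code):
```c
#include <string.h>

/* Sketch only: real code should match whole tokens in the extension string
 * and goes through Cogl's own helpers; GL headers assumed to be included. */
static int
has_gles_extension (const char *name)
{
  const char *exts = (const char *) glGetString (GL_EXTENSIONS);

  return exts != NULL && strstr (exts, name) != NULL;
}
```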
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3429>
In GLES 2.0 the color-renderable formats that are required to be supported
do not include 8bpc RGB/RGBA formats. The extension adds this requirement
and makes GLES 2.0 actually usable. Not specifying a GL internal format in
Cogl had so far hidden the problem and let the implementation fall back
to RGB565, for example.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3429>
Also be more strict about what we consider RGBA1010102 support. Before
GLES 3.0, using ReadPixels on a framebuffer with format RGB10_A2 was
not possible with type GL_UNSIGNED_INT_2_10_10_10_REV_EXT, and thus our
code to read back pixels could fail.
Users of Cogl should check those feature flags before using FP16 and
RGBA1010102 pixel formats, via these CoglFeatureID values (see the sketch
after the list):
* COGL_FEATURE_ID_TEXTURE_RGBA1010102
* COGL_FEATURE_ID_TEXTURE_HALF_FLOAT
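A minimal usage sketch, assuming the usual cogl_has_feature() query (the
helper name and the chosen formats are illustrative, not prescriptive):
```c
/* Sketch: prefer a 10 bpc format only when the renderer advertises support,
 * otherwise fall back to an 8 bpc format. */
static CoglPixelFormat
pick_preferred_format (CoglContext *cogl_context)
{
  if (cogl_has_feature (cogl_context, COGL_FEATURE_ID_TEXTURE_RGBA1010102))
    return COGL_PIXEL_FORMAT_ABGR_2101010;

  return COGL_PIXEL_FORMAT_BGRA_8888_PRE;
}
```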
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3429>
If the EGL_KHR_no_config_context extension is supported, use it to
choose a format per onscreen which is compatible with the scanout CRTC
and the GL rendering API used.
Suggested by Jonas Ådahl.
v2:
* Drop code which checked for GLES3 renderability. It makes no sense for
various reasons, in particular that EGLConfigs are about EGLSurfaces,
whereas secondary GPU contexts use an FBO for blitting.
* Use error parameter directly for meta_renderer_native_choose_gbm_format
call (Jonas Ådahl)
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3139>
If the EGL_KHR_no_config_context extension is supported, pass
EGL_NO_CONFIG_KHR to eglCreateContext. This will allow binding the
context to surfaces created with different configs.
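A minimal sketch of the context creation this enables (attribute list and
error handling simplified; only valid when the extension was advertised):
```c
#include <EGL/egl.h>
#include <EGL/eglext.h>

static EGLContext
create_configless_context (EGLDisplay egl_display, EGLContext share_context)
{
  static const EGLint attribs[] = {
    EGL_CONTEXT_CLIENT_VERSION, 2,
    EGL_NONE
  };

  /* EGL_NO_CONFIG_KHR is defined by EGL_KHR_no_config_context. */
  return eglCreateContext (egl_display, EGL_NO_CONFIG_KHR,
                           share_context, attribs);
}
```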
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3139>
In profilers with timeline or flame graph views it is a very common
scenario that a span name must be displayed in an area too short to fit
it. In this case, profilers may implement automatic shortening to show
the most important part of the span name in the available area. This
makes it easier to tell what's going on without having to zoom all the
way in.
The current trace span names in Mutter don't really follow any system
and cannot really be shortened automatically.
The Tracy profiler shortens with C++ in mind. Consider an example C++
name:
SomeNamespace::SomeClass::some_method(args)
The method name is the most important part, and the arguments with the
class name will be cut if necessary in the order of importance.
This logic makes sense for other languages too, like Rust. I can see it
being implemented in other profilers like Sysprof, since it's generally
useful.
Hence, this commit adjusts our trace names to look like C++ and arranges
the parts of the name in this order of importance.
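For illustration only (the concrete span names here are hypothetical, not a
list of the renamed spans):
```c
static void
paint_stage (void)
{
  /* Before: COGL_TRACE_BEGIN_SCOPED (Paint, "Paint (stage)"); */

  /* After, following the C++-like convention described above: */
  COGL_TRACE_BEGIN_SCOPED (Paint, "ClutterStage::paint()");

  /* ... actual painting ... */
}
```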
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3402>
Scoped traces are less error prone, and they can still be ended
prematurely if needed (this commit makes that work). The only case this
doesn't support is starting a trace inside a scope but ending outside,
but this is pretty unusual, plus we have anchored traces for a limited
variation of that.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3396>
Group all three config files from clutter/cogl/meta into one,
and also remove unused configurations and replace duplicated ones.
This also fixes Cogl's usage of HAS_X11/HAS_XLIB to match the expected
build options.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3368>
Make CoglBuffer an abstract class and inherit the various Cogl*Buffer types from it.
As none of the subclasses overrides the vtable functions, they were not turned into
vfuncs but kept as plain function pointers in CoglBuffer.
We still use _cogl_buffer_initialize until we port the various params into actual
construct-only properties, similar to the previous commit for CoglTexture.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3193>
- Make Texture a parent GObject class and move the vtable funcs to vfuncs,
instead of using an interface, as we would like to have dispose free the TextureLoader.
- Make the various texture sub-types inherit from it.
- Make all the sub-type constructors return a CoglTexture instead of their respective
specific type, as most of the time the calling code accepts a CoglTexture anyway,
just like all the GTK widget constructors returning GtkWidget.
- Fix up the basics of gi-docgen for all these types.
- Remove CoglPrimitiveTexture as it is useless: it is just a texture under the hood.
- Remove CoglMetaTexture, for the exact same reason as above.
- Switch various memory management functions to use the g_ variants instead of the cogl_ ones.
Note we would still want to get rid of _cogl_texture_init, which is something
for the next commit.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3193>
These were leftovers from times when GL(ES) 1.x was still supported.
As we require HAVE_COGL_GL and/or HAVE_COGL_GLES2 to be defined, all
cases for checking both are now redundant.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3338>
These functions end up calling gdk-pixbuf for loading textures/bitmaps
from a file, and they don't seem to be used anywhere.
These changes are only useful together with the follow-up commit.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3097>
Every `mtk_x11_error_trap_push()` must be paired
with an `mtk_x11_error_trap_pop[_with_return]()` call
otherwise all future errors will be caught and ignored
even if they shouldn't be.
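A sketch of the expected pairing (the trapped request and the helper name are
just examples; the mtk API is assumed to take the Display and to return the X
error code, with 0 meaning success):
```c
static void
free_pixmap_checked (Display *xdisplay, Pixmap pixmap)
{
  mtk_x11_error_trap_push (xdisplay);
  XFreePixmap (xdisplay, pixmap);
  if (mtk_x11_error_trap_pop_with_return (xdisplay) != Success)
    g_warning ("XFreePixmap for 0x%lx failed", pixmap);
}
```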
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3328>
They are needed as "subformats" for higher bit-depth YCbCr formats, such as
P010, and we don't plan to use or expose them otherwise. Thus don't
implement any conversion or packing features.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3244>
Or else `glXQueryDrawable` will fail per the `GLX_EXT_buffer_age` spec:
> If querying GLX_BACK_BUFFER_AGE_EXT and <draw> is not bound to
> the calling thread's current context a GLXBadDrawable error is
> generated.
This mistake went unnoticed until `mtk_x11_error_trap_push` was introduced
(55e3b2e519) because for some reason it is incapable of trapping
`glXQueryDrawable`. Prior to that it seems
`cogl_onscreen_glx_get_buffer_age` would trap and so always returned zero.
This means we're reenabling clipped redraws on X11 here for the first
time in a long time.
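For reference, the ordering the spec requires looks roughly like this
(variable names simplified; error handling omitted):
```c
#include <GL/glx.h>

static unsigned int
query_back_buffer_age (Display    *xdisplay,
                       GLXWindow   glx_window,
                       GLXContext  glx_context)
{
  unsigned int age = 0;

  /* The drawable must be bound to the calling thread's current context
   * before its back buffer age may be queried. */
  glXMakeContextCurrent (xdisplay, glx_window, glx_window, glx_context);
  glXQueryDrawable (xdisplay, glx_window, GLX_BACK_BUFFER_AGE_EXT, &age);

  return age;
}
```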
Closes: https://gitlab.gnome.org/GNOME/mutter/-/issues/3007
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3255>
Dropped the obsolete Free Software Foundation postal address, pointing
to the FSF website instead, as suggested by
https://www.gnu.org/licenses/gpl-howto.html,
keeping intact the important part of the historical notice
as requested by the license.
This resolves the rpmlint-reported issue E: incorrect-fsf-address.
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3155>
In future commits, we will want to create DMA-BUFs with pixel
formats other than COGL_PIXEL_FORMAT_BGRX_8888. In preparation
for that, let's start passing a new pixel format parameter to
this function, and the corresponding winsys vfunc.
All callers of this function pass COGL_PIXEL_FORMAT_BGRX_8888
for now. Next commits will change that.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3175>
So we can properly handle matching DRM and WL_SHM formats in a unified
manner.
Add extensive testing between these and existing pre-multiplied alpha
formats, i.e. all formats we support on Wayland.
Note that, unfortunately, for some format combinations the value in the
alpha channel is not cleared as expected, likely because of fast paths
in Cogl. If both source and destination formats are opaque, however, it
always works; this thereby includes all cases where they are the same.
Co-Authored-By: Jonas Ådahl <jadahl@gmail.com>
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3065>
On GLES2, reading and writing some Cogl formats are not supported
natively. In those cases we use another format to do the reading and
writing. When the internal format and the temporary format differ in
premultiplication, Cogl tries to adjust for it.
Opaque Cogl formats don't have the premult bit set but our internal
format is a premult format. Cogl tries to adjust for it but completely
misses that the opaque format doesn't have an alpha channel and it
should not do so at all.
So skip the premult adjusting when the Cogl format has no alpha channel.
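Roughly, the intended condition (the bit-mask names are the Cogl ones; the
surrounding helper is just a sketch):
```c
static gboolean
needs_premult_conversion (CoglPixelFormat format,
                          CoglPixelFormat internal_format)
{
  /* Only convert premultiplication when the Cogl format actually has an
   * alpha channel; opaque formats must be left alone. */
  return (format & COGL_A_BIT) != 0 &&
         ((format ^ internal_format) & COGL_PREMULT_BIT) != 0;
}
```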
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3065>
As it is the only place where Cogl depends directly on cairo, apart from
the whole cairo_region_t.
The motivation behind the removal of this helper is to reduce the usage
of cairo in libmutter, so that it can potentially be dropped completely
in certain places or replaced with pixman.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3079>
glGetIntegerv() with GL_RED_BITS/GL_GREEN_BITS/GL_BLUE_BITS/etc. is not
supported with the GL core context, so there is no point in falling back
to that without supporting COGL_PRIVATE_FEATURE_QUERY_FRAMEBUFFER_BITS,
as this will cause a GL error.
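For reference, the two query paths look roughly like this (the boolean stands
in for COGL_PRIVATE_FEATURE_QUERY_FRAMEBUFFER_BITS; GL headers assumed):
```c
static GLint
query_red_bits (gboolean have_query_framebuffer_bits)
{
  GLint red_bits = 0;

  if (have_query_framebuffer_bits)
    /* GL_BACK_LEFT for the window-system framebuffer; FBOs query
     * GL_COLOR_ATTACHMENT0 instead. */
    glGetFramebufferAttachmentParameteriv (GL_FRAMEBUFFER, GL_BACK_LEFT,
                                           GL_FRAMEBUFFER_ATTACHMENT_RED_SIZE,
                                           &red_bits);
  else
    glGetIntegerv (GL_RED_BITS, &red_bits); /* errors out on core profiles */

  return red_bits;
}
```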
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3047>
The function ensure_bits_initialized() in cogl-gl-framebuffer-fbo.c
checks for COGL_PRIVATE_FEATURE_QUERY_FRAMEBUFFER_BITS whereas the same
in cogl-gl-framebuffer-back.c simply checks for the driver being
COGL_DRIVER_GL3.
Change the latter to use the COGL_PRIVATE_FEATURE_QUERY_FRAMEBUFFER_BITS
flag as well.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3047>
Cogl's feature COGL_PRIVATE_FEATURE_QUERY_FRAMEBUFFER_BITS is required
to use the GL_FRAMEBUFFER_ATTACHMENT_* queries.
Unfortunately, the test for the availability of the private feature is
actually inverted in ensure_bits_initialized() which causes that whole
portion of code to be ignored, falling back to the glGetIntegerv()
method which isn't supported in core profiles.
As Mesa has recently started to be more strict about these, this causes
the CI tests to fail in mutter.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3047>
Don't try to handle things by threads enabling/disabling the main trace
context on-demand, just have a clear start/stop API. For the D-Bus API,
it becomes more straightforward, and for the persistent variant too, as
it avoids having to pass garbage input when it's known that arguments
will be discarded.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2998>
The only consumer of this type of rect was the scissor clipping,
which was removed by the previous commit.
Remove window rects from CoglClipStack, and all dependent code.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3006>
Both Clutter and Cogl use g_return(_val)_if_fail() to safeguard
introspected API. Release builds were dropping these checks, which could
result in a much more crashy experience, especially when considering
extensions, but also due to bugs in the shell code itself.
This won't affect any major distro, because they all use "plain" builds.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2930>
This is an X request that may result in errors, so it is better
to have it covered by an error trap.
It is thus far not, explicitly at least, which means other, less
lenient error traps might not like what happens here. Make the
three-way error trap interplay between backend, X11 and Cogl happen
less by chance here.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2864>
This will later be emitted when a scanout fails, e.g. due to the non-test
commit failing for some reason, or drmModePageFlip() failing even if the
pre-conditions for scanout in the simple KMS backend passed.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2854>
Marking the depth/stencil buffers as discarded before swapping buffers for
the screen signals to the GPU that we don't need to keep them around for
the future.
This helps performance by reducing memory bandwidth usage in some GPUs
which may optimize to not write those buffers back to memory at all
after rendering, when they would just be cleared right after that
anyway.
It is not necessary to mark buffers as discarded after swapping buffers.
This should have no effect according to the spec (since that is going to
be followed by new rendering commands which make the buffer valid again)
and removing that has shown no impact in performance tests.
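A minimal sketch of the intended ordering (frame bookkeeping omitted; the
helper name is illustrative):
```c
static void
present_onscreen (CoglOnscreen *onscreen)
{
  /* Discard the ancillary buffers right before presenting; the color
   * buffer is the one being presented, so it is kept. */
  cogl_framebuffer_discard_buffers (COGL_FRAMEBUFFER (onscreen),
                                    COGL_BUFFER_BIT_DEPTH |
                                    COGL_BUFFER_BIT_STENCIL);

  /* ... then swap the onscreen's buffers as before. */
}
```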
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2091>
cogl_framebuffer_discard_buffers required that the color buffer be
passed, although users might want to discard e.g. just the depth and/or
stencil buffers.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2091>
When sysprof-4 and libsysprof-capture-4 are installed into different
prefixes, such as with Nix package manager, the D-Bus interfaces
are likely not discoverable from the latter package.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2572>
OpenGL requires more hand-holding in the driver regarding what pixel
memory layouts can be written when calling glReadPixels(), compared to
GLES2. Let's move the details of this logic to the corresponding
backends, so that in the future the GLES2 backend can be adapted to
handle more formats, without placing that logic in the generic layer.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2461>
COGL_PIXEL_FORMAT_ABGR_2101010 is defined to mean the 2 A bits are
placed in a 32 bit unsigned integer on the bits with highest
significance, followed by B on the following 10 bits, and so on, until R
on the 10 least significant bits.
UNSIGNED_INT_2_10_10_10_REV_EXT is defined to represent color channels
as
```
31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
-------------------------------------------------------------------------------------
| a | b | g | r |
-------------------------------------------------------------------------------------
```
As can be seen, this matches COGL_PIXEL_FORMAT_ABGR_2101010, meaning
that's the format we can directly read and write.
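To make the mapping concrete, unpacking one such 32-bit value yields the
ABGR_2101010 channels directly:
```c
#include <stdint.h>

static inline void
unpack_abgr_2101010 (uint32_t pixel,
                     uint32_t *a, uint32_t *b, uint32_t *g, uint32_t *r)
{
  *a = (pixel >> 30) & 0x3;   /* 2 most significant bits   */
  *b = (pixel >> 20) & 0x3ff; /* next 10 bits              */
  *g = (pixel >> 10) & 0x3ff;
  *r = pixel & 0x3ff;         /* 10 least significant bits */
}
```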
In Cogl, when finding the GL formats, we get the tuple with the GL
format given the CoglPixelFormat we pass, but we also get back a
"required format" (CoglPixelFormat). This required format represents the
format that is required when reading actual pixels from GLES. In GLES,
the above mentioned format is the only one supported by the
EXT_texture_type_2_10_10_10_REV extension, thus for other types we need
to do the CPU-side conversion ourselves. To achieve this, correctly
return COGL_PIXEL_FORMAT_ABGR_2101010 as the required format.
The internal format should also be GL_RGB10_A2, not GL_RGBA.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2461>
Unused since https://gitlab.gnome.org/GNOME/mutter/merge_requests/932
AFAICT.
As a bonus, gets rid of these compiler warnings:
../cogl/cogl/cogl-pipeline-layer-state.c: In function ‘cogl_pipeline_get_layer_min_filter’:
../cogl/cogl/cogl-pipeline-layer-state.c:1279:10: warning: ‘min_filter’ may be used uninitialized [-Wmaybe-uninitialized]
1279 | return min_filter;
| ^~~~~~~~~~
../cogl/cogl/cogl-pipeline-layer-state.c:1274:22: note: ‘min_filter’ was declared here
1274 | CoglPipelineFilter min_filter;
| ^~~~~~~~~~
../cogl/cogl/cogl-pipeline-layer-state.c: In function ‘cogl_pipeline_get_layer_mag_filter’:
../cogl/cogl/cogl-pipeline-layer-state.c:1291:10: warning: ‘mag_filter’ may be used uninitialized [-Wmaybe-uninitialized]
1291 | return mag_filter;
| ^~~~~~~~~~
../cogl/cogl/cogl-pipeline-layer-state.c:1287:22: note: ‘mag_filter’ was declared here
1287 | CoglPipelineFilter mag_filter;
| ^~~~~~~~~~
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2757>
Both the fragend and vertend shader state was called
"CoglPipelineShaderState", which was rather annoying, especially when
the type needs to be exposed outside of the .c file as part of moving
out unit tests. Make the types unique. This also avoids confusing what
type one is looking at.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2555>
It consists of only a macro and build description logic.
Adds a macro for simpler tests that don't require a context; unit
tests requiring a context should use the same framework as conform
tests.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2555>
All working tests have already migrated to the test suite using mutter;
move the old unported tests over too, and remove the conform test
framework, as it is no longer used.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2555>
They're just typedefs to CoglObject and CoglObjectClass, respectively,
and the latter is unused already. The former is used in a couple of
deprecated headers, which can easily be changed.
Change all CoglHandleObject instances to CoglObject, and drop the types
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2355>
There is no need to make CoglJournal a CoglObject, nor any kind of
object for that matter, since it doesn't require refcounting at all.
CoglJournal is entirely private to, and managed by, CoglFramebuffer,
and it only needs to create and destroy it.
Make CoglJournal a free-form struct, and adjust CoglFramebuffer to
call _cogl_journal_free() instead of cogl_object_unref().
Related: https://gitlab.gnome.org/GNOME/mutter/-/issues/2087
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2355>
DMA buffers might be allocatable, but that doesn't mean the driver won't
fail when we try to allocate a buffer with an implicit modifier. With
the proprietary NVIDIA driver, for example, it will fail. Let's catch
this up front and avoid advertising DMA buffer support when we know it
won't work.
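A sketch of such an up-front probe (buffer size, format and use flags here are
arbitrary; the real check lives in the renderer code):
```c
#include <gbm.h>
#include <glib.h>

static gboolean
can_allocate_dma_buf (struct gbm_device *gbm_device)
{
  struct gbm_bo *bo;

  /* Implicit modifiers: plain gbm_bo_create, no modifier list. */
  bo = gbm_bo_create (gbm_device, 1, 1, GBM_FORMAT_XRGB8888,
                      GBM_BO_USE_RENDERING | GBM_BO_USE_LINEAR);
  if (!bo)
    return FALSE;

  gbm_bo_destroy (bo);
  return TRUE;
}
```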
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2383>
Previously, we would only check for EXT_swap_buffers_with_damage which
generally will find an implementation. However, some EGL implementations
do not appear to support that naming of the extension, preferring to
only advertise KHR_swap_buffers_with_damage.
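Roughly, the detection becomes the following (sketch; the actual entry point
is still resolved separately):
```c
#include <string.h>
#include <EGL/egl.h>
#include <glib.h>

static gboolean
has_swap_buffers_with_damage (EGLDisplay egl_display)
{
  const char *exts = eglQueryString (egl_display, EGL_EXTENSIONS);

  return exts != NULL &&
         (strstr (exts, "EGL_KHR_swap_buffers_with_damage") != NULL ||
          strstr (exts, "EGL_EXT_swap_buffers_with_damage") != NULL);
}
```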
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2316>
Returns TRUE if the active renderer backend can allocate DMA buffers.
This is the case for hardware-accelerated GBM backends, but FALSE for
surfaceless (i.e. no render node) and EGLDevice (legacy NVIDIA paths).
While software-based GBM devices can allocate DMA buffers, we don't want
to allocate them for offscreen rendering, as we really only use these
for inter-process transfers; and since buffers allocated for scanout
don't use the relevant API, making it return FALSE for these solves
that.
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1939>
Because both code paths require the existence of `GL_TIMESTAMP[_EXT]`
which is only guaranteed if `ARB_timer_query` (included in GL core 3.3)
is implemented.
We know when that is true because `context->glGenQueries` and
`context->glQueryCounter` are non-NULL. So that is the minimum
requirement for any use of `GL_TIMESTAMP`, even when it is used in
`glGetInteger64v`.
Until now, Raspberry Pi (OpenGL 2.1) would find a working implementation
of `glGetInteger64v` but failed to check whether the driver understands
`GL_TIMESTAMP` (it doesn't).
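In other words, the gate looks roughly like this (a sketch: `ctx` stands in
for the in-tree CoglContext, whose function-pointer fields mirror the entry
points mentioned above):
```c
static int64_t
query_gpu_timestamp (CoglContext *ctx)
{
  int64_t gpu_time_ns = 0;

  /* glGenQueries/glQueryCounter being resolved implies ARB_timer_query
   * (or its EXT equivalent), and therefore that GL_TIMESTAMP is understood. */
  if (ctx->glGenQueries && ctx->glQueryCounter && ctx->glGetInteger64v)
    ctx->glGetInteger64v (GL_TIMESTAMP, &gpu_time_ns);

  return gpu_time_ns;
}
```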
Fixes: https://gitlab.gnome.org/GNOME/mutter/-/issues/2107
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2253>
When Cogl gained support for importing pixmaps, I think there was a
misunderstanding that there is a difference in how it works in GLX and
EGL where GLX needs to rebind the pixmap in order to guarantee that
changes are reflected in the texture after it detects damage, whereas
with EGL it doesn’t. The GLX spec makes it pretty clear that it does
need to rebind whereas the EGL spec is a bit harder to follow. As a
fallout from Mesa MR 12869, it seems like the compositor really does
need to rebind the image to comply with the spec. Notably, in
OES_EGL_image_external there is:
"Binding (or re-binding if already bound) an external texture by calling
BindTexture after all modifications are complete guarantees that
sampling done in future draw calls will return values corresponding to
the values in the buffer at or after the time that BindTexture is
called."
So this commit changes the x11_damage_notify handler for EGL to lazily
queue a rebind like GLX does. The code that binds the image while
allocating the texture has been moved into a reusable helper function.
It seems like there is a bit of a layering violation when accessing the
GL driver internals from the EGL winsys code, but I noticed that the GLX
code also includes the driver GL headers and otherwise it seems pretty
tricky to do properly.
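The shape of the change is roughly the following (names and state handling are
hypothetical; in Cogl the actual bind goes through the reusable helper
mentioned above):
```c
static gboolean pixmap_needs_rebind;

static void
on_x11_damage_notify (void)
{
  /* Only queue the rebind; doing it lazily avoids redundant binds. */
  pixmap_needs_rebind = TRUE;
}

static void
ensure_pixmap_image_bound (GLuint gl_texture, EGLImageKHR image)
{
  if (!pixmap_needs_rebind)
    return;

  /* Re-binding after modifications is what guarantees that future
   * sampling sees the updated pixmap contents. */
  glBindTexture (GL_TEXTURE_2D, gl_texture);
  glEGLImageTargetTexture2DOES (GL_TEXTURE_2D, image);
  pixmap_needs_rebind = FALSE;
}
```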
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2062>
Because POSIX sh was, with hindsight, not a particularly well-designed
programming language, if we don't 'set -e', then we'll respond to failure
of a setup command such as cd by carrying on regardless.
Signed-off-by: Simon McVittie <smcv@debian.org>
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2009>
The assumption here seems to be that it's an overlay onto the
current environment which would make sense; but the implementation in
gnome-desktop-testing currently removes all other environment variables
(see GNOME/gnome-desktop-testing#1). This causes test failure when the
tests are run in Debian's autopkgtest framework, possibly because PATH
is cleared.
Signed-off-by: Simon McVittie <smcv@debian.org>
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2009>
Add support for a cogl function to set the damage_region on an onscreen
framebuffer.
The goal of this is to enable using the EGL_KHR_partial_update extension
which can potentially reduce memory bandwidth usage by some GPUs,
particularly on embedded GPUs that operate on a tiling rendering model.
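On the EGL side this maps to the extension's entry point, roughly as sketched
below (in practice the function pointer is resolved via eglGetProcAddress and
the region is set before rendering the frame):
```c
#include <EGL/egl.h>
#include <EGL/eglext.h>

static void
set_damage_region (EGLDisplay egl_display, EGLSurface egl_surface,
                   int x, int y, int width, int height)
{
  EGLint rect[4] = { x, y, width, height };

  /* Must be called before any drawing for the current frame. */
  eglSetDamageRegionKHR (egl_display, egl_surface, rect, 1);
}
```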
Part-of: <https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2023>