mutter-performance-source/cogl/tests
Commit ef85d1a643 by Jonas Ådahl: Add meson build support
This commit adds meson build support to mutter. It takes a step away
from the three separate code bases with three different autotools setups
toward a single meson build system. There are still places that can be
unified better, for example by removing various "config.h" style files
from cogl and clutter, centralizing debug C flags and other configurable
macros, and similar artifacts that are there only because they were once
separate code bases.

There are some differences between the autotools setup and the new
meson setup. Here are a few:

The meson setup doesn't generate wrapper scripts for the various cogl and
clutter test cases. These were more or less tiny scripts that called a
test executable with a test name as the argument. To run a particular
test, just run the test executable with the name of the test as the
argument.
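For example, a minimal sketch (the build directory, executable path and
test name below are illustrative assumptions, not taken from the tree):

  # path and test name are hypothetical; adjust to your own build
  $ ./build/cogl/tests/conform/test-conformance test_blend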

The meson setup doesn't install test files anymore. The autotools test
suite was designed to work with installed tests, but it no longer really
did; with meson, nothing is installed at all. Instead, the test suite is
made to run against the uninstalled input files, binaries and libraries.
Installable tests may come later.

Tests from cogl, clutter and mutter are run on 'meson test'. In
autotools, only cogl and clutter tests were run on 'make check'.
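As a rough sketch, assuming a meson build directory named build/ (the
directory name is an assumption), the whole suite or a single named test
can be run with:

  # 'build' is whichever directory you configured with meson
  $ meson test -C build
  $ meson test -C build <test-name>
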
2018-11-06 18:51:44 +01:00
Directory contents (name, last commit, date):

  conform/          Add meson build support                                 2018-11-06 18:51:44 +01:00
  data/             move everything into a cogl/ directory                  2016-04-22 16:44:31 +02:00
  micro-perf/       Add meson build support                                 2018-11-06 18:51:44 +01:00
  unit/             Add meson build support                                 2018-11-06 18:51:44 +01:00
  config.env.in     cogl: Remove support for GLESv1                         2018-11-06 17:17:36 +01:00
  Makefile.am       build: Namespace installed tests of private libraries   2016-04-29 14:49:03 +02:00
  meson.build       Add meson build support                                 2018-11-06 18:51:44 +01:00
  README            move everything into a cogl/ directory                  2016-04-22 16:44:31 +02:00
  run-tests.sh      cogl: Pass unit-tests file to run-tests.sh               2018-11-06 17:17:36 +01:00
  test-launcher.sh  move everything into a cogl/ directory                  2016-04-22 16:44:31 +02:00

Outline of test categories:

The conform/ tests:
-------------------
These tests should be non-interactive unit-tests that verify a single
feature is behaving as documented. See conform/ADDING_NEW_TESTS for more
details.

Although it may seem a bit awkward, all the tests are built into a
single binary because this makes building the tests *much* faster by
avoiding lots of linking.

A wrapper script is generated for each test though, so running individual
tests should be convenient enough. Running a wrapper script will also print,
for convenience, how you could run that test under gdb or valgrind, for
example:

  NOTE: For debugging purposes, you can run this single test as follows:
  $ libtool --mode=execute \
            gdb --eval-command="b test_cogl_depth_test" \
            --args ./test-conformance -p /conform/cogl/test_cogl_depth_test
  or:
  $ env G_SLICE=always-malloc \
    libtool --mode=execute \
            valgrind ./test-conformance -p /conform/cogl/test_cogl_depth_test

By default the conformance tests are run offscreen. This makes the tests run
much faster, and it also keeps them from interfering with other work you may
want to do by constantly stealing focus. CoglOnscreen framebuffers obviously
don't get tested this way, so it's important that the tests also get run
onscreen every once in a while, especially if changes are being made to
CoglFramebuffer related code. Onscreen testing can be enabled by setting
COGL_TEST_ONSCREEN=1 in your environment.
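For example, reusing the invocation shown above, a single conformance test
can be run onscreen like this:

  $ COGL_TEST_ONSCREEN=1 ./test-conformance -p /conform/cogl/test_cogl_depth_test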

The micro-bench/ tests:
-----------------------
These should be focused performance tests, ideally testing a
single metric. Please never forget that these tests are synthetic; if you
use them, make sure you understand what metric is being tested. They probably
don't reflect any real world application loads, and the intention is that you
use these tests once you have already determined the crux of your problem and
need focused feedback that your changes are indeed improving matters. There
are no exit status requirements for these tests, but they should give clear
feedback as to their performance. If the framerate is the feedback metric,
then the test should forcibly enable FPS debugging.
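As an illustration (the build directory and benchmark name are assumptions;
substitute whichever micro benchmark you are interested in), a benchmark is
run directly and its printed output is the feedback:

  # hypothetical path; run whichever benchmark binary your build produced
  $ ./build/cogl/tests/micro-perf/test-journal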

The data/ directory:
--------------------
This contains optional data (like images) that can be referenced by a test.


Misc notes:
-----------
• All tests should ideally include a detailed description in the source
explaining exactly what the test is for, how the test was designed to work,
and possibly a rationale for the approach taken for testing.

• When running tests under Valgrind, you should follow the instructions
available here:

        http://live.gnome.org/Valgrind

and also use the suppression file available inside the data/ directory.
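For example (the suppression file name below is a hypothetical placeholder;
point valgrind at whichever .supp file the data/ directory actually contains):

  # "cogl.supp" is a made-up name; use the real suppression file from data/
  $ env G_SLICE=always-malloc \
    valgrind --suppressions=./data/cogl.supp \
             ./test-conformance -p /conform/cogl/test_cogl_depth_test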