
Compare commits


29 commits

Author SHA1 Message Date
Daniel van Vugt
5488009f59
clutter/frame-clock: Optimize latency for platforms missing TIMESTAMP_QUERY
Previously if we had no measurements then `compute_max_render_time_us`
would pessimise its answer to ensure triple buffering could be reached:
```
if (frame_clock->state == CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE)
  ret += refresh_interval_us;
```
But that also meant entering triple buffering even when not required.

Now we make `compute_max_render_time_us` more honest and return failure
if the answer isn't known (or is disabled). This in turn allows us to
optimize `calculate_next_update_time_us` for this special case, ensuring
triple buffering can be used, but isn't blindly always used.

This makes a visible difference to the latency when dragging windows in
Xorg, but will also help Wayland sessions on platforms lacking
TIMESTAMP_QUERY such as Raspberry Pi.

Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:18 +09:00
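The fallback behaviour described above can be reduced to a small sketch (hypothetical, simplified signatures; the real code lives in the frame clock and also consults paint debug flags):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SYNC_DELAY_FALLBACK_FRACTION 0.875f

/* Hypothetical reduction of the change: the estimator reports failure
 * instead of padding its answer, and the caller applies the fallback
 * (targeting full frame rate) only when no measurements exist. */
static bool
compute_max_render_time_us (bool     ever_got_measurements,
                            int64_t  measured_max_us,
                            int64_t *max_render_time_us)
{
  if (!ever_got_measurements)
    return false;  /* previously: returned a pessimized guess instead */

  *max_render_time_us = measured_max_us;
  return true;
}

static int64_t
max_render_time_allowed_us (bool    ever_got_measurements,
                            int64_t measured_max_us,
                            int64_t refresh_interval_us)
{
  int64_t t;

  if (compute_max_render_time_us (ever_got_measurements, measured_max_us, &t))
    return t;

  /* No data known: fall back to the sync-delay fraction rather than
   * unconditionally padding for triple buffering. */
  return (int64_t) (refresh_interval_us * SYNC_DELAY_FALLBACK_FRACTION);
}
```

With measurements the honest answer is used directly; without them the caller targets full frame rate, so triple buffering is only entered when presentation actually falls behind.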
Daniel van Vugt
6e7297e764
clutter/frame-clock: Record measurements of zero for cursor-only updates
But only if we've ever got actual swap measurements
(COGL_FEATURE_ID_TIMESTAMP_QUERY). If it's supported then we now drop to
double buffering and get optimal latency on a burst of cursor-only
updates.

Closes: https://launchpad.net/bugs/2023363
Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:18 +09:00
Daniel van Vugt
e3b2344420
onscreen/native: Avoid callbacks on "detached" onscreens
Detached onscreens have no valid view so avoid servicing callbacks on
them during/after sleep mode. As previously mentioned in 45bda2d969.

Closes: https://launchpad.net/bugs/2020049
Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:18 +09:00
Daniel van Vugt
b935844a4c
tests/native-kms-render: Fix failing client-scanout test
It was assuming an immediate transition from compositing (triple
buffering) to direct scanout (double buffering), whereas there is
a one frame delay in that transition as the buffer queue shrinks.
We don't lose any frames in the transition.

Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:18 +09:00
Daniel van Vugt
d345de78c2
clutter/frame-clock: Conditionally disable triple buffering
1. When direct scanout is attempted

There's no compositing during direct scanout so the "render" time is zero.
Thus there is no need to implement triple buffering for direct scanouts.
Stick to double buffering and enjoy the lower latency.

2. If disabled by environment variable MUTTER_DEBUG_TRIPLE_BUFFERING

With possible values {never, auto, always} where auto is the default.

3. When VRR is in use

VRR calls `clutter_frame_clock_schedule_update_now`, which would keep
the buffer queue full and in turn prevented direct scanout mode,
because OnscreenNative currently only supports direct scanout with
double buffering.

We now break that feedback loop by preventing triple buffering from
being scheduled when the frame clock mode becomes variable. Long term
this could also be solved by supporting triple buffering in direct
scanout mode. But whether or not that would be desirable given the
latency penalty remains to be seen.

Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:18 +09:00
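Condition 2 above is realised by the `want_triple_buffering` helper added later in this series. A simplified, self-contained version of that decision (hypothetical flattened parameters in place of the real `ClutterFrameClock` fields):

```c
#include <assert.h>
#include <stdbool.h>

typedef enum
{
  TRIPLE_BUFFERING_MODE_NEVER,
  TRIPLE_BUFFERING_MODE_AUTO,
  TRIPLE_BUFFERING_MODE_ALWAYS,
} TripleBufferingMode;

/* Condensed decision table: in "auto" mode, triple buffering is used
 * only for fixed-rate clocks (not VRR) whose last flip did not attempt
 * direct scanout. */
static bool
want_triple_buffering (TripleBufferingMode mode,
                       bool                clock_mode_is_fixed,
                       bool                scanout_attempted)
{
  switch (mode)
    {
    case TRIPLE_BUFFERING_MODE_NEVER:
      return false;
    case TRIPLE_BUFFERING_MODE_AUTO:
      return clock_mode_is_fixed && !scanout_attempted;
    case TRIPLE_BUFFERING_MODE_ALWAYS:
      return true;
    }
  return false;
}
```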
Daniel van Vugt
6523517350
clutter: Pass ClutterFrameHint(s) to the frame clock
Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:18 +09:00
Daniel van Vugt
f366a7d931
backends: Flag that the frame attempted direct scanout
We need this hint whether direct scanout succeeds or fails because it's
the mechanism by which we will tell the clock to enforce double buffering,
thus making direct scanout possible on future frames. Triple buffering
will be disabled until such time that direct scanout is not being attempted.

Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:17 +09:00
Daniel van Vugt
fcea00f63a
clutter/frame: Add ClutterFrameHint to ClutterFrame
This will allow the backend to provide performance hints to the frame
clock in future.

Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:17 +09:00
Daniel van Vugt
a1e6d2242b
clutter/frame-clock: Log N-buffers in CLUTTER_DEBUG=frame-timings
Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:17 +09:00
Daniel van Vugt
0b2e48db6f
clutter/frame-clock: Add triple buffering support
Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:17 +09:00
Daniel van Vugt
2ae303bb95
clutter/frame-clock: Merge states DISPATCHING and PENDING_PRESENTED
Chronologically they already overlap, as presentation may complete in
the middle of the dispatch function; otherwise they are contiguous in
time. And most switch statements treated the two states the same
already, so they're easy to merge into a single `DISPATCHED` state.

Having fewer states now will make life easier when we add more states
later.

Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:17 +09:00
Daniel van Vugt
a4ac229578
clutter/frame-clock: Lower the threshold for disabling error diffusion
Error diffusion was introduced in 0555a5bbc1 for Nvidia where last
presentation time is always unknown (zero). Dispatch times would drift
apart always being a fraction of a frame late, and accumulated to cause
periodic frame skips. So error diffusion corrected that precisely and
avoided the skips.

That works great with double buffering but less great with triple
buffering. It's certainly still needed with triple buffering but
correcting for a lateness of many milliseconds isn't a good idea. That's
because a dispatch being that late is not due to main loop jitter but due
to Nvidia's swap buffers blocking when the queue is full. So scheduling
the next frame even earlier using last_dispatch_lateness_us would just
perpetuate the problem of swap buffers blocking for too long.

So now we lower the threshold at which error diffusion gets disabled. It's
still high enough to fix the original smoothness problem it was introduced
for, but now low enough to detect Nvidia's occasionally blocking swaps and
back off in that case.

Since the average duration of a blocking swap is half a frame interval
and we want to distinguish between that and sub-millisecond jitter, the
logical threshold is halfway again: refresh_interval_us/4.

Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:17 +09:00
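A minimal sketch of the lowered cutoff (hypothetical helper name; the real code compares the clock's recorded dispatch lateness internally):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Lateness below a quarter frame is treated as main-loop jitter and
 * diffused into the next deadline; anything later looks like a blocking
 * swap (averaging half a frame on Nvidia when the queue is full), so
 * error diffusion backs off rather than scheduling ever earlier. */
static bool
should_diffuse_lateness (int64_t lateness_us,
                         int64_t refresh_interval_us)
{
  return lateness_us < refresh_interval_us / 4;
}
```

At 60 Hz (refresh interval ≈ 16667 µs) the cutoff lands near 4.2 ms: halfway between sub-millisecond jitter and a half-frame blocking swap.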
Daniel van Vugt
a0248cb618
renderer/native: Discard pending swaps when rebuilding views
It's analogous to discard_pending_page_flips but represents swaps that
might become flips after the next frame notification callbacks, thanks
to triple buffering. Since the views are being rebuilt and their onscreens
are about to be destroyed, turning those swaps into more flips/posts would
just lead to unexpected behaviour (like trying to flip on a half-destroyed
inactive CRTC).

Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:17 +09:00
Daniel van Vugt
bc1ec8e24e
onscreen/native: Skip try_post_latest_swap if shutting down
Otherwise we could get:

  meta_kms_prepare_shutdown ->
  flush_callbacks ->
  ... ->
  try_post_latest_swap ->
  post and queue more callbacks

So later in shutdown those callbacks would trigger an assertion failure
in meta_kms_impl_device_atomic_finalize:

  g_hash_table_size (impl_device_atomic->page_flip_datas) == 0

Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:17 +09:00
Daniel van Vugt
501a5cc512
onscreen/native: Add function meta_onscreen_native_discard_pending_swaps
Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:17 +09:00
Daniel van Vugt
66dd0826a8
onscreen/native: Increase secondary GPU dumb_fbs from 2 to 3
So that they don't get overwritten prematurely during triple buffering,
causing tearing.

https://launchpad.net/bugs/1999216
Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:16 +09:00
Daniel van Vugt
febb9a4261
onscreen/native: Defer posting if there's already a post in progress
And when the number of pending posts decreases we know it's safe to submit
a new one. Since KMS generally only supports one outstanding post right now,
"decreases" means equal to zero.

Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:16 +09:00
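The deferral rule can be modelled in a few lines (a toy model with hypothetical names, not the actual MetaOnscreenNative code):

```c
#include <assert.h>
#include <stdbool.h>

/* With KMS allowing only one outstanding post, a finished swap either
 * posts immediately or is parked until the pending count drops to zero. */
typedef struct
{
  int  pending_posts;
  bool deferred_swap;
} Onscreen;

static void
swap_complete (Onscreen *onscreen)
{
  if (onscreen->pending_posts > 0)
    onscreen->deferred_swap = true;   /* a post is in flight: defer */
  else
    onscreen->pending_posts++;        /* nothing outstanding: post now */
}

static void
post_finished (Onscreen *onscreen)
{
  onscreen->pending_posts--;
  if (onscreen->pending_posts == 0 && onscreen->deferred_swap)
    {
      onscreen->deferred_swap = false;
      onscreen->pending_posts++;      /* count hit zero: submit the parked post */
    }
}
```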
Daniel van Vugt
167b013b99
onscreen/native: Insert a 'posted' frame between 'next' and 'presented'
This will allow us to keep track of up to two buffers that have been
swapped but not yet scanning out, for triple buffering.

This commit replaces mutter!1968

Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:16 +09:00
Daniel van Vugt
044997b8cc
onscreen/native: Split swap_buffers_with_damage into two functions
1. The EGL part: meta_onscreen_native_swap_buffers_with_damage
2. The KMS part: post_latest_swap

Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:16 +09:00
Daniel van Vugt
7edfbcceb7
onscreen/native: Deduplicate calls to clutter_frame_set_result
All paths out of `meta_onscreen_native_swap_buffers_with_damage` from
here onward would set the same `CLUTTER_FRAME_RESULT_PENDING_PRESENTED`
(or terminate with `g_assert_not_reached`).

Even failed posts set this result because they will do a
`meta_onscreen_native_notify_frame_complete` in
`page_flip_feedback_discarded`.

Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:16 +09:00
Daniel van Vugt
3205e666fe
onscreen/native: Replace an assertion that double buffering is the maximum
Because it soon won't be the maximum. But we do want to verify that the
frame info queue is not empty, to avoid NULL dereferencing and catch logic
errors.

Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:16 +09:00
Daniel van Vugt
fbfaeb56a6
onscreen/native: Log swapbuffers and N-buffering when MUTTER_DEBUG=kms
Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:16 +09:00
Daniel van Vugt
5dc8b2f73a
backends/native: Add set/get_damage functions to MetaFrameNative
Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:16 +09:00
Daniel van Vugt
d4e9b1f8d5
renderer/native: Steal the power save flip list before iterating over it
Because a single iteration might also grow the list again.

Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:15 +09:00
Daniel van Vugt
fbac742306
renderer/native: Avoid requeuing the same onscreen for a power save flip
This is a case that triple buffering will encounter. We don't want it
to queue the same onscreen multiple times because that would represent
multiple flips occurring simultaneously.

It's a linear search but the list length is typically only 1 or 2 so
no need for anything fancier yet.

Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:15 +09:00
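A minimal version of that guard (hypothetical fixed-size array in place of the real list type):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_FLIPS 8

typedef struct
{
  const void *items[MAX_FLIPS];
  size_t      len;
} FlipList;

/* Linear scan before append: fine for a list that normally holds only
 * one or two onscreens, and prevents one onscreen from representing
 * multiple simultaneous flips. */
static void
queue_power_save_flip (FlipList   *list,
                       const void *onscreen)
{
  for (size_t i = 0; i < list->len; i++)
    {
      if (list->items[i] == onscreen)
        return;  /* already queued */
    }
  list->items[list->len++] = onscreen;
}
```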
Daniel van Vugt
bd521be148
kms: Keep a shutting_down flag
Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:15 +09:00
Daniel van Vugt
8ed6470b31
cogl/onscreen: Indent declaration parameters to align with above
This fixes warnings from check-code-style.

Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:15 +09:00
Daniel van Vugt
e02a8e15b1
cogl/onscreen: Add function cogl_onscreen_get_pending_frame_count
Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:31:15 +09:00
Gert-dev
4726186224
onscreen/native: Use EGLSyncs instead of cogl_framebuffer_finish
cogl_framebuffer_finish can result in a CPU-side stall because it waits for
the primary GPU to flush and execute all commands that were queued before
that. By using a GPU-side EGLSync we can let the primary GPU inform us when
it is done with the queued commands instead. We then create another EGLSync
on the secondary GPU using the same fd, so the primary GPU effectively
signals the secondary GPU when it is done rendering. This causes the
latter to wait for the former before copying the parts of the frames it
needs for the monitors attached directly to it.

This solves the corruption that cogl_framebuffer_finish also solved, but
without needing a CPU-side stall.

Signed-off-by: Mingi Sung <sungmg@saltyming.net>
2024-09-15 14:30:55 +09:00
19 changed files with 1000 additions and 165 deletions


@@ -42,6 +42,15 @@ enum
 static guint signals[N_SIGNALS];
 
+typedef enum
+{
+  TRIPLE_BUFFERING_MODE_NEVER,
+  TRIPLE_BUFFERING_MODE_AUTO,
+  TRIPLE_BUFFERING_MODE_ALWAYS,
+} TripleBufferingMode;
+
+static TripleBufferingMode triple_buffering_mode = TRIPLE_BUFFERING_MODE_AUTO;
+
 #define SYNC_DELAY_FALLBACK_FRACTION 0.875f
 #define MINIMUM_REFRESH_RATE 30.f
@@ -70,8 +79,10 @@ typedef enum _ClutterFrameClockState
   CLUTTER_FRAME_CLOCK_STATE_IDLE,
   CLUTTER_FRAME_CLOCK_STATE_SCHEDULED,
   CLUTTER_FRAME_CLOCK_STATE_SCHEDULED_NOW,
-  CLUTTER_FRAME_CLOCK_STATE_DISPATCHING,
-  CLUTTER_FRAME_CLOCK_STATE_PENDING_PRESENTED,
+  CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE,
+  CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED,
+  CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED_NOW,
+  CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_TWO,
 } ClutterFrameClockState;
 
 struct _ClutterFrameClock
@@ -92,6 +103,7 @@ struct _ClutterFrameClock
   ClutterFrameClockMode mode;
 
   int64_t last_dispatch_time_us;
+  int64_t prev_last_dispatch_time_us;
   int64_t last_dispatch_lateness_us;
   int64_t last_presentation_time_us;
   int64_t next_update_time_us;
@@ -113,6 +125,9 @@ struct _ClutterFrameClock
   int64_t vblank_duration_us;
   /* Last KMS buffer submission time. */
   int64_t last_flip_time_us;
+  int64_t prev_last_flip_time_us;
+
+  ClutterFrameHint last_flip_hints;
 
   /* Last time we promoted short-term maximum to long-term one */
   int64_t longterm_promotion_us;
@@ -249,10 +264,6 @@ static void
 maybe_update_longterm_max_duration_us (ClutterFrameClock *frame_clock,
                                        ClutterFrameInfo  *frame_info)
 {
-  /* Do not update long-term max if there has been no measurement */
-  if (!frame_clock->shortterm_max_update_duration_us)
-    return;
-
   if ((frame_info->presentation_time - frame_clock->longterm_promotion_us) <
       G_USEC_PER_SEC)
     return;
@@ -279,6 +290,12 @@ void
 clutter_frame_clock_notify_presented (ClutterFrameClock *frame_clock,
                                       ClutterFrameInfo  *frame_info)
 {
+#ifdef CLUTTER_ENABLE_DEBUG
+  const char *debug_state =
+    frame_clock->state == CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_TWO ?
+    "Triple buffering" : "Double buffering";
+#endif
+
   COGL_TRACE_BEGIN_SCOPED (ClutterFrameClockNotifyPresented,
                            "Clutter::FrameClock::presented()");
   COGL_TRACE_DESCRIBE (ClutterFrameClockNotifyPresented,
@@ -368,22 +385,54 @@ clutter_frame_clock_notify_presented (ClutterFrameClock *frame_clock,
 
   frame_clock->got_measurements_last_frame = FALSE;
 
-  if (frame_info->cpu_time_before_buffer_swap_us != 0 &&
-      frame_info->has_valid_gpu_rendering_duration)
+  if ((frame_info->cpu_time_before_buffer_swap_us != 0 &&
+       frame_info->has_valid_gpu_rendering_duration) ||
+      frame_clock->ever_got_measurements)
     {
       int64_t dispatch_to_swap_us, swap_to_rendering_done_us, swap_to_flip_us;
+      int64_t dispatch_time_us = 0, flip_time_us = 0;
 
-      dispatch_to_swap_us =
-        frame_info->cpu_time_before_buffer_swap_us -
-        frame_clock->last_dispatch_time_us;
+      switch (frame_clock->state)
+        {
+        case CLUTTER_FRAME_CLOCK_STATE_INIT:
+        case CLUTTER_FRAME_CLOCK_STATE_IDLE:
+        case CLUTTER_FRAME_CLOCK_STATE_SCHEDULED:
+        case CLUTTER_FRAME_CLOCK_STATE_SCHEDULED_NOW:
+          g_warn_if_reached ();
+          G_GNUC_FALLTHROUGH;
+        case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE:
+        case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED:
+        case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED_NOW:
+          dispatch_time_us = frame_clock->last_dispatch_time_us;
+          flip_time_us = frame_clock->last_flip_time_us;
+          break;
+        case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_TWO:
+          dispatch_time_us = frame_clock->prev_last_dispatch_time_us;
+          flip_time_us = frame_clock->prev_last_flip_time_us;
+          break;
+        }
+
+      if (frame_info->cpu_time_before_buffer_swap_us == 0)
+        {
+          /* User thread cursor-only updates with no "swap": we do know
+           * the combined time from dispatch to flip at least.
+           */
+          dispatch_to_swap_us = 0;
+          swap_to_flip_us = flip_time_us - dispatch_time_us;
+        }
+      else
+        {
+          dispatch_to_swap_us = frame_info->cpu_time_before_buffer_swap_us -
+                                dispatch_time_us;
+          swap_to_flip_us = flip_time_us -
+                            frame_info->cpu_time_before_buffer_swap_us;
+        }
 
       swap_to_rendering_done_us =
         frame_info->gpu_rendering_duration_ns / 1000;
 
-      swap_to_flip_us =
-        frame_clock->last_flip_time_us -
-        frame_info->cpu_time_before_buffer_swap_us;
-
       CLUTTER_NOTE (FRAME_TIMINGS,
-                    "update2dispatch %ld µs, dispatch2swap %ld µs, swap2render %ld µs, swap2flip %ld µs",
+                    "%s: update2dispatch %ld µs, dispatch2swap %ld µs, swap2render %ld µs, swap2flip %ld µs",
+                    debug_state,
                     frame_clock->last_dispatch_lateness_us,
                     dispatch_to_swap_us,
                     swap_to_rendering_done_us,
@@ -394,7 +443,7 @@ clutter_frame_clock_notify_presented (ClutterFrameClock *frame_clock,
                MAX (swap_to_rendering_done_us, swap_to_flip_us) +
                frame_clock->deadline_evasion_us,
                frame_clock->shortterm_max_update_duration_us,
-               frame_clock->refresh_interval_us);
+               2 * frame_clock->refresh_interval_us);
 
       maybe_update_longterm_max_duration_us (frame_clock, frame_info);
@@ -403,7 +452,8 @@ clutter_frame_clock_notify_presented (ClutterFrameClock *frame_clock,
     }
   else
     {
-      CLUTTER_NOTE (FRAME_TIMINGS, "update2dispatch %ld µs",
+      CLUTTER_NOTE (FRAME_TIMINGS, "%s: update2dispatch %ld µs",
+                    debug_state,
                     frame_clock->last_dispatch_lateness_us);
     }
@@ -421,11 +471,22 @@ clutter_frame_clock_notify_presented (ClutterFrameClock *frame_clock,
     case CLUTTER_FRAME_CLOCK_STATE_SCHEDULED_NOW:
       g_warn_if_reached ();
       break;
-    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHING:
-    case CLUTTER_FRAME_CLOCK_STATE_PENDING_PRESENTED:
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE:
       frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_IDLE;
       maybe_reschedule_update (frame_clock);
       break;
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED:
+      frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_SCHEDULED;
+      maybe_reschedule_update (frame_clock);
+      break;
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED_NOW:
+      frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_SCHEDULED_NOW;
+      maybe_reschedule_update (frame_clock);
+      break;
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_TWO:
+      frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE;
+      maybe_reschedule_update (frame_clock);
+      break;
     }
 }
@@ -443,26 +504,37 @@ clutter_frame_clock_notify_ready (ClutterFrameClock *frame_clock)
     case CLUTTER_FRAME_CLOCK_STATE_SCHEDULED_NOW:
       g_warn_if_reached ();
       break;
-    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHING:
-    case CLUTTER_FRAME_CLOCK_STATE_PENDING_PRESENTED:
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE:
       frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_IDLE;
       maybe_reschedule_update (frame_clock);
       break;
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED:
+      frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_SCHEDULED;
+      maybe_reschedule_update (frame_clock);
+      break;
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED_NOW:
+      frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_SCHEDULED_NOW;
+      maybe_reschedule_update (frame_clock);
+      break;
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_TWO:
+      frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE;
+      maybe_reschedule_update (frame_clock);
+      break;
     }
 }
 
-static int64_t
-clutter_frame_clock_compute_max_render_time_us (ClutterFrameClock *frame_clock)
+static gboolean
+clutter_frame_clock_compute_max_render_time_us (ClutterFrameClock *frame_clock,
+                                                int64_t           *max_render_time_us)
 {
   int64_t refresh_interval_us;
-  int64_t max_render_time_us;
 
   refresh_interval_us = frame_clock->refresh_interval_us;
 
   if (!frame_clock->ever_got_measurements ||
       G_UNLIKELY (clutter_paint_debug_flags &
                   CLUTTER_DEBUG_DISABLE_DYNAMIC_MAX_RENDER_TIME))
-    return (int64_t) (refresh_interval_us * SYNC_DELAY_FALLBACK_FRACTION);
+    return FALSE;
 
   /* Max render time shows how early the frame clock needs to be dispatched
    * to make it to the predicted next presentation time. It is an estimate of
@@ -476,15 +548,15 @@ clutter_frame_clock_compute_max_render_time_us (ClutterFrameClock *frame_clock)
    * - The duration of vertical blank.
    * - A constant to account for variations in the above estimates.
    */
-  max_render_time_us =
+  *max_render_time_us =
     MAX (frame_clock->longterm_max_update_duration_us,
          frame_clock->shortterm_max_update_duration_us) +
     frame_clock->vblank_duration_us +
     clutter_max_render_time_constant_us;
 
-  max_render_time_us = CLAMP (max_render_time_us, 0, refresh_interval_us);
+  *max_render_time_us = CLAMP (*max_render_time_us, 0, 2 * refresh_interval_us);
 
-  return max_render_time_us;
+  return TRUE;
 }
 
 static void
@@ -499,7 +571,9 @@ calculate_next_update_time_us (ClutterFrameClock *frame_clock,
   int64_t min_render_time_allowed_us;
   int64_t max_render_time_allowed_us;
   int64_t next_presentation_time_us;
+  int64_t next_smooth_presentation_time_us = 0;
   int64_t next_update_time_us;
+  gboolean max_render_time_is_known;
 
   now_us = g_get_monotonic_time ();
@@ -519,10 +593,13 @@ calculate_next_update_time_us (ClutterFrameClock *frame_clock,
     }
 
   min_render_time_allowed_us = refresh_interval_us / 2;
-  max_render_time_allowed_us =
-    clutter_frame_clock_compute_max_render_time_us (frame_clock);
 
-  if (min_render_time_allowed_us > max_render_time_allowed_us)
+  max_render_time_is_known =
+    clutter_frame_clock_compute_max_render_time_us (frame_clock,
+                                                    &max_render_time_allowed_us);
+
+  if (max_render_time_is_known &&
+      min_render_time_allowed_us > max_render_time_allowed_us)
     min_render_time_allowed_us = max_render_time_allowed_us;
 
   /*
@@ -543,7 +620,29 @@ calculate_next_update_time_us (ClutterFrameClock *frame_clock,
    *
    */
   last_presentation_time_us = frame_clock->last_presentation_time_us;
-  next_presentation_time_us = last_presentation_time_us + refresh_interval_us;
+
+  switch (frame_clock->state)
+    {
+    case CLUTTER_FRAME_CLOCK_STATE_INIT:
+    case CLUTTER_FRAME_CLOCK_STATE_IDLE:
+    case CLUTTER_FRAME_CLOCK_STATE_SCHEDULED:
+    case CLUTTER_FRAME_CLOCK_STATE_SCHEDULED_NOW:
+      next_smooth_presentation_time_us = last_presentation_time_us +
+                                         refresh_interval_us;
+      break;
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE:
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED:
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED_NOW:
+      next_smooth_presentation_time_us = last_presentation_time_us +
+                                         2 * refresh_interval_us;
+      break;
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_TWO:
+      g_warn_if_reached ();  /* quad buffering would be a bug */
+      next_smooth_presentation_time_us = last_presentation_time_us +
+                                         3 * refresh_interval_us;
+      break;
+    }
+
+  next_presentation_time_us = next_smooth_presentation_time_us;
 
   /*
    * However, the last presentation could have happened more than a frame ago.
} }
if (frame_clock->last_presentation_flags & CLUTTER_FRAME_INFO_FLAG_VSYNC && if (frame_clock->last_presentation_flags & CLUTTER_FRAME_INFO_FLAG_VSYNC &&
next_presentation_time_us != last_presentation_time_us + refresh_interval_us) next_presentation_time_us != next_smooth_presentation_time_us)
{ {
/* There was an idle period since the last presentation, so there seems /* There was an idle period since the last presentation, so there seems
* be no constantly updating actor. In this case it's best to start * be no constantly updating actor. In this case it's best to start
@@ -622,6 +721,24 @@ calculate_next_update_time_us (ClutterFrameClock *frame_clock,
     }
   else
     {
+      /* If the max render time isn't known then using the current value of
+       * next_presentation_time_us is suboptimal. Targeting always one frame
+       * prior to that we'd lose the ability to scale up to triple buffering
+       * on late presentation. But targeting two frames prior we would be
+       * always triple buffering even when not required.
+       * So the algorithm for deciding when to scale up to triple buffering
+       * in the absence of render time measurements is to simply target full
+       * frame rate. If we're keeping up then we'll stay double buffering. If
+       * we're not keeping up then this will switch us to triple buffering.
+       */
+      if (!max_render_time_is_known)
+        {
+          max_render_time_allowed_us =
+            (int64_t) (refresh_interval_us * SYNC_DELAY_FALLBACK_FRACTION);
+          next_presentation_time_us =
+            last_presentation_time_us + refresh_interval_us;
+        }
+
       while (next_presentation_time_us - min_render_time_allowed_us < now_us)
         next_presentation_time_us += refresh_interval_us;
@@ -653,7 +770,9 @@ calculate_next_variable_update_time_us (ClutterFrameClock *frame_clock,
 
   refresh_interval_us = frame_clock->refresh_interval_us;
 
-  if (frame_clock->last_presentation_time_us == 0)
+  if (frame_clock->last_presentation_time_us == 0 ||
+      !clutter_frame_clock_compute_max_render_time_us (frame_clock,
+                                                       &max_render_time_allowed_us))
     {
       *out_next_update_time_us =
         frame_clock->last_dispatch_time_us ?
@@ -666,9 +785,6 @@ calculate_next_variable_update_time_us (ClutterFrameClock *frame_clock,
       return;
     }
 
-  max_render_time_allowed_us =
-    clutter_frame_clock_compute_max_render_time_us (frame_clock);
-
   last_presentation_time_us = frame_clock->last_presentation_time_us;
   next_presentation_time_us = last_presentation_time_us + refresh_interval_us;
@@ -742,8 +858,17 @@ clutter_frame_clock_inhibit (ClutterFrameClock *frame_clock)
       frame_clock->pending_reschedule_now = TRUE;
       frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_IDLE;
       break;
-    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHING:
-    case CLUTTER_FRAME_CLOCK_STATE_PENDING_PRESENTED:
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED:
+      frame_clock->pending_reschedule = TRUE;
+      frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE;
+      break;
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED_NOW:
+      frame_clock->pending_reschedule = TRUE;
+      frame_clock->pending_reschedule_now = TRUE;
+      frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE;
+      break;
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE:
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_TWO:
       break;
     }
@@ -762,6 +887,25 @@ clutter_frame_clock_uninhibit (ClutterFrameClock *frame_clock)
   maybe_reschedule_update (frame_clock);
 }
 
+static gboolean
+want_triple_buffering (ClutterFrameClock *frame_clock)
+{
+  switch (triple_buffering_mode)
+    {
+    case TRIPLE_BUFFERING_MODE_NEVER:
+      return FALSE;
+    case TRIPLE_BUFFERING_MODE_AUTO:
+      return frame_clock->mode == CLUTTER_FRAME_CLOCK_MODE_FIXED &&
+             !(frame_clock->last_flip_hints &
+               CLUTTER_FRAME_HINT_DIRECT_SCANOUT_ATTEMPTED);
+    case TRIPLE_BUFFERING_MODE_ALWAYS:
+      return TRUE;
+    }
+
+  g_assert_not_reached ();
+  return FALSE;
+}
+
 void
 clutter_frame_clock_schedule_update_now (ClutterFrameClock *frame_clock)
 {
@@ -779,11 +923,24 @@ clutter_frame_clock_schedule_update_now (ClutterFrameClock *frame_clock)
     case CLUTTER_FRAME_CLOCK_STATE_INIT:
    case CLUTTER_FRAME_CLOCK_STATE_IDLE:
     case CLUTTER_FRAME_CLOCK_STATE_SCHEDULED:
+      frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_SCHEDULED_NOW;
       break;
     case CLUTTER_FRAME_CLOCK_STATE_SCHEDULED_NOW:
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED_NOW:
       return;
-    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHING:
-    case CLUTTER_FRAME_CLOCK_STATE_PENDING_PRESENTED:
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED:
+      frame_clock->state =
+        CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED_NOW;
+      break;
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE:
+      if (want_triple_buffering (frame_clock))
+        {
+          frame_clock->state =
+            CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED_NOW;
+          break;
+        }
+      G_GNUC_FALLTHROUGH;
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_TWO:
       frame_clock->pending_reschedule = TRUE;
       frame_clock->pending_reschedule_now = TRUE;
       return;
@@ -812,13 +969,17 @@ clutter_frame_clock_schedule_update_now (ClutterFrameClock *frame_clock)
 
   frame_clock->next_update_time_us = next_update_time_us;
   g_source_set_ready_time (frame_clock->source, next_update_time_us);
-  frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_SCHEDULED_NOW;
 }
 
 void
 clutter_frame_clock_schedule_update (ClutterFrameClock *frame_clock)
 {
   int64_t next_update_time_us = -1;
+  TripleBufferingMode current_mode = triple_buffering_mode;
+
+  if (current_mode == TRIPLE_BUFFERING_MODE_AUTO &&
+      !want_triple_buffering (frame_clock))
+    current_mode = TRIPLE_BUFFERING_MODE_NEVER;
 
   if (frame_clock->inhibit_count > 0)
     {
@@ -834,12 +995,33 @@ clutter_frame_clock_schedule_update (ClutterFrameClock *frame_clock)
       frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_SCHEDULED;
       return;
     case CLUTTER_FRAME_CLOCK_STATE_IDLE:
+      frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_SCHEDULED;
       break;
     case CLUTTER_FRAME_CLOCK_STATE_SCHEDULED:
     case CLUTTER_FRAME_CLOCK_STATE_SCHEDULED_NOW:
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED:
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED_NOW:
       return;
-    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHING:
-    case CLUTTER_FRAME_CLOCK_STATE_PENDING_PRESENTED:
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE:
+      switch (current_mode)
+        {
+        case TRIPLE_BUFFERING_MODE_NEVER:
+          frame_clock->pending_reschedule = TRUE;
+          return;
+        case TRIPLE_BUFFERING_MODE_AUTO:
+          frame_clock->state =
+            CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED;
+          break;
+        case TRIPLE_BUFFERING_MODE_ALWAYS:
+          next_update_time_us = g_get_monotonic_time ();
+          frame_clock->next_presentation_time_us = 0;
+          frame_clock->is_next_presentation_time_valid = FALSE;
+          frame_clock->state =
+            CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED;
+          goto got_update_time;
+        }
+      break;
+    case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_TWO:
       frame_clock->pending_reschedule = TRUE;
       return;
     }
@ -864,11 +1046,11 @@ clutter_frame_clock_schedule_update (ClutterFrameClock *frame_clock)
break; break;
} }
got_update_time:
g_warn_if_fail (next_update_time_us != -1); g_warn_if_fail (next_update_time_us != -1);
frame_clock->next_update_time_us = next_update_time_us; frame_clock->next_update_time_us = next_update_time_us;
g_source_set_ready_time (frame_clock->source, next_update_time_us); g_source_set_ready_time (frame_clock->source, next_update_time_us);
frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_SCHEDULED;
} }
void void
@ -884,6 +1066,8 @@ clutter_frame_clock_set_mode (ClutterFrameClock *frame_clock,
{ {
case CLUTTER_FRAME_CLOCK_STATE_INIT: case CLUTTER_FRAME_CLOCK_STATE_INIT:
case CLUTTER_FRAME_CLOCK_STATE_IDLE: case CLUTTER_FRAME_CLOCK_STATE_IDLE:
case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE:
case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_TWO:
break; break;
case CLUTTER_FRAME_CLOCK_STATE_SCHEDULED: case CLUTTER_FRAME_CLOCK_STATE_SCHEDULED:
frame_clock->pending_reschedule = TRUE; frame_clock->pending_reschedule = TRUE;
@ -894,8 +1078,14 @@ clutter_frame_clock_set_mode (ClutterFrameClock *frame_clock,
frame_clock->pending_reschedule_now = TRUE; frame_clock->pending_reschedule_now = TRUE;
frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_IDLE; frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_IDLE;
break; break;
case CLUTTER_FRAME_CLOCK_STATE_DISPATCHING: case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED:
case CLUTTER_FRAME_CLOCK_STATE_PENDING_PRESENTED: frame_clock->pending_reschedule = TRUE;
frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE;
break;
case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED_NOW:
frame_clock->pending_reschedule = TRUE;
frame_clock->pending_reschedule_now = TRUE;
frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE;
break; break;
} }
@ -931,7 +1121,7 @@ clutter_frame_clock_dispatch (ClutterFrameClock *frame_clock,
frame_clock->refresh_interval_us; frame_clock->refresh_interval_us;
lateness_us = time_us - ideal_dispatch_time_us; lateness_us = time_us - ideal_dispatch_time_us;
if (lateness_us < 0 || lateness_us >= frame_clock->refresh_interval_us) if (lateness_us < 0 || lateness_us >= frame_clock->refresh_interval_us / 4)
frame_clock->last_dispatch_lateness_us = 0; frame_clock->last_dispatch_lateness_us = 0;
else else
frame_clock->last_dispatch_lateness_us = lateness_us; frame_clock->last_dispatch_lateness_us = lateness_us;
@ -952,10 +1142,27 @@ clutter_frame_clock_dispatch (ClutterFrameClock *frame_clock,
} }
#endif #endif
frame_clock->prev_last_dispatch_time_us = frame_clock->last_dispatch_time_us;
frame_clock->last_dispatch_time_us = time_us; frame_clock->last_dispatch_time_us = time_us;
g_source_set_ready_time (frame_clock->source, -1); g_source_set_ready_time (frame_clock->source, -1);
frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_DISPATCHING; switch (frame_clock->state)
{
case CLUTTER_FRAME_CLOCK_STATE_INIT:
case CLUTTER_FRAME_CLOCK_STATE_IDLE:
case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE:
case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_TWO:
g_warn_if_reached ();
return;
case CLUTTER_FRAME_CLOCK_STATE_SCHEDULED:
case CLUTTER_FRAME_CLOCK_STATE_SCHEDULED_NOW:
frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE;
break;
case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED:
case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED_NOW:
frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_TWO;
break;
}
frame_count = frame_clock->frame_count++; frame_count = frame_clock->frame_count++;
@ -986,26 +1193,36 @@ clutter_frame_clock_dispatch (ClutterFrameClock *frame_clock,
result = iface->frame (frame_clock, frame, frame_clock->listener.user_data); result = iface->frame (frame_clock, frame, frame_clock->listener.user_data);
COGL_TRACE_END (ClutterFrameClockFrame); COGL_TRACE_END (ClutterFrameClockFrame);
switch (frame_clock->state) switch (result)
{ {
case CLUTTER_FRAME_CLOCK_STATE_INIT: case CLUTTER_FRAME_RESULT_PENDING_PRESENTED:
case CLUTTER_FRAME_CLOCK_STATE_PENDING_PRESENTED:
g_warn_if_reached ();
break; break;
case CLUTTER_FRAME_CLOCK_STATE_IDLE: case CLUTTER_FRAME_RESULT_IDLE:
case CLUTTER_FRAME_CLOCK_STATE_SCHEDULED: /* The frame was aborted; nothing to paint/present */
case CLUTTER_FRAME_CLOCK_STATE_SCHEDULED_NOW: switch (frame_clock->state)
break;
case CLUTTER_FRAME_CLOCK_STATE_DISPATCHING:
switch (result)
{ {
case CLUTTER_FRAME_RESULT_PENDING_PRESENTED: case CLUTTER_FRAME_CLOCK_STATE_INIT:
frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_PENDING_PRESENTED; case CLUTTER_FRAME_CLOCK_STATE_IDLE:
case CLUTTER_FRAME_CLOCK_STATE_SCHEDULED:
case CLUTTER_FRAME_CLOCK_STATE_SCHEDULED_NOW:
g_warn_if_reached ();
break; break;
case CLUTTER_FRAME_RESULT_IDLE: case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE:
frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_IDLE; frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_IDLE;
maybe_reschedule_update (frame_clock); maybe_reschedule_update (frame_clock);
break; break;
case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED:
frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_SCHEDULED;
maybe_reschedule_update (frame_clock);
break;
case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE_AND_SCHEDULED_NOW:
frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_SCHEDULED_NOW;
maybe_reschedule_update (frame_clock);
break;
case CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_TWO:
frame_clock->state = CLUTTER_FRAME_CLOCK_STATE_DISPATCHED_ONE;
maybe_reschedule_update (frame_clock);
break;
} }
break; break;
} }
@ -1038,21 +1255,31 @@ frame_clock_source_dispatch (GSource *source,
} }
void void
clutter_frame_clock_record_flip_time (ClutterFrameClock *frame_clock, clutter_frame_clock_record_flip (ClutterFrameClock *frame_clock,
int64_t flip_time_us) int64_t flip_time_us,
ClutterFrameHint hints)
{ {
frame_clock->prev_last_flip_time_us = frame_clock->last_flip_time_us;
frame_clock->last_flip_time_us = flip_time_us; frame_clock->last_flip_time_us = flip_time_us;
frame_clock->last_flip_hints = hints;
} }
GString * GString *
clutter_frame_clock_get_max_render_time_debug_info (ClutterFrameClock *frame_clock) clutter_frame_clock_get_max_render_time_debug_info (ClutterFrameClock *frame_clock)
{ {
int64_t max_render_time_us;
int64_t max_update_duration_us; int64_t max_update_duration_us;
GString *string; GString *string;
string = g_string_new (NULL); string = g_string_new ("Max render time: ");
g_string_append_printf (string, "Max render time: %ld µs", if (!clutter_frame_clock_compute_max_render_time_us (frame_clock,
clutter_frame_clock_compute_max_render_time_us (frame_clock)); &max_render_time_us))
{
g_string_append (string, "unknown");
return string;
}
g_string_append_printf (string, "%ld µs", max_render_time_us);
if (frame_clock->got_measurements_last_frame) if (frame_clock->got_measurements_last_frame)
g_string_append_printf (string, " ="); g_string_append_printf (string, " =");
@ -1219,8 +1446,6 @@ clutter_frame_clock_dispose (GObject *object)
{ {
ClutterFrameClock *frame_clock = CLUTTER_FRAME_CLOCK (object); ClutterFrameClock *frame_clock = CLUTTER_FRAME_CLOCK (object);
g_warn_if_fail (frame_clock->state != CLUTTER_FRAME_CLOCK_STATE_DISPATCHING);
if (frame_clock->source) if (frame_clock->source)
{ {
g_signal_emit (frame_clock, signals[DESTROY], 0); g_signal_emit (frame_clock, signals[DESTROY], 0);
@ -1244,6 +1469,15 @@ static void
clutter_frame_clock_class_init (ClutterFrameClockClass *klass) clutter_frame_clock_class_init (ClutterFrameClockClass *klass)
{ {
GObjectClass *object_class = G_OBJECT_CLASS (klass); GObjectClass *object_class = G_OBJECT_CLASS (klass);
const char *mode_str;
mode_str = g_getenv ("MUTTER_DEBUG_TRIPLE_BUFFERING");
if (!g_strcmp0 (mode_str, "never"))
triple_buffering_mode = TRIPLE_BUFFERING_MODE_NEVER;
else if (!g_strcmp0 (mode_str, "auto"))
triple_buffering_mode = TRIPLE_BUFFERING_MODE_AUTO;
else if (!g_strcmp0 (mode_str, "always"))
triple_buffering_mode = TRIPLE_BUFFERING_MODE_ALWAYS;
object_class->dispose = clutter_frame_clock_dispose; object_class->dispose = clutter_frame_clock_dispose;

View file

@@ -33,6 +33,12 @@ typedef enum _ClutterFrameResult
  CLUTTER_FRAME_RESULT_IDLE,
 } ClutterFrameResult;

+typedef enum _ClutterFrameHint
+{
+  CLUTTER_FRAME_HINT_NONE = 0,
+  CLUTTER_FRAME_HINT_DIRECT_SCANOUT_ATTEMPTED = 1 << 0,
+} ClutterFrameHint;
+
 #define CLUTTER_TYPE_FRAME_CLOCK (clutter_frame_clock_get_type ())

 CLUTTER_EXPORT
 G_DECLARE_FINAL_TYPE (ClutterFrameClock, clutter_frame_clock,
@@ -102,8 +108,9 @@ void clutter_frame_clock_remove_timeline (ClutterFrameClock *frame_clock,
 CLUTTER_EXPORT
 float clutter_frame_clock_get_refresh_rate (ClutterFrameClock *frame_clock);

-void clutter_frame_clock_record_flip_time (ClutterFrameClock *frame_clock,
-                                           int64_t            flip_time_us);
+void clutter_frame_clock_record_flip (ClutterFrameClock *frame_clock,
+                                      int64_t            flip_time_us,
+                                      ClutterFrameHint   hints);

 GString * clutter_frame_clock_get_max_render_time_debug_info (ClutterFrameClock *frame_clock);

View file

@@ -36,6 +36,7 @@ struct _ClutterFrame
  gboolean has_result;
  ClutterFrameResult result;
+  ClutterFrameHint hints;
 };

 CLUTTER_EXPORT

View file

@@ -115,3 +115,16 @@ clutter_frame_set_result (ClutterFrame *frame,
  frame->result = result;
  frame->has_result = TRUE;
 }
+
+void
+clutter_frame_set_hint (ClutterFrame     *frame,
+                        ClutterFrameHint  hint)
+{
+  frame->hints |= hint;
+}
+
+ClutterFrameHint
+clutter_frame_get_hints (ClutterFrame *frame)
+{
+  return frame->hints;
+}

View file

@@ -54,4 +54,11 @@ void clutter_frame_set_result (ClutterFrame *frame,
 CLUTTER_EXPORT
 gboolean clutter_frame_has_result (ClutterFrame *frame);

+CLUTTER_EXPORT
+void clutter_frame_set_hint (ClutterFrame     *frame,
+                             ClutterFrameHint  hint);
+
+CLUTTER_EXPORT
+ClutterFrameHint clutter_frame_get_hints (ClutterFrame *frame);
+
 G_DEFINE_AUTOPTR_CLEANUP_FUNC (ClutterFrame, clutter_frame_unref)

View file

@@ -1075,14 +1075,21 @@ handle_frame_clock_frame (ClutterFrameClock *frame_clock,
      _clutter_stage_window_redraw_view (stage_window, view, frame);

-      clutter_frame_clock_record_flip_time (frame_clock,
-                                            g_get_monotonic_time ());
+      clutter_frame_clock_record_flip (frame_clock,
+                                       g_get_monotonic_time (),
+                                       clutter_frame_get_hints (frame));

      clutter_stage_emit_after_paint (stage, view, frame);

      if (clutter_context_get_show_fps (context))
        end_frame_timing_measurement (view);
    }
+  else
+    {
+      clutter_frame_clock_record_flip (frame_clock,
+                                       g_get_monotonic_time (),
+                                       clutter_frame_get_hints (frame));
+    }

  _clutter_stage_window_finish_frame (stage_window, view, frame);

View file

@@ -79,4 +79,7 @@ COGL_EXPORT CoglFrameInfo *
 cogl_onscreen_peek_tail_frame_info (CoglOnscreen *onscreen);

 COGL_EXPORT CoglFrameInfo *
 cogl_onscreen_pop_head_frame_info (CoglOnscreen *onscreen);
+
+COGL_EXPORT unsigned int
+cogl_onscreen_get_pending_frame_count (CoglOnscreen *onscreen);

View file

@@ -468,6 +468,14 @@ cogl_onscreen_pop_head_frame_info (CoglOnscreen *onscreen)
  return g_queue_pop_head (&priv->pending_frame_infos);
 }

+unsigned int
+cogl_onscreen_get_pending_frame_count (CoglOnscreen *onscreen)
+{
+  CoglOnscreenPrivate *priv = cogl_onscreen_get_instance_private (onscreen);
+
+  return g_queue_get_length (&priv->pending_frame_infos);
+}
+
 CoglFrameClosure *
 cogl_onscreen_add_frame_callback (CoglOnscreen *onscreen,
                                   CoglFrameCallback callback,

View file

@@ -44,6 +44,11 @@ struct _MetaEgl
  PFNEGLCREATEIMAGEKHRPROC eglCreateImageKHR;
  PFNEGLDESTROYIMAGEKHRPROC eglDestroyImageKHR;

+  PFNEGLCREATESYNCPROC eglCreateSync;
+  PFNEGLDESTROYSYNCPROC eglDestroySync;
+  PFNEGLWAITSYNCPROC eglWaitSync;
+  PFNEGLDUPNATIVEFENCEFDANDROIDPROC eglDupNativeFenceFDANDROID;
+
  PFNEGLBINDWAYLANDDISPLAYWL eglBindWaylandDisplayWL;
  PFNEGLQUERYWAYLANDBUFFERWL eglQueryWaylandBufferWL;
@@ -1162,6 +1167,90 @@ meta_egl_query_display_attrib (MetaEgl *egl,
  return TRUE;
 }

+gboolean
+meta_egl_create_sync (MetaEgl         *egl,
+                      EGLDisplay       display,
+                      EGLenum          type,
+                      const EGLAttrib *attrib_list,
+                      EGLSync         *egl_sync,
+                      GError         **error)
+{
+  EGLSync sync;
+
+  if (!is_egl_proc_valid (egl->eglCreateSync, error))
+    return FALSE;
+
+  sync = egl->eglCreateSync (display, type, attrib_list);
+  if (sync == EGL_NO_SYNC)
+    {
+      set_egl_error (error);
+      return FALSE;
+    }
+
+  *egl_sync = sync;
+
+  return TRUE;
+}
+
+gboolean
+meta_egl_destroy_sync (MetaEgl    *egl,
+                       EGLDisplay  display,
+                       EGLSync     sync,
+                       GError    **error)
+{
+  if (!is_egl_proc_valid (egl->eglDestroySync, error))
+    return FALSE;
+
+  if (!egl->eglDestroySync (display, sync))
+    {
+      set_egl_error (error);
+      return FALSE;
+    }
+
+  return TRUE;
+}
+
+gboolean
+meta_egl_wait_sync (MetaEgl    *egl,
+                    EGLDisplay  display,
+                    EGLSync     sync,
+                    EGLint      flags,
+                    GError    **error)
+{
+  if (!is_egl_proc_valid (egl->eglWaitSync, error))
+    return FALSE;
+
+  if (!egl->eglWaitSync (display, sync, flags))
+    {
+      set_egl_error (error);
+      return FALSE;
+    }
+
+  return TRUE;
+}
+
+EGLint
+meta_egl_duplicate_native_fence_fd (MetaEgl    *egl,
+                                    EGLDisplay  display,
+                                    EGLSync     sync,
+                                    GError    **error)
+{
+  EGLint fd = EGL_NO_NATIVE_FENCE_FD_ANDROID;
+
+  if (!is_egl_proc_valid (egl->eglDupNativeFenceFDANDROID, error))
+    return EGL_NO_NATIVE_FENCE_FD_ANDROID;
+
+  fd = egl->eglDupNativeFenceFDANDROID (display, sync);
+  if (fd == EGL_NO_NATIVE_FENCE_FD_ANDROID)
+    {
+      set_egl_error (error);
+    }
+
+  return fd;
+}
+
 #define GET_EGL_PROC_ADDR(proc) \
   egl->proc = (void *) eglGetProcAddress (#proc);
@@ -1175,6 +1264,11 @@ meta_egl_constructed (GObject *object)
  GET_EGL_PROC_ADDR (eglCreateImageKHR);
  GET_EGL_PROC_ADDR (eglDestroyImageKHR);

+  GET_EGL_PROC_ADDR (eglCreateSync);
+  GET_EGL_PROC_ADDR (eglDestroySync);
+  GET_EGL_PROC_ADDR (eglWaitSync);
+  GET_EGL_PROC_ADDR (eglDupNativeFenceFDANDROID);
+
  GET_EGL_PROC_ADDR (eglBindWaylandDisplayWL);
  GET_EGL_PROC_ADDR (eglQueryWaylandBufferWL);

View file

@@ -276,3 +276,26 @@ gboolean meta_egl_query_display_attrib (MetaEgl *egl,
                                         EGLint attribute,
                                         EGLAttrib *value,
                                         GError **error);
+
+gboolean meta_egl_create_sync (MetaEgl         *egl,
+                               EGLDisplay       display,
+                               EGLenum          type,
+                               const EGLAttrib *attrib_list,
+                               EGLSync         *egl_sync,
+                               GError         **error);
+
+gboolean meta_egl_destroy_sync (MetaEgl    *egl,
+                                EGLDisplay  display,
+                                EGLSync     sync,
+                                GError    **error);
+
+gboolean meta_egl_wait_sync (MetaEgl    *egl,
+                             EGLDisplay  display,
+                             EGLSync     sync,
+                             EGLint      flags,
+                             GError    **error);
+
+EGLint meta_egl_duplicate_native_fence_fd (MetaEgl    *egl,
+                                           EGLDisplay  display,
+                                           EGLSync     sync,
+                                           GError    **error);

View file

@@ -798,6 +798,8 @@ meta_stage_impl_redraw_view (ClutterStageWindow *stage_window,
    {
      g_autoptr (GError) error = NULL;

+      clutter_frame_set_hint (frame, CLUTTER_FRAME_HINT_DIRECT_SCANOUT_ATTEMPTED);
+
      if (meta_stage_impl_scanout_view (stage_impl,
                                        stage_view,
                                        scanout,

View file

@@ -31,6 +31,11 @@ struct _MetaFrameNative
  CoglScanout *scanout;
  MetaKmsUpdate *kms_update;
+
+  struct {
+    int n_rectangles;
+    int *rectangles;  /* 4 x n_rectangles */
+  } damage;
 };

 static void
@@ -38,6 +43,7 @@ meta_frame_native_release (ClutterFrame *frame)
 {
  MetaFrameNative *frame_native = meta_frame_native_from_frame (frame);

+  g_clear_pointer (&frame_native->damage.rectangles, g_free);
  g_clear_object (&frame_native->buffer);
  g_clear_object (&frame_native->scanout);
@@ -108,3 +114,28 @@ meta_frame_native_get_scanout (MetaFrameNative *frame_native)
 {
  return frame_native->scanout;
 }
+
+void
+meta_frame_native_set_damage (MetaFrameNative *frame_native,
+                              const int       *rectangles,
+                              int              n_rectangles)
+{
+  size_t rectangles_size;
+
+  rectangles_size = n_rectangles * 4 * sizeof (int);
+  frame_native->damage.rectangles =
+    g_realloc (frame_native->damage.rectangles, rectangles_size);
+  memcpy (frame_native->damage.rectangles, rectangles, rectangles_size);
+  frame_native->damage.n_rectangles = n_rectangles;
+}
+
+int
+meta_frame_native_get_damage (MetaFrameNative *frame_native,
+                              int            **rectangles)
+{
+  if (rectangles)
+    *rectangles = frame_native->damage.rectangles;
+
+  return frame_native->damage.n_rectangles;
+}

View file

@@ -47,3 +47,12 @@ void meta_frame_native_set_scanout (MetaFrameNative *frame_native,
                                     CoglScanout     *scanout);

 CoglScanout * meta_frame_native_get_scanout (MetaFrameNative *frame_native);
+
+void
+meta_frame_native_set_damage (MetaFrameNative *frame_native,
+                              const int       *rectangles,
+                              int              n_rectangles);
+
+int
+meta_frame_native_get_damage (MetaFrameNative *frame_native,
+                              int            **rectangles);

View file

@@ -66,6 +66,8 @@ struct _MetaKms
  int kernel_thread_inhibit_count;

  MetaKmsCursorManager *cursor_manager;
+
+  gboolean shutting_down;
 };

 G_DEFINE_TYPE (MetaKms, meta_kms, META_TYPE_THREAD)
@@ -352,6 +354,12 @@ meta_kms_create_device (MetaKms *kms,
  return device;
 }

+gboolean
+meta_kms_is_shutting_down (MetaKms *kms)
+{
+  return kms->shutting_down;
+}
+
 static gpointer
 prepare_shutdown_in_impl (MetaThreadImpl *thread_impl,
                           gpointer        user_data,
@@ -367,6 +375,7 @@ static void
 on_prepare_shutdown (MetaBackend *backend,
                      MetaKms     *kms)
 {
+  kms->shutting_down = TRUE;
  meta_kms_run_impl_task_sync (kms, prepare_shutdown_in_impl, NULL, NULL);
  meta_thread_flush_callbacks (META_THREAD (kms));

View file

@@ -61,6 +61,8 @@ MetaKmsDevice * meta_kms_create_device (MetaKms *kms,
                                         MetaKmsDeviceFlag flags,
                                         GError **error);

+gboolean meta_kms_is_shutting_down (MetaKms *kms);
+
 MetaKms * meta_kms_new (MetaBackend *backend,
                         MetaKmsFlags flags,
                         GError **error);

View file

@ -29,6 +29,7 @@
#include "backends/native/meta-onscreen-native.h" #include "backends/native/meta-onscreen-native.h"
#include <glib/gstdio.h>
#include <drm_fourcc.h> #include <drm_fourcc.h>
#include "backends/meta-egl-ext.h" #include "backends/meta-egl-ext.h"
@ -76,7 +77,7 @@ typedef struct _MetaOnscreenNativeSecondaryGpuState
struct { struct {
MetaDrmBufferDumb *current_dumb_fb; MetaDrmBufferDumb *current_dumb_fb;
MetaDrmBufferDumb *dumb_fbs[2]; MetaDrmBufferDumb *dumb_fbs[3];
} cpu; } cpu;
gboolean noted_primary_gpu_copy_ok; gboolean noted_primary_gpu_copy_ok;
@ -103,6 +104,8 @@ struct _MetaOnscreenNative
MetaOnscreenNativeSecondaryGpuState *secondary_gpu_state; MetaOnscreenNativeSecondaryGpuState *secondary_gpu_state;
ClutterFrame *presented_frame; ClutterFrame *presented_frame;
ClutterFrame *posted_frame;
ClutterFrame *stalled_frame;
ClutterFrame *next_frame; ClutterFrame *next_frame;
struct { struct {
@ -117,6 +120,9 @@ struct _MetaOnscreenNative
} egl; } egl;
#endif #endif
gboolean needs_flush;
unsigned int swaps_pending;
gboolean frame_sync_requested; gboolean frame_sync_requested;
gboolean frame_sync_enabled; gboolean frame_sync_enabled;
@ -138,6 +144,13 @@ G_DEFINE_TYPE (MetaOnscreenNative, meta_onscreen_native,
static GQuark blit_source_quark = 0; static GQuark blit_source_quark = 0;
static void
try_post_latest_swap (CoglOnscreen *onscreen);
static void
post_finish_frame (MetaOnscreenNative *onscreen_native,
MetaKmsUpdate *kms_update);
static gboolean static gboolean
init_secondary_gpu_state (MetaRendererNative *renderer_native, init_secondary_gpu_state (MetaRendererNative *renderer_native,
CoglOnscreen *onscreen, CoglOnscreen *onscreen,
@ -148,20 +161,20 @@ meta_onscreen_native_swap_drm_fb (CoglOnscreen *onscreen)
{ {
MetaOnscreenNative *onscreen_native = META_ONSCREEN_NATIVE (onscreen); MetaOnscreenNative *onscreen_native = META_ONSCREEN_NATIVE (onscreen);
if (!onscreen_native->next_frame) if (!onscreen_native->posted_frame)
return; return;
g_clear_pointer (&onscreen_native->presented_frame, clutter_frame_unref); g_clear_pointer (&onscreen_native->presented_frame, clutter_frame_unref);
onscreen_native->presented_frame = onscreen_native->presented_frame =
g_steal_pointer (&onscreen_native->next_frame); g_steal_pointer (&onscreen_native->posted_frame);
} }
static void static void
meta_onscreen_native_clear_next_fb (CoglOnscreen *onscreen) meta_onscreen_native_clear_posted_fb (CoglOnscreen *onscreen)
{ {
MetaOnscreenNative *onscreen_native = META_ONSCREEN_NATIVE (onscreen); MetaOnscreenNative *onscreen_native = META_ONSCREEN_NATIVE (onscreen);
g_clear_pointer (&onscreen_native->next_frame, clutter_frame_unref); g_clear_pointer (&onscreen_native->posted_frame, clutter_frame_unref);
} }
static void static void
@ -199,7 +212,7 @@ meta_onscreen_native_notify_frame_complete (CoglOnscreen *onscreen)
info = cogl_onscreen_pop_head_frame_info (onscreen); info = cogl_onscreen_pop_head_frame_info (onscreen);
g_assert (!cogl_onscreen_peek_head_frame_info (onscreen)); g_return_if_fail (info);
_cogl_onscreen_notify_frame_sync (onscreen, info); _cogl_onscreen_notify_frame_sync (onscreen, info);
_cogl_onscreen_notify_complete (onscreen, info); _cogl_onscreen_notify_complete (onscreen, info);
@ -241,6 +254,7 @@ notify_view_crtc_presented (MetaRendererView *view,
meta_onscreen_native_notify_frame_complete (onscreen); meta_onscreen_native_notify_frame_complete (onscreen);
meta_onscreen_native_swap_drm_fb (onscreen); meta_onscreen_native_swap_drm_fb (onscreen);
try_post_latest_swap (onscreen);
} }
static void static void
@ -290,15 +304,13 @@ page_flip_feedback_ready (MetaKmsCrtc *kms_crtc,
CoglFramebuffer *framebuffer = CoglFramebuffer *framebuffer =
clutter_stage_view_get_onscreen (CLUTTER_STAGE_VIEW (view)); clutter_stage_view_get_onscreen (CLUTTER_STAGE_VIEW (view));
CoglOnscreen *onscreen = COGL_ONSCREEN (framebuffer); CoglOnscreen *onscreen = COGL_ONSCREEN (framebuffer);
MetaOnscreenNative *onscreen_native = META_ONSCREEN_NATIVE (onscreen);
CoglFrameInfo *frame_info; CoglFrameInfo *frame_info;
frame_info = cogl_onscreen_peek_head_frame_info (onscreen); frame_info = cogl_onscreen_peek_head_frame_info (onscreen);
frame_info->flags |= COGL_FRAME_INFO_FLAG_SYMBOLIC; frame_info->flags |= COGL_FRAME_INFO_FLAG_SYMBOLIC;
g_warn_if_fail (!onscreen_native->next_frame);
meta_onscreen_native_notify_frame_complete (onscreen); meta_onscreen_native_notify_frame_complete (onscreen);
try_post_latest_swap (onscreen);
} }
static void static void
@ -368,7 +380,8 @@ page_flip_feedback_discarded (MetaKmsCrtc *kms_crtc,
} }
meta_onscreen_native_notify_frame_complete (onscreen); meta_onscreen_native_notify_frame_complete (onscreen);
meta_onscreen_native_clear_next_fb (onscreen); meta_onscreen_native_clear_posted_fb (onscreen);
try_post_latest_swap (onscreen);
} }
static const MetaKmsPageFlipListenerVtable page_flip_listener_vtable = { static const MetaKmsPageFlipListenerVtable page_flip_listener_vtable = {
@ -429,18 +442,36 @@ custom_egl_stream_page_flip (gpointer custom_page_flip_data,
} }
#endif /* HAVE_EGL_DEVICE */ #endif /* HAVE_EGL_DEVICE */
void static void
meta_onscreen_native_dummy_power_save_page_flip (CoglOnscreen *onscreen) drop_stalled_swap (CoglOnscreen *onscreen)
{ {
CoglFrameInfo *frame_info; CoglFrameInfo *frame_info;
MetaOnscreenNative *onscreen_native = META_ONSCREEN_NATIVE (onscreen);
meta_onscreen_native_swap_drm_fb (onscreen); if (onscreen_native->swaps_pending <= 1)
return;
onscreen_native->swaps_pending--;
g_clear_pointer (&onscreen_native->stalled_frame, clutter_frame_unref);
frame_info = cogl_onscreen_peek_tail_frame_info (onscreen); frame_info = cogl_onscreen_peek_tail_frame_info (onscreen);
frame_info->flags |= COGL_FRAME_INFO_FLAG_SYMBOLIC; frame_info->flags |= COGL_FRAME_INFO_FLAG_SYMBOLIC;
meta_onscreen_native_notify_frame_complete (onscreen); meta_onscreen_native_notify_frame_complete (onscreen);
} }
void
meta_onscreen_native_dummy_power_save_page_flip (CoglOnscreen *onscreen)
{
drop_stalled_swap (onscreen);
/* If the monitor just woke up and the shell is fully idle (has nothing
* more to swap) then we just woke to an indefinitely black screen. Let's
* fix that using the last swap (which is never classified as "stalled").
*/
try_post_latest_swap (onscreen);
}
static void static void
apply_transform (MetaCrtcKms *crtc_kms, apply_transform (MetaCrtcKms *crtc_kms,
MetaKmsPlaneAssignment *kms_plane_assignment, MetaKmsPlaneAssignment *kms_plane_assignment,
@ -517,7 +548,7 @@ meta_onscreen_native_flip_crtc (CoglOnscreen *onscreen,
{ {
MetaOnscreenNative *onscreen_native = META_ONSCREEN_NATIVE (onscreen); MetaOnscreenNative *onscreen_native = META_ONSCREEN_NATIVE (onscreen);
MetaRendererNative *renderer_native = onscreen_native->renderer_native; MetaRendererNative *renderer_native = onscreen_native->renderer_native;
ClutterFrame *frame = onscreen_native->next_frame; g_autoptr (ClutterFrame) frame = NULL;
MetaFrameNative *frame_native; MetaFrameNative *frame_native;
MetaGpuKms *render_gpu = onscreen_native->render_gpu; MetaGpuKms *render_gpu = onscreen_native->render_gpu;
MetaCrtcKms *crtc_kms = META_CRTC_KMS (crtc); MetaCrtcKms *crtc_kms = META_CRTC_KMS (crtc);
@ -533,6 +564,7 @@ meta_onscreen_native_flip_crtc (CoglOnscreen *onscreen,
COGL_TRACE_BEGIN_SCOPED (MetaOnscreenNativeFlipCrtcs, COGL_TRACE_BEGIN_SCOPED (MetaOnscreenNativeFlipCrtcs,
"Meta::OnscreenNative::flip_crtc()"); "Meta::OnscreenNative::flip_crtc()");
frame = g_steal_pointer (&onscreen_native->next_frame);
g_return_if_fail (frame); g_return_if_fail (frame);
gpu_kms = META_GPU_KMS (meta_crtc_get_gpu (crtc)); gpu_kms = META_GPU_KMS (meta_crtc_get_gpu (crtc));
@ -595,6 +627,10 @@ meta_onscreen_native_flip_crtc (CoglOnscreen *onscreen,
#endif #endif
} }
g_warn_if_fail (!onscreen_native->posted_frame);
g_clear_pointer (&onscreen_native->posted_frame, clutter_frame_unref);
onscreen_native->posted_frame = g_steal_pointer (&frame);
meta_kms_update_add_page_flip_listener (kms_update, meta_kms_update_add_page_flip_listener (kms_update,
kms_crtc, kms_crtc,
&page_flip_listener_vtable, &page_flip_listener_vtable,
@ -844,19 +880,51 @@ copy_shared_framebuffer_gpu (CoglOnscreen *onscreen,
CoglFramebuffer *framebuffer = COGL_FRAMEBUFFER (onscreen); CoglFramebuffer *framebuffer = COGL_FRAMEBUFFER (onscreen);
CoglContext *cogl_context = cogl_framebuffer_get_context (framebuffer); CoglContext *cogl_context = cogl_framebuffer_get_context (framebuffer);
CoglDisplay *cogl_display = cogl_context_get_display (cogl_context); CoglDisplay *cogl_display = cogl_context_get_display (cogl_context);
CoglRendererEGL *cogl_renderer_egl = cogl_context->display->renderer->winsys;
MetaRenderDevice *render_device; MetaRenderDevice *render_device;
EGLDisplay egl_display; EGLDisplay egl_display = NULL;
gboolean use_modifiers; gboolean use_modifiers;
MetaDeviceFile *device_file; MetaDeviceFile *device_file;
MetaDrmBufferFlags flags; MetaDrmBufferFlags flags;
MetaDrmBufferGbm *buffer_gbm = NULL; MetaDrmBufferGbm *buffer_gbm = NULL;
struct gbm_bo *bo; struct gbm_bo *bo;
EGLSync primary_gpu_egl_sync = EGL_NO_SYNC;
EGLSync secondary_gpu_egl_sync = EGL_NO_SYNC;
g_autofd int primary_gpu_sync_fence = EGL_NO_NATIVE_FENCE_FD_ANDROID;
COGL_TRACE_BEGIN_SCOPED (CopySharedFramebufferSecondaryGpu, COGL_TRACE_BEGIN_SCOPED (CopySharedFramebufferSecondaryGpu,
"copy_shared_framebuffer_gpu()"); "copy_shared_framebuffer_gpu()");
if (renderer_gpu_data->secondary.needs_explicit_sync) if (renderer_gpu_data->secondary.needs_explicit_sync)
cogl_framebuffer_finish (COGL_FRAMEBUFFER (onscreen)); {
if (!meta_egl_create_sync (egl,
cogl_renderer_egl->edpy,
EGL_SYNC_NATIVE_FENCE_ANDROID,
NULL,
&primary_gpu_egl_sync,
error))
{
g_prefix_error (error, "Failed to create EGLSync on primary GPU: ");
return NULL;
}
// According to the EGL_KHR_fence_sync specification we must ensure
// the fence command is flushed in this context to be able to await it
// in another (secondary GPU context) or we risk waiting indefinitely.
cogl_framebuffer_flush (COGL_FRAMEBUFFER (onscreen));
primary_gpu_sync_fence =
meta_egl_duplicate_native_fence_fd (egl,
cogl_renderer_egl->edpy,
primary_gpu_egl_sync,
error);
if (primary_gpu_sync_fence == EGL_NO_NATIVE_FENCE_FD_ANDROID)
{
g_prefix_error (error, "Failed to duplicate EGLSync FD on primary GPU: ");
goto done;
}
}
render_device = renderer_gpu_data->render_device; render_device = renderer_gpu_data->render_device;
egl_display = meta_render_device_get_egl_display (render_device); egl_display = meta_render_device_get_egl_display (render_device);
@ -872,6 +940,40 @@ copy_shared_framebuffer_gpu (CoglOnscreen *onscreen,
goto done; goto done;
} }
+  if (primary_gpu_sync_fence != EGL_NO_NATIVE_FENCE_FD_ANDROID)
+    {
+      EGLAttrib attribs[3];
+
+      attribs[0] = EGL_SYNC_NATIVE_FENCE_FD_ANDROID;
+      attribs[1] = primary_gpu_sync_fence;
+      attribs[2] = EGL_NONE;
+      if (!meta_egl_create_sync (egl,
+                                 egl_display,
+                                 EGL_SYNC_NATIVE_FENCE_ANDROID,
+                                 attribs,
+                                 &secondary_gpu_egl_sync,
+                                 error))
+        {
+          g_prefix_error (error, "Failed to create EGLSync on secondary GPU: ");
+          goto done;
+        }
+
+      /* eglCreateSync takes ownership of an existing fd that is passed, so
+       * don't try to clean it up twice.
+       */
+      primary_gpu_sync_fence = EGL_NO_NATIVE_FENCE_FD_ANDROID;
+
+      if (!meta_egl_wait_sync (egl,
+                               egl_display,
+                               secondary_gpu_egl_sync,
+                               0,
+                               error))
+        {
+          g_prefix_error (error, "Failed to wait for EGLSync on secondary GPU: ");
+          goto done;
+        }
+    }
   buffer_gbm = META_DRM_BUFFER_GBM (primary_gpu_fb);
   bo = meta_drm_buffer_gbm_get_bo (buffer_gbm);
   if (!meta_renderer_native_gles3_blit_shared_bo (egl,

@@ -921,6 +1023,20 @@ copy_shared_framebuffer_gpu (CoglOnscreen *onscreen,
 done:
   _cogl_winsys_egl_ensure_current (cogl_display);

+  if (primary_gpu_egl_sync != EGL_NO_SYNC &&
+      !meta_egl_destroy_sync (egl,
+                              cogl_renderer_egl->edpy,
+                              primary_gpu_egl_sync,
+                              error))
+    g_prefix_error (error, "Failed to destroy primary GPU EGLSync: ");
+
+  if (secondary_gpu_egl_sync != EGL_NO_SYNC &&
+      !meta_egl_destroy_sync (egl,
+                              egl_display,
+                              secondary_gpu_egl_sync,
+                              error))
+    g_prefix_error (error, "Failed to destroy secondary GPU EGLSync: ");
+
   return buffer_gbm ? META_DRM_BUFFER (buffer_gbm) : NULL;
 }
@@ -928,12 +1044,17 @@ static MetaDrmBufferDumb *
 secondary_gpu_get_next_dumb_buffer (MetaOnscreenNativeSecondaryGpuState *secondary_gpu_state)
 {
   MetaDrmBufferDumb *current_dumb_fb;
+  const int n_dumb_fbs = G_N_ELEMENTS (secondary_gpu_state->cpu.dumb_fbs);
+  int i;

   current_dumb_fb = secondary_gpu_state->cpu.current_dumb_fb;
-  if (current_dumb_fb == secondary_gpu_state->cpu.dumb_fbs[0])
-    return secondary_gpu_state->cpu.dumb_fbs[1];
-  else
-    return secondary_gpu_state->cpu.dumb_fbs[0];
+  for (i = 0; i < n_dumb_fbs; i++)
+    {
+      if (current_dumb_fb == secondary_gpu_state->cpu.dumb_fbs[i])
+        return secondary_gpu_state->cpu.dumb_fbs[(i + 1) % n_dumb_fbs];
+    }
+
+  return secondary_gpu_state->cpu.dumb_fbs[0];
 }
 static MetaDrmBuffer *

@@ -1269,10 +1390,36 @@ swap_buffer_result_feedback (const MetaKmsFeedback *kms_feedback,
     g_warning ("Page flip failed: %s", error->message);

   frame_info = cogl_onscreen_peek_head_frame_info (onscreen);
-  frame_info->flags |= COGL_FRAME_INFO_FLAG_SYMBOLIC;
-  meta_onscreen_native_notify_frame_complete (onscreen);
-  meta_onscreen_native_clear_next_fb (onscreen);
+
+  /* After resuming from suspend, drop_stalled_swap might have done this
+   * already and emptied the frame_info queue.
+   */
+  if (frame_info)
+    {
+      frame_info->flags |= COGL_FRAME_INFO_FLAG_SYMBOLIC;
+      meta_onscreen_native_notify_frame_complete (onscreen);
+    }
+
+  meta_onscreen_native_clear_posted_fb (onscreen);
+}
+
+static void
+assign_next_frame (MetaOnscreenNative *onscreen_native,
+                   ClutterFrame       *frame)
+{
+  CoglOnscreen *onscreen = COGL_ONSCREEN (onscreen_native);
+
+  if (onscreen_native->next_frame != NULL)
+    {
+      g_warn_if_fail (onscreen_native->stalled_frame == NULL);
+      drop_stalled_swap (onscreen);
+      g_warn_if_fail (onscreen_native->stalled_frame == NULL);
+      g_clear_pointer (&onscreen_native->stalled_frame, clutter_frame_unref);
+      onscreen_native->stalled_frame =
+        g_steal_pointer (&onscreen_native->next_frame);
+    }
+
+  onscreen_native->next_frame = clutter_frame_ref (frame);
 }
 static const MetaKmsResultListenerVtable swap_buffer_result_listener_vtable = {

@@ -1292,35 +1439,37 @@ meta_onscreen_native_swap_buffers_with_damage (CoglOnscreen *onscreen,
   CoglRendererEGL *cogl_renderer_egl = cogl_renderer->winsys;
   MetaRendererNativeGpuData *renderer_gpu_data = cogl_renderer_egl->platform;
   MetaRendererNative *renderer_native = renderer_gpu_data->renderer_native;
-  MetaRenderer *renderer = META_RENDERER (renderer_native);
-  MetaBackend *backend = meta_renderer_get_backend (renderer);
-  MetaMonitorManager *monitor_manager =
-    meta_backend_get_monitor_manager (backend);
   MetaOnscreenNative *onscreen_native = META_ONSCREEN_NATIVE (onscreen);
   MetaOnscreenNativeSecondaryGpuState *secondary_gpu_state;
   MetaGpuKms *render_gpu = onscreen_native->render_gpu;
   MetaDeviceFile *render_device_file;
   ClutterFrame *frame = user_data;
   MetaFrameNative *frame_native = meta_frame_native_from_frame (frame);
-  MetaKmsUpdate *kms_update;
   CoglOnscreenClass *parent_class;
   gboolean create_timestamp_query = TRUE;
-  MetaPowerSave power_save_mode;
   g_autoptr (GError) error = NULL;
   MetaDrmBufferFlags buffer_flags;
   MetaDrmBufferGbm *buffer_gbm;
   g_autoptr (MetaDrmBuffer) primary_gpu_fb = NULL;
   g_autoptr (MetaDrmBuffer) secondary_gpu_fb = NULL;
   g_autoptr (MetaDrmBuffer) buffer = NULL;
-  MetaKmsCrtc *kms_crtc;
-  MetaKmsDevice *kms_device;
-  int sync_fd;
-  COGL_TRACE_SCOPED_ANCHOR (MetaRendererNativePostKmsUpdate);

   COGL_TRACE_BEGIN_SCOPED (MetaRendererNativeSwapBuffers,
                            "Meta::OnscreenNative::swap_buffers_with_damage()");

+  if (meta_is_topic_enabled (META_DEBUG_KMS))
+    {
+      unsigned int frames_pending =
+        cogl_onscreen_get_pending_frame_count (onscreen);
+
+      meta_topic (META_DEBUG_KMS,
+                  "Swap buffers: %u frames pending (%s-buffering)",
+                  frames_pending,
+                  frames_pending == 1 ? "double" :
+                  frames_pending == 2 ? "triple" :
+                  "?");
+    }
   secondary_gpu_fb =
     update_secondary_gpu_state_pre_swap_buffers (onscreen,
                                                  rectangles,

@@ -1402,15 +1551,86 @@ meta_onscreen_native_swap_buffers_with_damage (CoglOnscreen *onscreen,
 #endif
     }

-  g_warn_if_fail (!onscreen_native->next_frame);
-  onscreen_native->next_frame = clutter_frame_ref (frame);
-
-  kms_crtc = meta_crtc_kms_get_kms_crtc (META_CRTC_KMS (onscreen_native->crtc));
-  kms_device = meta_kms_crtc_get_device (kms_crtc);
+  assign_next_frame (onscreen_native, frame);
+
+  clutter_frame_set_result (frame,
+                            CLUTTER_FRAME_RESULT_PENDING_PRESENTED);
+
+  meta_frame_native_set_damage (frame_native, rectangles, n_rectangles);
+  onscreen_native->swaps_pending++;
+  try_post_latest_swap (onscreen);
+  return;
+
+swap_failed:
+  frame_info->flags |= COGL_FRAME_INFO_FLAG_SYMBOLIC;
+  meta_onscreen_native_notify_frame_complete (onscreen);
+  clutter_frame_set_result (frame, CLUTTER_FRAME_RESULT_IDLE);
+}
+static void
+try_post_latest_swap (CoglOnscreen *onscreen)
+{
+  CoglFramebuffer *framebuffer = COGL_FRAMEBUFFER (onscreen);
+  CoglContext *cogl_context = cogl_framebuffer_get_context (framebuffer);
+  CoglRenderer *cogl_renderer = cogl_context->display->renderer;
+  CoglRendererEGL *cogl_renderer_egl = cogl_renderer->winsys;
+  MetaRendererNativeGpuData *renderer_gpu_data = cogl_renderer_egl->platform;
+  MetaRendererNative *renderer_native = renderer_gpu_data->renderer_native;
+  MetaRenderer *renderer = META_RENDERER (renderer_native);
+  MetaBackend *backend = meta_renderer_get_backend (renderer);
+  MetaBackendNative *backend_native = META_BACKEND_NATIVE (backend);
+  MetaKms *kms = meta_backend_native_get_kms (backend_native);
+  MetaMonitorManager *monitor_manager =
+    meta_backend_get_monitor_manager (backend);
+  MetaOnscreenNative *onscreen_native = META_ONSCREEN_NATIVE (onscreen);
+  MetaPowerSave power_save_mode;
+  MetaCrtcKms *crtc_kms = META_CRTC_KMS (onscreen_native->crtc);
+  MetaKmsCrtc *kms_crtc = meta_crtc_kms_get_kms_crtc (crtc_kms);
+  MetaKmsDevice *kms_device = meta_kms_crtc_get_device (kms_crtc);
+  MetaKmsUpdate *kms_update;
+  g_autoptr (MetaKmsFeedback) kms_feedback = NULL;
+  g_autoptr (ClutterFrame) frame = NULL;
+  MetaFrameNative *frame_native;
+  int sync_fd;
+  COGL_TRACE_SCOPED_ANCHOR (MetaRendererNativePostKmsUpdate);
+
+  if (onscreen_native->next_frame == NULL ||
+      onscreen_native->view == NULL ||
+      meta_kms_is_shutting_down (kms))
+    return;
   power_save_mode = meta_monitor_manager_get_power_save_mode (monitor_manager);
   if (power_save_mode == META_POWER_SAVE_ON)
     {
+      unsigned int frames_pending =
+        cogl_onscreen_get_pending_frame_count (onscreen);
+      unsigned int posts_pending;
+      int n_rectangles;
+      int *rectangles;
+
+      g_assert (frames_pending >= onscreen_native->swaps_pending);
+      posts_pending = frames_pending - onscreen_native->swaps_pending;
+      if (posts_pending > 0)
+        return;  /* wait for the next frame notification and then try again */
+
+      frame = clutter_frame_ref (onscreen_native->next_frame);
+      frame_native = meta_frame_native_from_frame (frame);
+      n_rectangles = meta_frame_native_get_damage (frame_native, &rectangles);
+
+      if (onscreen_native->swaps_pending == 0)
+        {
+          if (frame_native)
+            {
+              kms_update = meta_frame_native_steal_kms_update (frame_native);
+              if (kms_update)
+                post_finish_frame (onscreen_native, kms_update);
+            }
+          return;
+        }
+
+      drop_stalled_swap (onscreen);
+      onscreen_native->swaps_pending--;
       kms_update = meta_frame_native_ensure_kms_update (frame_native,
                                                         kms_device);
       meta_kms_update_add_result_listener (kms_update,

@@ -1432,13 +1652,11 @@ meta_onscreen_native_swap_buffers_with_damage (CoglOnscreen *onscreen,
     {
       meta_renderer_native_queue_power_save_page_flip (renderer_native,
                                                        onscreen);
-      clutter_frame_set_result (frame,
-                                CLUTTER_FRAME_RESULT_PENDING_PRESENTED);
       return;
     }

   COGL_TRACE_BEGIN_ANCHORED (MetaRendererNativePostKmsUpdate,
-                             "Meta::OnscreenNative::swap_buffers_with_damage#post_pending_update()");
+                             "Meta::OnscreenNative::try_post_latest_swap#post_pending_update()");

   switch (renderer_gpu_data->mode)
     {

@@ -1453,8 +1671,6 @@ meta_onscreen_native_swap_buffers_with_damage (CoglOnscreen *onscreen,
           kms_update = meta_frame_native_steal_kms_update (frame_native);
           meta_renderer_native_queue_mode_set_update (renderer_native,
                                                       kms_update);
-          clutter_frame_set_result (frame,
-                                    CLUTTER_FRAME_RESULT_PENDING_PRESENTED);
           return;
         }
       else if (meta_renderer_native_has_pending_mode_set (renderer_native))

@@ -1468,8 +1684,6 @@ meta_onscreen_native_swap_buffers_with_damage (CoglOnscreen *onscreen,
           meta_frame_native_steal_kms_update (frame_native);
           meta_renderer_native_post_mode_set_updates (renderer_native);
-          clutter_frame_set_result (frame,
-                                    CLUTTER_FRAME_RESULT_PENDING_PRESENTED);
           return;
         }
       break;

@@ -1485,8 +1699,6 @@ meta_onscreen_native_swap_buffers_with_damage (CoglOnscreen *onscreen,
                                                       kms_update);
           meta_renderer_native_post_mode_set_updates (renderer_native);
-          clutter_frame_set_result (frame,
-                                    CLUTTER_FRAME_RESULT_PENDING_PRESENTED);
           return;
         }
       break;

@@ -1503,13 +1715,6 @@ meta_onscreen_native_swap_buffers_with_damage (CoglOnscreen *onscreen,
   meta_kms_update_set_sync_fd (kms_update, sync_fd);
   meta_kms_device_post_update (kms_device, kms_update,
                                META_KMS_UPDATE_FLAG_NONE);
-
-  clutter_frame_set_result (frame, CLUTTER_FRAME_RESULT_PENDING_PRESENTED);
-  return;
-
-swap_failed:
-  frame_info->flags |= COGL_FRAME_INFO_FLAG_SYMBOLIC;
-  meta_onscreen_native_notify_frame_complete (onscreen);
-  clutter_frame_set_result (frame, CLUTTER_FRAME_RESULT_IDLE);
 }
 gboolean

@@ -1577,11 +1782,11 @@ scanout_result_feedback (const MetaKmsFeedback *kms_feedback,
                          G_IO_ERROR_PERMISSION_DENIED))
     {
       ClutterStageView *view = CLUTTER_STAGE_VIEW (onscreen_native->view);
-      ClutterFrame *next_frame = onscreen_native->next_frame;
-      MetaFrameNative *next_frame_native =
-        meta_frame_native_from_frame (next_frame);
+      ClutterFrame *posted_frame = onscreen_native->posted_frame;
+      MetaFrameNative *posted_frame_native =
+        meta_frame_native_from_frame (posted_frame);
       CoglScanout *scanout =
-        meta_frame_native_get_scanout (next_frame_native);
+        meta_frame_native_get_scanout (posted_frame_native);

       g_warning ("Direct scanout page flip failed: %s", error->message);

@@ -1594,7 +1799,7 @@ scanout_result_feedback (const MetaKmsFeedback *kms_feedback,
   frame_info->flags |= COGL_FRAME_INFO_FLAG_SYMBOLIC;
   meta_onscreen_native_notify_frame_complete (onscreen);
-  meta_onscreen_native_clear_next_fb (onscreen);
+  meta_onscreen_native_clear_posted_fb (onscreen);
 }

 static const MetaKmsResultListenerVtable scanout_result_listener_vtable = {

@@ -1646,13 +1851,24 @@ meta_onscreen_native_direct_scanout (CoglOnscreen *onscreen,
       return FALSE;
     }

+  /* Our direct scanout frame counts as 1, so more than that means we would
+   * be jumping the queue (and post would fail).
+   */
+  if (cogl_onscreen_get_pending_frame_count (onscreen) > 1)
+    {
+      g_set_error_literal (error,
+                           COGL_SCANOUT_ERROR,
+                           COGL_SCANOUT_ERROR_INHIBITED,
+                           "Direct scanout is inhibited during triple buffering");
+      return FALSE;
+    }
+
   renderer_gpu_data = meta_renderer_native_get_gpu_data (renderer_native,
                                                          render_gpu);
   g_warn_if_fail (renderer_gpu_data->mode == META_RENDERER_NATIVE_MODE_GBM);

-  g_warn_if_fail (!onscreen_native->next_frame);
-  onscreen_native->next_frame = clutter_frame_ref (frame);
+  assign_next_frame (onscreen_native, frame);

   meta_frame_native_set_scanout (frame_native, scanout);
   meta_frame_native_set_buffer (frame_native,
@@ -1899,22 +2115,79 @@ meta_onscreen_native_finish_frame (CoglOnscreen *onscreen,
   MetaKmsDevice *kms_device = meta_kms_crtc_get_device (kms_crtc);
   MetaFrameNative *frame_native = meta_frame_native_from_frame (frame);
   MetaKmsUpdate *kms_update;
+  unsigned int frames_pending = cogl_onscreen_get_pending_frame_count (onscreen);
+  unsigned int swaps_pending = onscreen_native->swaps_pending;
+  unsigned int posts_pending = frames_pending - swaps_pending;

-  kms_update = meta_frame_native_steal_kms_update (frame_native);
-  if (!kms_update)
+  onscreen_native->needs_flush |= meta_kms_device_handle_flush (kms_device,
+                                                                kms_crtc);
+
+  if (!meta_frame_native_has_kms_update (frame_native))
     {
-      if (meta_kms_device_handle_flush (kms_device, kms_crtc))
-        {
-          kms_update = meta_kms_update_new (kms_device);
-          meta_kms_update_set_flushing (kms_update, kms_crtc);
-        }
-      else
+      if (!onscreen_native->needs_flush || posts_pending)
         {
           clutter_frame_set_result (frame, CLUTTER_FRAME_RESULT_IDLE);
           return;
         }
     }

+  if (posts_pending && !swaps_pending)
+    {
+      g_return_if_fail (meta_frame_native_has_kms_update (frame_native));
+      g_warn_if_fail (onscreen_native->next_frame == NULL);
+      g_clear_pointer (&onscreen_native->next_frame, clutter_frame_unref);
+      onscreen_native->next_frame = clutter_frame_ref (frame);
+      clutter_frame_set_result (frame, CLUTTER_FRAME_RESULT_PENDING_PRESENTED);
+      return;
+    }
+
+  kms_update = meta_frame_native_steal_kms_update (frame_native);
+
+  if (posts_pending && swaps_pending)
+    {
+      MetaFrameNative *older_frame_native;
+      MetaKmsUpdate *older_kms_update;
+
+      g_return_if_fail (kms_update);
+      g_return_if_fail (onscreen_native->next_frame != NULL);
+      older_frame_native =
+        meta_frame_native_from_frame (onscreen_native->next_frame);
+      older_kms_update =
+        meta_frame_native_ensure_kms_update (older_frame_native, kms_device);
+      meta_kms_update_merge_from (older_kms_update, kms_update);
+      meta_kms_update_free (kms_update);
+      clutter_frame_set_result (frame, CLUTTER_FRAME_RESULT_IDLE);
+      return;
+    }
+
+  if (!kms_update)
+    {
+      kms_update = meta_kms_update_new (kms_device);
+      g_warn_if_fail (onscreen_native->needs_flush);
+    }
+
+  if (onscreen_native->needs_flush)
+    {
+      meta_kms_update_set_flushing (kms_update, kms_crtc);
+      onscreen_native->needs_flush = FALSE;
+    }
+
+  post_finish_frame (onscreen_native, kms_update);
+
+  clutter_frame_set_result (frame, CLUTTER_FRAME_RESULT_PENDING_PRESENTED);
+}
+
+static void
+post_finish_frame (MetaOnscreenNative *onscreen_native,
+                   MetaKmsUpdate      *kms_update)
+{
+  MetaCrtc *crtc = onscreen_native->crtc;
+  MetaKmsCrtc *kms_crtc = meta_crtc_kms_get_kms_crtc (META_CRTC_KMS (crtc));
+  MetaKmsDevice *kms_device = meta_kms_crtc_get_device (kms_crtc);
+  g_autoptr (MetaKmsFeedback) kms_feedback = NULL;

   meta_kms_update_add_result_listener (kms_update,
                                        &finish_frame_result_listener_vtable,
                                        NULL,

@@ -1937,7 +2210,17 @@ meta_onscreen_native_finish_frame (CoglOnscreen *onscreen,
   meta_kms_update_set_flushing (kms_update, kms_crtc);
   meta_kms_device_post_update (kms_device, kms_update,
                                META_KMS_UPDATE_FLAG_NONE);
-
-  clutter_frame_set_result (frame, CLUTTER_FRAME_RESULT_PENDING_PRESENTED);
 }
+
+void
+meta_onscreen_native_discard_pending_swaps (CoglOnscreen *onscreen)
+{
+  MetaOnscreenNative *onscreen_native = META_ONSCREEN_NATIVE (onscreen);
+
+  onscreen_native->swaps_pending = 0;
+
+  g_clear_pointer (&onscreen_native->stalled_frame, clutter_frame_unref);
+  g_clear_pointer (&onscreen_native->next_frame, clutter_frame_unref);
+}
 static gboolean

@@ -2842,6 +3125,8 @@ meta_onscreen_native_dispose (GObject *object)
   meta_onscreen_native_detach (onscreen_native);

   g_clear_pointer (&onscreen_native->next_frame, clutter_frame_unref);
+  g_clear_pointer (&onscreen_native->stalled_frame, clutter_frame_unref);
+  g_clear_pointer (&onscreen_native->posted_frame, clutter_frame_unref);
   g_clear_pointer (&onscreen_native->presented_frame, clutter_frame_unref);

   renderer_gpu_data =


@@ -48,6 +48,8 @@ void meta_onscreen_native_dummy_power_save_page_flip (CoglOnscreen *onscreen);
 gboolean meta_onscreen_native_is_buffer_scanout_compatible (CoglOnscreen *onscreen,
                                                             CoglScanout  *scanout);

+void meta_onscreen_native_discard_pending_swaps (CoglOnscreen *onscreen);
+
 void meta_onscreen_native_set_view (CoglOnscreen     *onscreen,
                                    MetaRendererView *view);


@@ -732,12 +732,18 @@ static gboolean
 dummy_power_save_page_flip_cb (gpointer user_data)
 {
   MetaRendererNative *renderer_native = user_data;
+  GList *old_list =
+    g_steal_pointer (&renderer_native->power_save_page_flip_onscreens);

-  g_list_foreach (renderer_native->power_save_page_flip_onscreens,
+  g_list_foreach (old_list,
                   (GFunc) meta_onscreen_native_dummy_power_save_page_flip,
                   NULL);
-  g_clear_list (&renderer_native->power_save_page_flip_onscreens,
+  g_clear_list (&old_list,
                 g_object_unref);

+  if (renderer_native->power_save_page_flip_onscreens != NULL)
+    return G_SOURCE_CONTINUE;
+
   renderer_native->power_save_page_flip_source_id = 0;

   return G_SOURCE_REMOVE;

@@ -749,6 +755,9 @@ meta_renderer_native_queue_power_save_page_flip (MetaRendererNative *renderer_na
 {
   const unsigned int timeout_ms = 100;

+  if (g_list_find (renderer_native->power_save_page_flip_onscreens, onscreen))
+    return;
+
   if (!renderer_native->power_save_page_flip_source_id)
     {
       renderer_native->power_save_page_flip_source_id =

@@ -1497,6 +1506,26 @@ detach_onscreens (MetaRenderer *renderer)
     }
 }

+static void
+discard_pending_swaps (MetaRenderer *renderer)
+{
+  GList *views = meta_renderer_get_views (renderer);
+  GList *l;
+
+  for (l = views; l; l = l->next)
+    {
+      ClutterStageView *stage_view = l->data;
+      CoglFramebuffer *fb = clutter_stage_view_get_onscreen (stage_view);
+      CoglOnscreen *onscreen;
+
+      if (!COGL_IS_ONSCREEN (fb))
+        continue;
+
+      onscreen = COGL_ONSCREEN (fb);
+      meta_onscreen_native_discard_pending_swaps (onscreen);
+    }
+}
+
 static void
 meta_renderer_native_rebuild_views (MetaRenderer *renderer)
 {

@@ -1507,6 +1536,7 @@ meta_renderer_native_rebuild_views (MetaRenderer *renderer)
   MetaRendererClass *parent_renderer_class =
     META_RENDERER_CLASS (meta_renderer_native_parent_class);

+  discard_pending_swaps (renderer);
   meta_kms_discard_pending_page_flips (kms);
   g_hash_table_remove_all (renderer_native->mode_set_updates);
View file

@ -39,6 +39,8 @@
 #include "tests/meta-wayland-test-driver.h"
 #include "tests/meta-wayland-test-utils.h"

+#define N_FRAMES_PER_TEST 30
+
 typedef struct
 {
   int number_of_frames_left;

@@ -46,12 +48,15 @@ typedef struct
   struct {
     int n_paints;
-    uint32_t fb_id;
+    int n_presentations;
+    int n_direct_scanouts;
+    GList *fb_ids;
   } scanout;

   gboolean wait_for_scanout;

   struct {
+    int scanouts_attempted;
     gboolean scanout_sabotaged;
     gboolean fallback_painted;
     guint repaint_guard_id;

@@ -101,7 +106,7 @@ meta_test_kms_render_basic (void)
   gulong handler_id;

   test = (KmsRenderingTest) {
-    .number_of_frames_left = 10,
+    .number_of_frames_left = N_FRAMES_PER_TEST,
     .loop = g_main_loop_new (NULL, FALSE),
   };
   handler_id = g_signal_connect (stage, "after-update",
@@ -123,7 +128,6 @@ on_scanout_before_update (ClutterStage *stage,
                           KmsRenderingTest *test)
 {
   test->scanout.n_paints = 0;
-  test->scanout.fb_id = 0;
 }

 static void

@@ -135,6 +139,7 @@ on_scanout_before_paint (ClutterStage *stage,
   CoglScanout *scanout;
   CoglScanoutBuffer *scanout_buffer;
   MetaDrmBuffer *buffer;
+  uint32_t fb_id;

   scanout = clutter_stage_view_peek_scanout (stage_view);
   if (!scanout)

@@ -143,8 +148,13 @@ on_scanout_before_paint (ClutterStage *stage,
   scanout_buffer = cogl_scanout_get_buffer (scanout);
   g_assert_true (META_IS_DRM_BUFFER (scanout_buffer));
   buffer = META_DRM_BUFFER (scanout_buffer);
-  test->scanout.fb_id = meta_drm_buffer_get_fb_id (buffer);
-  g_assert_cmpuint (test->scanout.fb_id, >, 0);
+  fb_id = meta_drm_buffer_get_fb_id (buffer);
+  g_assert_cmpuint (fb_id, >, 0);
+
+  test->scanout.fb_ids = g_list_append (test->scanout.fb_ids,
+                                        GUINT_TO_POINTER (fb_id));
+
+  /* Triple buffering, but no higher */
+  g_assert_cmpuint (g_list_length (test->scanout.fb_ids), <=, 2);
 }

 static void
@@ -173,12 +183,12 @@ on_scanout_presented (ClutterStage *stage,
   MetaDeviceFile *device_file;
   GError *error = NULL;
   drmModeCrtc *drm_crtc;
+  uint32_t first_fb_id_expected;

-  if (test->wait_for_scanout && test->scanout.n_paints > 0)
+  if (test->wait_for_scanout && test->scanout.fb_ids == NULL)
     return;

-  if (test->wait_for_scanout && test->scanout.fb_id == 0)
-    return;
+  test->scanout.n_presentations++;

   device_pool = meta_backend_native_get_device_pool (backend_native);

@@ -197,15 +207,41 @@ on_scanout_presented (ClutterStage *stage,
   drm_crtc = drmModeGetCrtc (meta_device_file_get_fd (device_file),
                              meta_kms_crtc_get_id (kms_crtc));
   g_assert_nonnull (drm_crtc);

-  if (test->scanout.fb_id == 0)
-    g_assert_cmpuint (drm_crtc->buffer_id, !=, test->scanout.fb_id);
+  if (test->scanout.fb_ids)
+    {
+      test->scanout.n_direct_scanouts++;
+      first_fb_id_expected = GPOINTER_TO_UINT (test->scanout.fb_ids->data);
+      test->scanout.fb_ids = g_list_delete_link (test->scanout.fb_ids,
+                                                 test->scanout.fb_ids);
+    }
   else
-    g_assert_cmpuint (drm_crtc->buffer_id, ==, test->scanout.fb_id);
+    {
+      first_fb_id_expected = 0;
+    }
+
+  /* The buffer ID won't match on the first frame because switching from
+   * triple buffered compositing to double buffered direct scanout takes
+   * an extra frame to drain the queue. Thereafter we are in direct scanout
+   * mode and expect the buffer IDs to match.
+   */
+  if (test->scanout.n_presentations > 1)
+    {
+      if (first_fb_id_expected == 0)
+        g_assert_cmpuint (drm_crtc->buffer_id, !=, first_fb_id_expected);
+      else
+        g_assert_cmpuint (drm_crtc->buffer_id, ==, first_fb_id_expected);
+    }

   drmModeFreeCrtc (drm_crtc);
   meta_device_file_release (device_file);

-  g_main_loop_quit (test->loop);
+  test->number_of_frames_left--;
+  if (test->number_of_frames_left <= 0)
+    g_main_loop_quit (test->loop);
+  else
+    clutter_actor_queue_redraw (CLUTTER_ACTOR (stage));
 }
 typedef enum

@@ -244,7 +280,9 @@ meta_test_kms_render_client_scanout (void)
   g_assert_nonnull (wayland_test_client);

   test = (KmsRenderingTest) {
+    .number_of_frames_left = N_FRAMES_PER_TEST,
     .loop = g_main_loop_new (NULL, FALSE),
+    .scanout = {0},
     .wait_for_scanout = TRUE,
   };

@@ -270,7 +308,8 @@ meta_test_kms_render_client_scanout (void)
   clutter_actor_queue_redraw (CLUTTER_ACTOR (stage));
   g_main_loop_run (test.loop);

-  g_assert_cmpuint (test.scanout.fb_id, >, 0);
+  g_assert_cmpint (test.scanout.n_presentations, ==, N_FRAMES_PER_TEST);
+  g_assert_cmpint (test.scanout.n_direct_scanouts, ==, N_FRAMES_PER_TEST);

   g_debug ("Unmake fullscreen");
   window = meta_find_window_from_title (test_context, "dma-buf-scanout-test");

@@ -292,10 +331,15 @@ meta_test_kms_render_client_scanout (void)
   g_assert_cmpint (buffer_rect.y, ==, 10);

   test.wait_for_scanout = FALSE;
+  test.number_of_frames_left = N_FRAMES_PER_TEST;
+  test.scanout.n_presentations = 0;
+  test.scanout.n_direct_scanouts = 0;
   clutter_actor_queue_redraw (CLUTTER_ACTOR (stage));
   g_main_loop_run (test.loop);

-  g_assert_cmpuint (test.scanout.fb_id, ==, 0);
+  g_assert_cmpint (test.scanout.n_presentations, ==, N_FRAMES_PER_TEST);
+  g_assert_cmpint (test.scanout.n_direct_scanouts, ==, 0);

   g_debug ("Moving back to 0, 0");
   meta_window_move_frame (window, TRUE, 0, 0);

@@ -307,10 +351,15 @@ meta_test_kms_render_client_scanout (void)
   g_assert_cmpint (buffer_rect.y, ==, 0);

   test.wait_for_scanout = TRUE;
+  test.number_of_frames_left = N_FRAMES_PER_TEST;
+  test.scanout.n_presentations = 0;
+  test.scanout.n_direct_scanouts = 0;
   clutter_actor_queue_redraw (CLUTTER_ACTOR (stage));
   g_main_loop_run (test.loop);

-  g_assert_cmpuint (test.scanout.fb_id, >, 0);
+  g_assert_cmpint (test.scanout.n_presentations, ==, N_FRAMES_PER_TEST);
+  g_assert_cmpint (test.scanout.n_direct_scanouts, ==, N_FRAMES_PER_TEST);

   g_signal_handler_disconnect (stage, before_update_handler_id);
   g_signal_handler_disconnect (stage, before_paint_handler_id);
@@ -364,6 +413,15 @@ on_scanout_fallback_before_paint (ClutterStage *stage,
   if (!scanout)
     return;

+  test->scanout_fallback.scanouts_attempted++;
+
+  /* The first scanout candidate frame will get composited due to triple
+   * buffering draining the queue to drop to double buffering. So don't
+   * sabotage that first frame.
+   */
+  if (test->scanout_fallback.scanouts_attempted < 2)
+    return;
+
   g_assert_false (test->scanout_fallback.scanout_sabotaged);

   if (is_atomic_mode_setting (kms_device))

@@ -401,6 +459,15 @@ on_scanout_fallback_paint_view (ClutterStage *stage,
       g_clear_handle_id (&test->scanout_fallback.repaint_guard_id,
                          g_source_remove);
       test->scanout_fallback.fallback_painted = TRUE;
+      test->scanout_fallback.scanout_sabotaged = FALSE;
+    }
+  else if (test->scanout_fallback.scanouts_attempted == 1)
+    {
+      /* Now that we've seen the first scanout attempt that was inhibited by
+       * triple buffering, try a second frame. The second one should scanout
+       * and will be sabotaged.
+       */
+      clutter_actor_queue_redraw (CLUTTER_ACTOR (stage));
     }
 }

@@ -410,11 +477,11 @@ on_scanout_fallback_presented (ClutterStage *stage,
                                ClutterFrameInfo *frame_info,
                                KmsRenderingTest *test)
 {
-  if (!test->scanout_fallback.scanout_sabotaged)
-    return;
+  if (test->scanout_fallback.fallback_painted)
+    g_main_loop_quit (test->loop);

-  g_assert_true (test->scanout_fallback.fallback_painted);
-  g_main_loop_quit (test->loop);
+  test->number_of_frames_left--;
+  g_assert_cmpint (test->number_of_frames_left, >, 0);
 }

 static void

@@ -443,6 +510,7 @@ meta_test_kms_render_client_scanout_fallback (void)
   g_assert_nonnull (wayland_test_client);

   test = (KmsRenderingTest) {
+    .number_of_frames_left = N_FRAMES_PER_TEST,
     .loop = g_main_loop_new (NULL, FALSE),
   };