Commit graph

2791 commits

Author SHA1 Message Date
Neil Roberts
b34034217a Make it possible to call swap_buffers within a frame event callback
It seems like it would be quite a reasonable design for an application
to immediately paint the buffer and call swap_buffers within the
handler for the sync event. This previously wouldn't work.

When using the GLX winsys, if swap_region is called then it immediately
tries to set the pending notification flag. However, if this is called
from the event callback, then when the callback completes it will
clear the flag again and the pending notification will be lost. This
patch just clears the pending flag before invoking the callback so
that it can be safely queued again.

With any winsys that doesn't directly handle the sync event
notification it would almost work except that it was iterating the
live list of pending events. If the callback causes another event to
be added to this list by issuing a buffer swap then the iteration
would never complete and cogl_poll_dispatch would never return. This
patch just makes it steal the list before iterating so that any
additions will be dispatched by a later call to cogl_poll_dispatch
instead.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 2263b31594900b73900d2ce22cf70c68e7e793c6)
2013-02-03 10:07:47 +01:00
Robert Bragg
8a1353a215 This reverts the first hunk from Jerome's last patch
The first hunk from commit 93b7b4c850dd928bf21ee168a95641a8d631f713
turned out to be redundant because GLX guarantees that configs returned
by glXChooseFBConfig should be sorted with non msaa configs coming
first. The second hunk is required since we use glXGetFBConfigs in that
case which doesn't sort the configs.

I had meant to drop this part of the patch before landing it but forgot.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit b19fcc1869275826e952925af922125daf8a48de)
2013-01-31 21:12:36 +00:00
Neil Roberts
4efd82a3b6 Convert the two SDL examples to use the frame callback
The two SDL examples now throttle their rendering to the
COGL_FRAME_EVENT_SYNC event. Previously the examples would redraw
whenever a mouse motion event was received, but now they additionally
wait for the sync event. This means that if another mouse event comes
immediately after rendering the last frame, they can theoretically
avoid blocking while waiting for the last frame to complete. In practice
however the SDL winsys doesn't support swap events so it will get the
sync event immediately anyway, but it's nice to have the code as an
example and a test.

This patch also changes the mainloop a bit to do the equivalent steps
without the outer main loop which I think makes it a bit easier to
follow.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 97cdd832dded2ebfaa42ee4bc43319cb8648d01b)
2013-01-31 16:56:08 +00:00
Jerome Glisse
1f84b5c9b4 glx do not use multisample visual config for front or pixmap
There is no guarantee that glXGetFBConfigs will return fbconfigs ordered
with non-msaa configs first. This patch makes sure that a non-msaa config
gets chosen.

Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 93b7b4c850dd928bf21ee168a95641a8d631f713)
2013-01-31 12:28:04 +00:00
Owen W. Taylor
98e3b57d0d Add cogl_get_clock_time()
Add an API to get the current time in the time system that Cogl
reports timestamps in. This is to be used to convert timestamps
into a different time system.
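
As a rough illustration (the helper and getter names below are
assumptions based on this description, not part of the patch), the
clock can be sampled and compared against a frame's presentation time:

  #include <cogl/cogl.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Sketch only: assumes cogl_get_clock_time (CoglContext *) returns
   * nanoseconds on the same clock that frame timestamps use, so a
   * presentation time can be compared against "now". */
  static void
  report_frame_age (CoglContext *ctx, CoglFrameInfo *info)
  {
    int64_t now = cogl_get_clock_time (ctx);
    int64_t presented = cogl_frame_info_get_presentation_time (info);

    if (presented != 0)
      printf ("frame presented %.2f ms ago\n",
              (now - presented) / 1000000.0);
  }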

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 9f3735a0c37adcfcffa485f81699b53a4cc0caf8)
2013-01-30 20:09:49 +00:00
Robert Bragg
d12f39d0e6 cogl-crate: use new _add_frame_callback api
This updates cogl-crate to use the new
cogl_onscreen_add_frame_callback() api to use _SYNC events for
throttling.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 47ea52774025b620258e00a32cb873674d0fc721)
2013-01-30 20:09:49 +00:00
Robert Bragg
51c1b3fbff cogl-hello: use new _add_frame_callback api
This updates cogl-hello to use the new
cogl_onscreen_add_frame_callback() api to use _SYNC events for
throttling.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 06d3cb0d99944e0150e30d553d248feb5f049000)
2013-01-30 20:09:49 +00:00
Owen W. Taylor
24733abf68 onscreen: Add CoglFrameInfo and _add_frame_callback() api
Add a CoglFrameInfo object that tracks timing information for frames
that are drawn. We track a frame counter and frame timing information
for each CoglOnscreen. Internally a CoglFrameInfo is automatically
created for each frame, delimited by cogl_onscreen_swap_buffers() or
cogl_onscreen_swap_region() calls.

CoglFrameInfos are delivered to applications via frame event callbacks
that can be registered with a new cogl_onscreen_add_frame_callback()
api. Two initial event types (dispatched on all platforms) have been
defined: a _SYNC event used for throttling the frame rate of
applications and a _COMPLETE event used to signify the end of a frame.
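
A minimal sketch of the intended usage (the struct and callback names
here are hypothetical; the signatures are assumed from this
description):

  #include <cogl/cogl.h>
  #include <stdint.h>

  typedef struct
  {
    int swap_ready;        /* set when it is safe to draw the next frame */
    int64_t frame_counter; /* counter of the last completed frame */
  } AppState;

  static void
  frame_event_cb (CoglOnscreen *onscreen,
                  CoglFrameEvent event,
                  CoglFrameInfo *info,
                  void *user_data)
  {
    AppState *state = user_data;

    if (event == COGL_FRAME_EVENT_SYNC)
      state->swap_ready = 1;
    else if (event == COGL_FRAME_EVENT_COMPLETE)
      state->frame_counter = cogl_frame_info_get_frame_counter (info);
  }

  static void
  setup_frame_events (CoglOnscreen *onscreen, AppState *state)
  {
    /* The returned closure (ignored here) can later be passed to
     * cogl_onscreen_remove_frame_callback (). */
    cogl_onscreen_add_frame_callback (onscreen,
                                      frame_event_cb,
                                      state,
                                      NULL /* destroy notify */);
  }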

Note: This new _add_frame_callback() api makes the
cogl_onscreen_add_swap_complete_callback() api redundant and so it
should be considered deprecated. Since the _add_swap_complete_callback()
api is still experimental api, we will be looking to quickly migrate
users to the new api so we can remove the old api.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 700401667db2522045e4623d78797b17f9184501)
2013-01-30 20:09:49 +00:00
Owen W. Taylor
5ce058c0e5 Prefer OML_sync_control over SGI_video_sync when waiting for swap
When we block waiting for the swap, prefer doing that using
glXWaitForMsc() from OML_sync_control because that returns a system
time value for the precise time of the swap.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 1e8114aabc78b90373d3d5f3f7c0224f8786e399)
2013-01-30 20:09:40 +00:00
Robert Bragg
013548c109 renderer: expose CoglOutputs
This adds a cogl_renderer_foreach_output() function that can be used to
iterate the display outputs for a particular renderer.

This also updates cogl-info to use this new api so it can dump out all
the output information.
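
A sketch of iterating the outputs (the CoglOutput getter names below
are assumptions based on the companion CoglOutput commit, not taken
from this patch):

  #include <cogl/cogl.h>
  #include <stdio.h>

  static void
  print_output (CoglOutput *output, void *user_data)
  {
    (void) user_data;
    printf ("output: %dx%d at +%d+%d, %.2f Hz\n",
            cogl_output_get_width (output),
            cogl_output_get_height (output),
            cogl_output_get_x (output),
            cogl_output_get_y (output),
            cogl_output_get_refresh_rate (output));
  }

  static void
  dump_outputs (CoglRenderer *renderer)
  {
    cogl_renderer_foreach_output (renderer, print_output, NULL);
  }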

Reviewed-by: Owen W. Taylor <otaylor@fishsoup.net>

(cherry picked from commit a2abf4c4c1fd5aeafd761f965d07a0fe9a362afc)
2013-01-30 19:57:22 +00:00
Owen W. Taylor
88d8bd84f2 Add CoglOutput and track for the GLX backend
The CoglOutput object represents one output such as a monitor or
laptop panel, with information about attributes of the output such as
the position of the output within the global coordinate space, and
the refresh rate.

We don't yet publicly export the ability to get output information but
we track it for the GLX backend, where we'll use it to track the refresh
rate.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit d7ef9d8d71488d0e6874f1ffc6e48700d5c82a31)
2013-01-30 19:56:45 +00:00
Neil Roberts
23eca5c793 Support cogl_renderer_get_n_fragment_texture_units() for ARBfp
There is a cogl_renderer_get_n_fragment_texture_units() function which
is documented to return the number of texture units that are
accessible from a fragment program. This just directly returns the
value from GL_MAX_TEXTURE_IMAGE_UNITS which is available in either the
GLSL extensions or the ARBfp extension. Clutter-GST relies on this to
determine whether it can use a program to convert the YUV data on the
GPU.

When the GL3 driver was added in 66c9db993595b this was changed to
only query the value when the GLSL feature is available. Previously it
would always query the value when the GL or GLES2 driver is used. This
change makes sense on master because there is no API for an
application to make its own ARBfp programs so the only way to access
texture units from a program is via GLSL. However, on the 1.14 branch
that change broke clutter-gst when GLSL is disabled because it then
thinks that ARBfp programs can't use multi-texturing.

This patch just changes it to also query the value when ARBfp support
is available.

Note: it's probably not a good idea to apply this patch to master,
but only to the 1.14 branch. On master the function probably needs to
be changed anyway because it is using _COGL_GET_CONTEXT().

Reviewed-by: Robert Bragg <robert@linux.intel.com>
2013-01-30 14:44:53 +00:00
Gheyret Kenji
d3f084d4a1 Updated Uyghur translation
Signed-off-by: Gheyret Kenji <gheyret@gmail.com>
2013-01-30 19:26:17 +09:00
Neil Roberts
1e00ff268e Bind the dummy surface or drawable when current onscreen is destroyed
Similar to commit 2c0cfdefbb9d1 for the SDL2 winsys, the GLX and EGL
window systems need to bind the dummy surface or drawable when the
currently bound onscreen is destroyed so that there will always be a
valid context bound.

Previously I got the idea that this would not be necessary on GLX
because the documentation for glXDestroyDrawable states that the
drawable won't actually be destroyed if it is currently bound until it
becomes unbound. However it doesn't say what happens if the underlying
X window is also destroyed and after testing it seems this causes a
segfault in Mesa in GLX and an XError for EGLX.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 4a464eec8c5b5832b9fd6b69746ab4ab36229182)
2013-01-25 18:21:09 +00:00
Neil Roberts
da7971f6be Don't set GL_TEXTURE_MAX_LEVEL on GLES
GL_TEXTURE_MAX_LEVEL is not supported on GLES so we can't set it. It
looks like Mesa was letting us get away with this but on other drivers
it may cause errors. The enum is not defined in the GLES headers so it
was failing to compile unless the GL driver is also enabled.

The test-texture-mipmap-get-set test is now marked as n/a on GLES2
because it can't support limiting the sampled mipmaps.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit ba51c393818582b058f5f1e66cf8d13835ad10e5)

Conflicts:
	tests/conform/test-conform-main.c
2013-01-25 18:21:09 +00:00
Neil Roberts
9a242832dc Add some defines that are missing on GLES
The GLES2 driver wasn't compiling unless the GL driver is also enabled
because some run-time conditional code was directly using GL-only
defines.

This should also fix compiling using the stock GL headers on OS X
which don't define GL_NUM_EXTENSIONS.

https://bugzilla.gnome.org/show_bug.cgi?id=692420

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 661e1719aa0b95c409c568ec91ea52b8ff90519b)
2013-01-25 18:21:09 +00:00
Neil Roberts
a749c7c1ab Query rectangle tex parameters when creating a foreign texture on GL3
Previously when creating a foreign rectangle texture it would ignore
the passed in texture information and query the texture directly when
using COGL_DRIVER_GL. However this should also work for
COGL_DRIVER_GL3. This patch changes it to check the private feature
flags for the texture querying feature instead of directly checking
the driver value.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 258c98b82027cb5074afe7844ff3954bbe928757)
2013-01-25 18:21:09 +00:00
Neil Roberts
5d6160c751 Replace some #if HAVE_COGL_GL lines with #ifdef
This was generating warnings when the GL driver is disabled.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit f26682dcc04642fed9db959c63d6c6e4261d2148)

Conflicts:
	cogl/cogl-auto-texture.c
2013-01-25 18:21:09 +00:00
Piotr Drąg
795c3fd020 l10n: Add po/POTFILES.skip file
And include the internal copy of glib here to shut up intltool.
2013-01-24 18:36:53 +01:00
Robert Bragg
d521b61a49 egl: support EGL_EXT_buffer_age
This adds support for the EGL_EXT_buffer_age extension which is a
counterpart to the GLX_EXT_buffer_age extension.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 92d869764c03d0bac6b51dac833510c22669ac4a)
2013-01-23 17:58:20 +00:00
Adel Gadllah
860fb00fdc cogl-onscreen: Add buffer_age support
Add a new BUFFER_AGE winsys feature and a get_buffer_age method to
cogl-onscreen that allows the value to be queried.
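
For example (a sketch; only the zero-when-unsupported behaviour is
taken from the note below):

  #include <cogl/cogl.h>

  /* Sketch: an age of 0 means the back buffer contents are undefined
   * (or the feature is unsupported) and the whole frame must be
   * redrawn; otherwise only the regions damaged in the last 'age'
   * frames need repainting. */
  static CoglBool
  can_do_partial_redraw (CoglOnscreen *onscreen)
  {
    int age = cogl_onscreen_get_buffer_age (onscreen);

    return age > 0;
  }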

https://bugzilla.gnome.org/show_bug.cgi?id=669122

Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>

Note: When landing the patch I made some gtk-doc updates and changed
_get_buffer_age to always return an age of 0 if the age feature isn't
supported instead of using _COGL_RETURN_VAL_IF_FAIL. -- Robert Bragg

(cherry picked from commit 427b1038051e9b53a071d8c229b363b075bb1dc0)
2013-01-23 17:58:10 +00:00
Adam Jackson
9fb0cbd45d meta-texture: Fix nonsensical <= on pointers
Comparing the pointed-to value is clearly what was meant.

Found by Coverity.
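
The general shape of this class of bug (hypothetical names; not the
actual meta-texture code):

  /* Hypothetical illustration only. */
  static int
  span_is_inside (const float *coord, const float *limit)
  {
    /* Wrong: compares the two addresses, which is meaningless here. */
    /* return coord <= limit; */

    /* Intended: compare the values the pointers refer to. */
    return *coord <= *limit;
  }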

Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit f676352210fad856ae85962733e488bc1a832411)
2013-01-22 20:11:25 +00:00
Robert Bragg
78bea226d4 Post-release version bump to 1.13.3 2013-01-22 18:44:33 +00:00
Robert Bragg
b33b41e7ab Release 1.13.2 (snapshot) 2013-01-22 18:00:12 +00:00
Robert Bragg
24b064abf7 Updates NEWS for the 1.13.2 release 2013-01-22 18:00:11 +00:00
Patrick Welche
8c319e4bc1 Remove vestiges of libdl / dlfcn.h as cogl uses gmodule.
https://bugzilla.gnome.org/show_bug.cgi?id=691944

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 397e673446e86a9116cb7396ea094e9f8b46986e)
2013-01-22 18:00:11 +00:00
Neil Roberts
9a8a26270c test-write-texture-formats: Add fuzziness to the pixel comparisons
The rounding used when storing 10-bit per component data into an 8-bit
per component texture seems to have changed in recent versions of Mesa
which was causing this test to fail. I've also noticed this failing on
the NVidia binary driver. This patch adds some fuzziness to the
comparison so that it will pass. There is a new test_utils function
called test_utils_compare_pixel_and_alpha which is the same as
test_utils_compare_pixel except that it also compares the alpha
component.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit ce626fb3939b0f200d85ccdf32809608b879212d)
2013-01-22 18:00:11 +00:00
Neil Roberts
364f232507 tests: Mark test_framebuffer_get_bits as only working on GL
It looks like it's not meant to be valid to create a framebuffer with
an alpha-only texture as a render target on GLES. Since the following
Mesa commit, this requirement is now enforced so that the
test_framebuffer_get_bits test fails:

http://cgit.freedesktop.org/mesa/mesa/commit/?id=cf300eaa

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit cfb0859cab843b000f4baa3ca155a245730edcfa)
2013-01-22 18:00:11 +00:00
Neil Roberts
51f3e28c1f bitmap: Don't try to token paste the typenames from stdint.h
Previously the functions for packing and unpacking pixels were
generated by token pasting together a function name along with its
type, like the following:

 _cogl_pack_ ## uint8_t

Then later in cogl-bitmap-conversion.c it would directly refer to the
function names without token pasting.

This wouldn't work however if the system headers define the stdint
types using #defines instead of typedefs because in that case the
function name generated using token pasting would get the expanded
type name but the reference that doesn't use token pasting wouldn't.

This patch adds an extra macro passed to the cogl-bitmap-packing.h
header which just has the type size. That way the function can be
defined like this instead:

 _cogl_pack_ ## 8

That should prevent it from hitting problems with #defined types.
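
A standalone illustration of the failure mode (hypothetical macro
definitions; not the actual Cogl macros):

  /* Hypothetical: a system header could define the stdint name as a
   * macro rather than a typedef. */
  #define uint8_t unsigned char

  /* When the name is built through a helper macro, the argument is
   * expanded before it is pasted ... */
  #define PASTE2(a, b) a ## b
  #define PASTE(a, b)  PASTE2 (a, b)

  /* ... so PASTE (_cogl_pack_, uint8_t) yields "_cogl_pack_unsigned char",
   * which no longer matches a hand-written reference to _cogl_pack_uint8_t
   * (a single token that the preprocessor never expands).  Pasting a bare
   * size avoids the mismatch entirely: */
  #define DEFINE_PACK(size) static void PASTE (_cogl_pack_, size) (void) { }

  DEFINE_PACK (8)   /* always defines _cogl_pack_8 */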

https://bugzilla.gnome.org/show_bug.cgi?id=691945

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit d6b5d7085b004ebd48c1543b820331802395ee63)
2013-01-22 18:00:11 +00:00
Robert Bragg
50005a9364 build: update to build with automake 1.13
This makes autogen.sh look for automake-1.13 and also updates all
Makefile.am files to no longer use the INCLUDES variable which automake
1.13 warns is deprecated by AM_CPPFLAGS.

https://bugzilla.gnome.org/show_bug.cgi?id=690891

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 5de5569e960102afe979a5f2f0403e1defebca62)
2013-01-22 18:00:05 +00:00
Robert Bragg
ebdac3162a tests: flag backface culling failure without NPOT support
This marks that test-backface-culling is currently known to fail without
NPOT texture support. This allows us to do a 1.13 snapshot release before
we find a fix for this.
2013-01-22 17:48:19 +00:00
Robert Bragg
1d31055ddb disable viewport scissor workaround for clear
We have a workaround in Cogl to fix viewport clipping with Mesa Intel
Gen 6 drivers but this was breaking the semantics of
cogl_framebuffer_clear() which should not be affected by viewport
clipping. This makes sure we disable and restore the workaround when
clearing the framebuffer. This fixes Clutter's test-cogl-viewport
conformance test.
2013-01-22 17:48:19 +00:00
Neil Roberts
3a041ef41b Reorder some struct members to avoid padding due to alignment
This tweaks the ordering of some struct members in some of the more
important structs so that the compiler won't insert wasted padding to
avoid breaking the alignment. Some members that were previously
unsigned long have been changed to unsigned int. These members need to
be able to fit in 32-bits to run on 32-bit machines anyway so there's
no point in having them extend to 64-bit on 64-bit machines. This
doesn't affect the public API.
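
A quick illustration of the kind of saving involved (hypothetical
structs and a typical LP64 layout; not actual Cogl types):

  #include <stdio.h>
  #include <stdint.h>

  struct badly_ordered
  {
    uint8_t  flags;      /* 1 byte + 7 bytes padding to align the pointer */
    void    *data;       /* 8 bytes */
    uint8_t  ref_count;  /* 1 byte + 7 bytes tail padding */
  };                     /* sizeof == 24 */

  struct well_ordered
  {
    void    *data;       /* 8 bytes */
    uint8_t  flags;      /* 1 byte */
    uint8_t  ref_count;  /* 1 byte + 6 bytes tail padding */
  };                     /* sizeof == 16 */

  int
  main (void)
  {
    printf ("%zu vs %zu\n",
            sizeof (struct badly_ordered), sizeof (struct well_ordered));
    return 0;
  }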

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit b721af236680005464e39f7f4dd11381d95efb16)
2013-01-22 17:48:19 +00:00
Neil Roberts
7572fedeaa Fix filling the array of texture pointers for sliced textures
In commit 1fa7c0f10a8a0 the sliced texture code which creates the
array of pointers to the texture slices was changed so that the
textures are appended to the end of the array instead of initially
creating the array with the right size upfront and then shrinking the
array on error. However it was then still also setting the size of the
array after creating it so the new textures would actually end up in
an unused part of the array. The part of the array that is used was
left uninitialised so it would crash. This just removes the call to set
the size of the array.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 7df09d505ba28a1a960df867346af67118e96718)
2013-01-22 17:48:19 +00:00
Neil Roberts
82615e292d Don't try to use clip planes on GL3
GL3 has support for clip planes but they are used differently and
involve writing to a builtin output variable in the vertex shader. The
current clip plane code assumes it is only used with a fixed function
driver and tries to directly push to the matrix builtins. This
obviously won't work on GL3 so for now let's just disable clip planes.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 5f621589467ab961f5130590298dc8e26d658a92)
2013-01-22 17:48:19 +00:00
Neil Roberts
988486ac7d framebuffer: Support the GL_RED texture workaround when querying bits
When a component-alpha texture is made using a GL3 context a GL_RED
texture is actually used and a swizzle is set up to hide it. However
if a framebuffer is then bound to that texture then when the bits are
queried this workaround will leak out of the API. To fix this it now
detects the situation and reports the number of red bits as the number
of alpha bits.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 425cfb2675912a2cbcaaaeed7c2196d563948222)
2013-01-22 17:48:19 +00:00
Neil Roberts
520ccba49d Query the framebuffer stencil bits instead of assuming it's global
Previously when the context was initialised Cogl would query the
number of stencil bits and set a private feature flag to mark that it
can use the buffer for clipping if there was at least 3. The problem
with this is that the number of stencil bits returned by
GL_STENCIL_BITS depends on the currently bound framebuffer. This patch
adds an internal function to query the number of stencil bits in a
framebuffer and makes it use that instead when determining whether it
can push the clip using the stencil buffer.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit e928d21516a6c07798655341f4f0f8e3c1d1686c)
2013-01-22 17:48:18 +00:00
Neil Roberts
109e576b1f Add a public cogl_framebuffer_get_depth_bits() function
Cogl publicly exposes the depth buffer state so we might as well have
a function to query the number of depth bits of a framebuffer.
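
Usage is just a getter on the framebuffer, for example (a sketch; the
exact return convention is assumed):

  #include <cogl/cogl.h>

  /* Sketch: true if the framebuffer was actually allocated with any
   * depth storage. */
  static CoglBool
  framebuffer_has_depth (CoglFramebuffer *fb)
  {
    return cogl_framebuffer_get_depth_bits (fb) > 0;
  }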

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 853143eb10387f50f8d32cf09af31b8829dc1e01)
2013-01-22 17:48:18 +00:00
Neil Roberts
0b01c91fc5 framebuffer: Bind the framebuffer before querying the bits
The GL framebuffer driver now makes sure to bind the framebuffer
before counting the number of bits. Previously it would just query the
number of bits for whatever framebuffer happened to be used last.

In addition the virtual for querying the framebuffer bits has been
modified to take a pointer to a structure instead of a separate
pointer to each component. This should make it slightly more efficient
and easier to maintain.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit e9c58b2ba23a7cebcd4e633ea7c3191f02056fb5)
2013-01-22 17:48:18 +00:00
Neil Roberts
41612bfc74 Add a test for getting the component sizes from different fbs
This adds a test which creates two offscreen framebuffers, one with
just an alpha component texture and the other with a full RGBA
texture. The bit sizes of both framebuffers are then checked to verify
that they either have or haven't got bits for the RGB components.

The test currently fails because the framebuffer functions don't bind
the framebuffer before querying so they just query whichever
framebuffer happened to be used last.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 7ca01373efe908efc9f18f2cb7f4a51c274ef677)
2013-01-22 17:48:18 +00:00
Neil Roberts
fe3aa8b8b3 Query glX* functions before getting the context to fix GL3 driver
The GL3 context is created using the glXCreateContextAttribs function
which is part of the GLX_ARB_create_context extension. However
previously the function pointers from GLX extensions were only
retrieved once the GL context is created. That meant that the GL3
context creation function would always assume that the extension is
not supported so it would always fail.

This patch changes it to query the functions when the renderer is set
up instead. The base winsys feature flags that are determined while
querying the functions are stored in a member of CoglGLXRenderer.
These are then copied to the CoglContext when it is initialised.

The spec for glXGetProcAddress says that the functions returned are
context-independent. That implies that it is safe to call it without
binding a context although that is not explicitly stated as far as I
can tell. A bit of googling finds this DRI documentation which says it
can be used without a context:

http://dri.freedesktop.org/wiki/glXGetProcAddressNeverReturnsNULL

And also this code sample:

http://www.opengl.org/wiki/Tutorial:_OpenGL_3.0_Context_Creation_%28GLX%29

One point that makes me concerned that this might not always work in
practice is that the code in SDL2 to create a GL3 context first
creates a dummy GL2 context in order to have something bound before it
calls glXGetProcAddress. I think this may just be a misunderstanding
based on how wglGetProcAddress works however.
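
The pattern being described, as a sketch against plain GLX (not Cogl's
internal renderer code):

  #include <GL/glx.h>
  #include <string.h>

  /* Sketch: look up the ARB entry point before any context exists;
   * glXGetProcAddress is specified to return context-independent
   * pointers. */
  typedef GLXContext (*CreateContextAttribsFn) (Display *, GLXFBConfig,
                                                GLXContext, Bool,
                                                const int *);

  static CreateContextAttribsFn
  get_create_context_attribs (Display *dpy, int screen)
  {
    const char *exts = glXQueryExtensionsString (dpy, screen);

    if (exts == NULL || strstr (exts, "GLX_ARB_create_context") == NULL)
      return NULL;

    return (CreateContextAttribsFn)
      glXGetProcAddress ((const GLubyte *) "glXCreateContextAttribsARB");
  }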

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 04a7aca9a98e84e43ac5559305a1358112902e30)
2013-01-22 17:48:18 +00:00
Neil Roberts
1e6ec66330 Add a conformance test for some wrap modes on a rectangle texture
This adds a conformance test which renders a rectangle texture using
the two wrap modes clamp-to-edge and repeat. It then verifies that the
correct region of the texture is drawn for the texture coordinates
that are > 1.0.

The test currently always fails. The cogl_framebuffer_draw_rectangle
function is documented to always take normalized texture coordinates
regardless of the coordinate system of the texture. This works
correctly if all of the texture coordinates are in the range [0.0,1.0]
because cogl-primitives uses a different code path in that case.
However if the multiple-quad code path is taken then the coordinates
actually need to be un-normalized for it to work.

There is a comment in cogl_meta_texture_foreach_in_region() which
implies that the incoming coordinates should always be normalized.
The documentation for the callback says that the resulting sub-texture
coordinates will always be in the coordinate system of the low-level
texture. However it doesn't work out like this because the meta
texture function uses the span iterating function which always returns
normalized coordinates. It looks like there needs to be some more
conversions going on somewhere.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit d2059bb32b8015060e10f41dbbb68d4230b47ddb)
2013-01-22 17:48:18 +00:00
Neil Roberts
671275ba36 Also flip the virtual coordinates when iterating spans
_cogl_texture_spans_foreach_in_region first swaps over the texture
coordinates if they are flipped so that it can always iterate in a
positive direction. It sets a flag so that it will remember that the
coordinates are flipped. Before invoking the callback it is meant to
reflip the coordinates so that the callee doesn't need to be aware of
the flipping. However it was only flipping the sub-texture coordinates
and not the virtual coordinates. This was causing sliced textures to
draw their slice rectangles with the wrong geometry.
test-backface-culling was failing because of this.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit e7338a1e09cb22151374aefa6f0bb58485af9189)
2013-01-22 17:48:18 +00:00
Neil Roberts
a2aa04f219 texture-2d-slice: Fix the foreach_sub_texture_in_region implementation
There were a few problems with the sub texture iterating code of
sliced textures which were causing some conformance tests to fail when
NPOT textures are disabled:

• The spans are stored in un-normalized coordinates and the
  coordinates passed to the foreach function are normalized. The
  function was trying to un-normalize them before passing them to the
  span iterator code but it was using the wrong factor which was
  causing it to actually doubly normalize them.

• The shim function to renormalize the coordinates before passing them
  to the callback was renormalizing the sub-texture coordinates
  instead of the virtual coordinates. The sub-texture coordinates are
  already in the right scale for whatever is the underlying texture so
  we don't need to touch them. Instead we need to normalize the
  virtual coordinates because these are coming from the un-normalized
  coordinates that we passed to the span iterating code.

• The normalize factors passed to the span iterating code were always 1.
  The code uses this normalizing factor to round the incoming
  coordinates to the nearest multiple of a full texture. It divides
  the coordinates by the factor rather than multiplying so it looks
  like we should be passing the virtual texture size here.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit c9773566b0ec0a17b34c440090529de8cff9609e)
2013-01-22 17:48:18 +00:00
Robert Bragg
2c0d48324f texture: Adds cogl_texture_set_data convenience api
This adds a cogl_texture_set_data function that is basically just a
convenience wrapper around cogl_texture_set_region. In the common case
where you want to upload the full contents of a mipmap level, though,
this api takes 4 fewer arguments (6 in total) so it's a bit simpler.
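
A sketch of the simpler call (the parameter order below is an
assumption; only the argument count comes from this description):

  #include <cogl/cogl.h>
  #include <stdint.h>

  /* Sketch only: uploads a whole mipmap level in one go. */
  static CoglBool
  upload_level0 (CoglTexture *texture,
                 const uint8_t *pixels,
                 int rowstride,
                 CoglError **error)
  {
    return cogl_texture_set_data (texture,
                                  COGL_PIXEL_FORMAT_RGBA_8888,
                                  rowstride,
                                  pixels,
                                  0 /* mipmap level */,
                                  error);
  }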

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit e651dbdc4e4f03016a3dee513e3680270a4a9142)
2013-01-22 17:48:17 +00:00
Robert Bragg
73e8a6d7ce Allow lazy texture storage allocation
Consistent with how we lazily allocate framebuffers this patch allows us
to instantiate textures but still specify constraints and requirements
before allocating storage so that we can be sure to allocate the most
appropriate/efficient storage.

This adds a cogl_texture_allocate() function that is analogous to
cogl_framebuffer_allocate() which can optionally be called to explicitly
allocate storage and catch any errors. If this function isn't used
explicitly then Cogl will implicitly ensure textures are allocated
before the storage is needed.

It is generally recommended to rely on lazy storage allocation or at
least perform explicit allocation as late as possible so Cogl can be
fully informed about the best way to allocate storage.
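
A sketch of the explicit-allocation path (error handling assumed to
follow the usual CoglError convention):

  #include <cogl/cogl.h>
  #include <glib.h>

  /* Sketch: allocate explicitly so failures can be handled here rather
   * than at the point the storage is first needed; if this is never
   * called, Cogl allocates implicitly as described above. */
  static CoglBool
  ensure_texture_storage (CoglTexture *texture)
  {
    CoglError *error = NULL;

    if (!cogl_texture_allocate (texture, &error))
      {
        g_warning ("texture allocation failed: %s", error->message);
        cogl_error_free (error);
        return FALSE;
      }

    return TRUE;
  }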

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 1fa7c0f10a8a03043e3c75cb079a49625df098b7)

Note: This reverts the cogl_texture_rectangle_new_with_size API change
that dropped the CoglError argument and keeps the semantics of
allocating the texture immediately. This is because Mutter currently
uses this API so we will probably look at updating this later once
we have a corresponding Mutter patch prepared. The other API changes
were kept since they only affected experimental api.
2013-01-22 17:48:17 +00:00
Robert Bragg
5a814e386a texture: add width/height members to base CoglTexture
There was a lot of redundancy in how we tracked the width and height of
different texture types which is greatly simplified by adding width and
height members to CoglTexture directly and removing the get_width and
get_height vfuncs from CoglTextureVtable.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 3236e47723e4287d5e0023f29083521aeffc75dd)
2013-01-22 17:48:17 +00:00
Robert Bragg
0850eea162 Move _cogl_texture_get_gl_format to -texture-gl.c
This moves the _cogl_texture_get_gl_format function from cogl-texture.c
to cogl-texture-gl.c and renames it _cogl_texture_gl_get_format.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit f8deec01eff7d8d9900b509048cf1ff1c86ca879)
2013-01-22 17:48:17 +00:00
Robert Bragg
a57195d16d framebuffer: split out GL read_pixels code
This moves the direct use of GL in cogl-framebuffer.c for handling
cogl_framebuffer_read_pixels_into_bitmap() into
driver/gl/cogl-framebuffer-gl.c and adds a
->framebuffer_read_pixels_into_bitmap vfunc to CoglDriverVtable.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 2f893054d6754e6bc7983f061b27c7858f1a593c)
2013-01-22 17:48:17 +00:00
Robert Bragg
36c85da3b8 Remove cogl-internal.h
This removes cogl-internal.h in favour of using cogl-private.h. Some
things in cogl-internal.h were moved to driver/gl/cogl-util-gl-private.h,
and the _cogl_gl_error_to_string function, whose prototype was moved from
cogl-internal.h to cogl-util-gl-private.h, has had its implementation
moved from cogl.c to cogl-util-gl.c.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 01cc82ece091aa3bec4c07fdd6bc9e5135fca573)
2013-01-22 17:48:17 +00:00