Commit graph

208 commits

Robert Bragg
6f480c7530 texture: remove _cogl_texture_prepare_for_upload
This removes the GL-centric _cogl_texture_prepare_for_upload api from
cogl-texture.c and instead adds a _cogl_bitmap_convert_for_upload() api
which everything now uses. GL specific code that needed the gl
internal/format/type enums returned by _cogl_texture_prepare_for_upload
now uses ->pixel_format_to_gl directly.

Since there was a special case optimization in
cogl_texture_new_from_file that aimed to avoid copying the temporary
bitmap that's created for the given file and to allow conversions to
happen in-place, the new _cogl_bitmap_convert_for_upload() api supports
converting in place depending on a 'can_convert_in_place' argument.

This ability to convert bitmaps in-place has been integrated across the
different components as appropriate.

In updating cogl-texture-2d-sliced.c it was also possible to remove a
number of other GL specific parts of how spans are set up.
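
A rough sketch of the new call pattern (the exact internal signature is
not quoted here; the arguments below are assumptions for illustration
only):

    /* Hypothetical caller: convert a bitmap into a format the driver can
     * upload, optionally allowing the conversion to happen in-place. */
    CoglBitmap *upload_bmp =
      _cogl_bitmap_convert_for_upload (bmp,
                                       internal_format,
                                       TRUE, /* can_convert_in_place */
                                       &error);
    if (upload_bmp == NULL)
      return FALSE; /* the CoglError has been set */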

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit e190dd23c655da34b9c5c263a9f6006dcc0413b0)

Conflicts:
	cogl/cogl-auto-texture.c
	cogl/cogl.symbols
2013-07-29 16:31:44 +01:00
Neil Roberts
2f4d66f950 Don't create a layer when enabling texture coordinate attributes
When a primitive is drawn with an attribute that contains texture
coordinates Cogl will fetch the corresponding layer in order to
determine the unit number. However, if the pipeline didn't actually
have a layer it would end up redundantly creating one. It's probably
not a good idea to be modifying the pipeline while flushing the
attributes state, so this patch makes it pass the no-create flag to the
get_layer function and then skip enabling the attribute if the
layer didn't already exist.

https://bugzilla.gnome.org/show_bug.cgi?id=702570

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 7507ad1a55a2aeb5beb8c0e3343e1e1f2805ddde)
2013-07-01 13:36:56 +01:00
Neil Roberts
2933423ca7 Don't generate GLSL for the point size for default pipelines
Previously on GLES2, where there is no builtin point size uniform, we
would always add a line to the vertex shader to write to the builtin
point size output, because when generating the shader it is not
possible to determine whether the pipeline will be used to draw points
or not. This patch changes it so that the default point size is 0.0f,
which is documented to have undefined results when drawing points.
That way we can avoid adding the point size code to the shader in that
case. The assumption is that any application that is drawing points
will probably have explicitly set the point size on the pipeline
anyway, so it is not a big deal to change the default size from 1.0f.

This adds a new pipeline state flag to track whether the point size is
non-zero. This needs to be its own state because altering it needs to
cause a different shader to be added to the pipeline cache. The state
flags that affect the vertex shader have been changed from a constant
to a runtime function because they will be different depending on
whether there is a builtin point size uniform.

There is also a unit test to ensure that changing the point size does
or doesn't generate a new shader depending on the values.
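
As a usage sketch, an application drawing points would now be expected
to set a size explicitly rather than rely on the old 1.0f default:

    CoglPipeline *pipeline = cogl_pipeline_new (ctx);

    /* With a default of 0.0f, drawing points without setting a size has
     * undefined results, so set one explicitly. */
    cogl_pipeline_set_point_size (pipeline, 5.0f);

    cogl_framebuffer_draw_primitive (fb, pipeline, point_primitive);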

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit b2eba06e16b587acbf5c57944a70ceccecb4f175)

Conflicts:
	cogl/cogl-pipeline-private.h
	cogl/cogl-pipeline-state-private.h
	cogl/cogl-pipeline-state.c
	cogl/cogl-pipeline.c
2013-06-21 14:18:37 +01:00
Neil Roberts
534e535a28 Use the Wayland embedded linked list implementation instead of BSD's
This removes cogl-queue.h and adds a copy of Wayland's embedded list
implementation. The advantage of the Wayland model is that it is much
simpler and so it is easier to follow. It also doesn't require
defining a typedef for every list type.

The downside is that there is only one list type which is a
doubly-linked list where the head has a pointer to both the beginning
and the end. The BSD implementation has many more combinations, some of
which we were taking advantage of to reduce the size of critical
structs where we didn't need a pointer to the end of the list.

The corresponding changes to uses of cogl-queue.h are:

• COGL_STAILQ_* was used for the onscreen list of events and dirty
  notifications. This makes the size of the CoglContext grow by one
  pointer.

• COGL_TAILQ_* was used for fences.

• COGL_LIST_* was used for CoglClosures. In this case the list head now
  has an extra pointer which means CoglOnscreen will grow by the size of
  three pointers, but this doesn't seem like a particularly important
  struct to optimise for size anyway.

• COGL_LIST_* was used for the list of foreign GLES2 offscreens.

• COGL_TAILQ_* was used for the list of sub stacks in a
  CoglMemoryStack.

• COGL_LIST_* was used to track the list of layers that haven't had
  code generated yet while generating a fragment shader for a
  pipeline.

• COGL_LIST_* was used to track the pipeline hierarchy in CoglNode.

The last part is a bit more controversial because it increases the
size of CoglPipeline and CoglPipelineLayer by one pointer in order to
have the redundant tail pointer for the list head. Normally we try to
be very careful about the size of the CoglPipeline struct. Because
CoglPipeline is slice-allocated, this effectively ends up adding two
pointers to the size because GSlice rounds up to the size of two
pointers.
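
For reference, the model being copied is Wayland's wl_list, which embeds
the list node directly in the struct being tracked (the names below are
Wayland's, not Cogl's renamed copy; an_item/use_item are placeholders):

    struct item {
      struct wl_list link;   /* node embedded in the item, no extra alloc */
      int payload;
    };

    struct wl_list head;
    wl_list_init (&head);                  /* head points at both ends */
    wl_list_insert (&head, &an_item->link);

    struct item *pos;
    wl_list_for_each (pos, &head, link)
      use_item (pos);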

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 13abf613b15f571ba1fcf6d2eb831ffc6fa31324)

Conflicts:
	cogl/cogl-context-private.h
	cogl/cogl-context.c
	cogl/driver/gl/cogl-pipeline-fragend-glsl.c
	doc/reference/cogl-2.0-experimental/Makefile.am
2013-06-13 13:45:47 +01:00
Neil Roberts
ed510dbe6d Use a GList instead of a BSD list for CoglPipelineSnippetList
Previously CoglPipelineSnippetList was using the BSD embedded list
type with a mini struct to combine the list node with a pointer to the
snippet. This is effectively equivalent to just using a GList so we
might as well do that. This will help if we eventually want to get rid
of cogl-queue.h.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 54a168f3c7829c427d54ab517533bb9f7384d022)
2013-06-13 13:45:46 +01:00
Neil Roberts
6d51a18e7c Add support for per-vertex point sizes
This adds a new function to enable per-vertex point size on a
pipeline. This can be set with
cogl_pipeline_set_per_vertex_point_size(). Once enabled the point size
can be set either by drawing with an attribute named
'cogl_point_size_in' or by writing to the 'cogl_point_size_out'
builtin from a snippet.

There is a feature flag which must be checked for before using
per-vertex point sizes. This will only be set on GL >= 2.0 or on GLES
2.0. GL will only let you set a per-vertex point size from GLSL by
writing to gl_PointSize. This is only available in GL2 and not in the
older GLSL extensions.

The per-vertex point size has its own pipeline state flag so that it
can be part of the state that affects vertex shader generation.

Having to enable the per-vertex point size with a separate function is
a bit awkward. Ideally it would work like the color attribute where
you can just set it for every vertex in your primitive with
cogl_pipeline_set_color or set it per-vertex by just using the
attribute. This is harder to get working with the point size because
we need to generate a different vertex shader depending on what
attributes are bound. I think if we wanted to make this work
transparently we would still want to internally have a pipeline
property describing whether the shader was generated with per-vertex
support so that it would work with the shader cache correctly.
Potentially we could make the per-vertex property internal and
automatically make a weak pipeline whenever the attribute is bound.
However we would then also need to automatically detect when an
application is writing to cogl_point_size_out from a snippet.
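
A hedged usage sketch (the feature id and the trailing error argument
are assumptions based on the description above):

    if (cogl_has_feature (ctx, COGL_FEATURE_ID_PER_VERTEX_POINT_SIZE))
      cogl_pipeline_set_per_vertex_point_size (pipeline, TRUE, NULL);

    /* Per-vertex sizes then come from an attribute named
     * 'cogl_point_size_in', one float per vertex. */
    CoglAttribute *size_attr =
      cogl_attribute_new (attribute_buffer,
                          "cogl_point_size_in",
                          sizeof (float), /* stride */
                          0,              /* offset */
                          1,              /* n_components */
                          COGL_ATTRIBUTE_TYPE_FLOAT);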

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 8495d9c1c15ce389885a9356d965eabd97758115)

Conflicts:
	cogl/cogl-context.c
	cogl/cogl-pipeline-private.h
	cogl/cogl-pipeline.c
	cogl/cogl-private.h
	cogl/driver/gl/cogl-pipeline-progend-fixed.c
	cogl/driver/gl/gl/cogl-pipeline-progend-fixed-arbfp.c
2013-06-07 16:53:29 +01:00
Robert Bragg
eb7fafe700 tests: Adds our first white-box unit test
This adds a white-box unit test that verifies that GL_BLEND is disabled
when drawing an opaque rectangle, enabled when drawing a transparent
rectangle and then disabled again when drawing a transparent rectangle
but with a blend string that effectively disables blending.

This shares the test utilities and launcher infrastructure we are using
for conformance tests so we get consistent reporting and so unit tests
will be run against a range of different drivers.

This adds a --enable-unit-tests configure option which is enabled by
default but if disabled will make all UNIT_TESTS() into static inline
functions that we should expect the compiler to discard since they won't
be referenced by anything.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 9047cce06bbf9051ec77e622be2fdbb96ed767a8)
2013-06-06 21:27:16 +01:00
Robert Bragg
8f9151303d pipeline: improve real_blend_enable checks
Since _cogl_pipeline_update_blend_enable() can sometimes show up quite
high in profiles, instead of calling
_cogl_pipeline_update_blend_enable() whenever we change pipeline state
that may affect blending we now just set a dirty flag. When we flush
a pipeline we check this dirty flag and, if it is set, lazily calculate
whether blending really needs to be enabled.

Since it turns out we were too optimistic in assuming most GL drivers
would recognize blending with ADD(src,0) is equivalent to disabling
GL_BLEND we now check this case ourselves so we can always explicitly
disable GL_BLEND if we know we don't need blending.

This introduces the idea of an 'unknown_color_alpha' boolean to the
pipeline flush code which is set whenever we can't guarantee that the
color attribute is opaque. For example this is set whenever a user
specifies a color attribute with 4 components when drawing a primitive.
This boolean needs to be cached along with every pipeline because
pipeline::real_blend_enabled depends on this and so we need to also call
_cogl_pipeline_update_blend_enable() if the status of this changes.

Incidentally, with this patch we no longer use
_cogl_pipeline_set_blend_enable() internally. For now the internal api
hasn't been removed though, since we might want to consider re-purposing
it as a public api: it will no longer conflict with our own internal
state tracking and could provide a more convenient way to disable
blending than setting a blend string.
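
The general shape of the change is the usual dirty-flag pattern,
sketched here generically rather than as the actual Cogl code:

    /* Whenever state that might affect blending changes, just note it: */
    pipeline->dirty_real_blend_enable = TRUE;

    /* Only when flushing do the comparatively expensive check: */
    if (pipeline->dirty_real_blend_enable)
      {
        pipeline->real_blend_enable =
          blend_state_requires_blending (pipeline) ||
          unknown_color_alpha;
        pipeline->dirty_real_blend_enable = FALSE;
      }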

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit ab2ae18f3207514c91fa6fd9f2d3f2ed93a86497)
2013-06-06 21:27:09 +01:00
Daniel Stone
ea7d3b8476 Add fence API
cogl_framebuffer_add_fence creates a synchronisation fence, which will
invoke a user-specified callback when the GPU has finished executing all
commands provided to it up to that point in time.

Support is currently provided for GL 3.x's GL_ARB_sync extension, and
EGL's EGL_KHR_fence_sync (when used with OpenGL ES).
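
A hedged usage sketch (the callback type and exact signature are
assumptions based on the description, not copied from the header):

    static void
    on_fence_complete (CoglFence *fence, void *user_data)
    {
      /* Everything issued before the fence was added has now executed
       * on the GPU. */
      g_print ("frame work complete\n");
    }

    /* ...later, while building a frame: */
    cogl_framebuffer_add_fence (fb, on_fence_complete, NULL);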

Signed-off-by: Daniel Stone <daniel@fooishbar.org>
Reviewed-by: Neil Roberts <neil@linux.intel.com>
Reviewed-by: Robert Bragg <robert@linux.intel.com>

https://bugzilla.gnome.org/show_bug.cgi?id=691752

(cherry picked from commit e6d37470da9294adc1554c0a8c91aa2af560ed9f)
2013-05-28 21:36:03 +01:00
Robert Bragg
2fa7b5573d gl: ensure depth isn't masked during clear
If a pipeline that disables depth writing has been flushed and we then
try to clear the framebuffer with cogl_framebuffer_clear4f, passing
COGL_BUFFER_BIT_DEPTH, we need to make sure that depth writing is
re-enabled before issuing the glClear call. We also need to make sure
that when the next primitive is flushed we re-check what state the
depth mask should be in.
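
In GL terms the fix amounts to roughly the following (illustrative, not
the actual driver code):

    /* glClear respects glDepthMask, so depth writing must be re-enabled
     * before clearing the depth buffer... */
    glDepthMask (GL_TRUE);
    glClear (GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
    /* ...and the cached depth-mask state must be marked dirty so the
     * next primitive flush re-applies the pipeline's setting. */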

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 3cf497042897d1aa6918bc55b71a36ff67e560b9)
2013-03-06 16:45:31 +00:00
Neil Roberts
a131b697d9 Add the layer's sampler and uniform declarations at the start
Previously the sampler uniform declarations such as cogl_sampler0 were
generated by walking the list of layers in the shader state. This had
two problems. Firstly it would only generate the declarations for
layers that have been referenced. If a layer has a combine mode of
replace then the samplers from previous layers couldn't be used by
custom snippets. Secondly it meant that the samplers couldn't be
referenced by functions in the declarations sections because the
samplers are declared too late.

This patch fixes it to generate the layer declarations in the backend
start function using all of the layers on the pipeline instead. In
addition it adds the sampler declarations to the vertex shader as they
were previously missing.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 1824df902bbb9995cae6ffb7a413913f2df35eef)

Conflicts:
	cogl/driver/gl/cogl-pipeline-fragend-glsl.c
	cogl/driver/gl/cogl-pipeline-vertend-glsl.c
2013-02-27 15:53:43 +00:00
Neil Roberts
956d39ac30 Add fragment and vertex snippet hooks for global declarations
This adds hook points to add global function and variable declarations
to either the fragment or vertex shader. The declarations can then be
used by subsequent snippets. Only the ‘declarations’ string of the
snippet is used and the code is directly put in the global scope near
the top of the shader.

The reason this is necessary rather than just adding a normal snippet
with the declarations is that for the other hooks Cogl assumes that
the snippets are independent of each other. That means if a snippet
has a replace string then it will assume that it doesn't even need to
generate the code for earlier hooks which means the global
declarations would be lost.
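
As a usage sketch (the hook name here follows Cogl's COGL_SNIPPET_HOOK_*
convention but is an assumption; check cogl-snippet.h for the exact
spelling):

    /* Declare a helper that later fragment snippets can call.  Only the
     * declarations string is used for this hook. */
    CoglSnippet *globals =
      cogl_snippet_new (COGL_SNIPPET_HOOK_FRAGMENT_GLOBALS,
                        "float\n"
                        "my_luminance (vec3 color)\n"
                        "{\n"
                        "  return dot (color, vec3 (0.299, 0.587, 0.114));\n"
                        "}\n",
                        NULL /* post string unused */);
    cogl_pipeline_add_snippet (pipeline, globals);
    cogl_object_unref (globals);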

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit ebb82d5b0bc30487b7101dc66b769160b40f92ca)
2013-02-27 14:43:55 +00:00
Neil Roberts
fc86d0e12e win32: Minor build fixes for building for win32
This fixes some minor errors and warnings that were preventing Cogl
from building with mingw32:

• cogl-framebuffer-gl.c was not including cogl-texture-private.h.
  Presumably something else ends up including that when building for
  GLX.

• The WGL winsys was not including cogl-error-private.h

• A call to strsplit in the WGL winsys was wrong.

• For some reason the test-wrap-rectangle-textures test was trying to
  include the GDKPixbuf header.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 5380343399f834d9f96ca3b137d49c9c2193900a)
2013-02-21 15:20:55 +00:00
Neil Roberts
29983a7e2c buffer: Don't set the invalidate hint when requesting read access
glMapBufferRange is documented to fail with GL_INVALID_OPERATION if
GL_MAP_INVALIDATE_BUFFER_BIT is set as well as GL_MAP_READ_BIT. I
guess this makes sense when only read access is requested because
there would be no point in reading back uninitialised data. However,
Clutter requests read/write access with the discard hint when
rendering to a CoglBitmap with Cairo. The data is new so the discard
hint makes sense but it also needs read access so that it can read
back the data it just wrote for blending.

This patch works around the GL restriction by skipping the discard
hints if read access is requested. If the buffer discard hint is set
along with read access it will recreate the buffer store as an
alternative way to discard the buffer as it does in the case where the
GL_ARB_map_buffer_range extension is not supported.

https://bugzilla.gnome.org/show_bug.cgi?id=694164

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 986675d6043e8701f2d65415cf72ffc91734debd)
2013-02-19 15:03:30 +00:00
Neil Roberts
f79e78f648 Don't #ifdef the call to glDiscardFramebuffer
When Cogl is compiled with support for both the GL and GLES drivers it
only includes the GL header and not the GLES header. That means in
that case it would not compile in the code for the
GL_EXT_discard_framebuffer extension even though it could be used on
the GLES driver. This patch makes it use the standard GL_COLOR,
GL_STENCIL etc. names instead of the _EXT suffixed ones and
manually defines them if we are using the GLES headers. That way the
discard code can be used unconditionally.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 59c30292d0f3c28d6e0e08bc5bf3b4b10545d856)
2013-02-13 12:05:26 +00:00
Neil Roberts
7d9e075917 Bind the framebuffer before calling glDiscardFramebuffer
This patch just adds a call to _cogl_framebuffer_flush_state to ensure
the correct framebuffer is bound before discarding its buffers.
Previously it would presumably just discard the buffers of whatever
framebuffer happened to be used last.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 37c390a5b33d4f65ff6c834e9be2f8de716635ee)
2013-02-13 12:05:22 +00:00
Neil Roberts
1e39819c49 Fix a clear of an array allocated with alloca which had the wrong size
The array allocated for storing the difference flags for each layer in
cogl-pipeline-opengl.c was being cleared with the size of a pointer
instead of the size actually allocated for the array. Presumably this
would mean that if there is more than one layer it wouldn't clear the
array properly.

Also the size of the array was slightly wrong because it was allocating
the size of a pointer for each layer instead of the size of an
unsigned long.
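
In other words this was the classic sizeof-a-pointer mistake; a minimal
illustration (not the actual Cogl code):

    unsigned long *differences =
      alloca (sizeof (unsigned long) * n_layers);

    /* Broken: clears only sizeof (unsigned long *) bytes, i.e. one
     * pointer's worth, regardless of how many layers there are. */
    memset (differences, 0, sizeof (differences));

    /* Correct: clear the whole allocation. */
    memset (differences, 0, sizeof (unsigned long) * n_layers);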

This was originally reported by Jasper St. Pierre on #clutter.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 1e134dd7cd5317651be158a483c7cb2723ce8869)
2013-02-08 12:20:32 +00:00
Neil Roberts
da7971f6be Don't set GL_TEXTURE_MAX_LEVEL on GLES
GL_TEXTURE_MAX_LEVEL is not supported on GLES so we can't set it. It
looks like Mesa was letting us get away with this but on other drivers
it may cause errors. The enum is not defined in the GLES headers so it
was failing to compile unless the GL driver is also enabled.

The test-texture-mipmap-get-set test is now marked as n/a on GLES2
because it can't support limiting the sampled mipmaps.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit ba51c393818582b058f5f1e66cf8d13835ad10e5)

Conflicts:
	tests/conform/test-conform-main.c
2013-01-25 18:21:09 +00:00
Neil Roberts
5d6160c751 Replace some #if HAVE_COGL_GL lines with #ifdef
This was generating warnings when the GL driver is disabled.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit f26682dcc04642fed9db959c63d6c6e4261d2148)

Conflicts:
	cogl/cogl-auto-texture.c
2013-01-25 18:21:09 +00:00
Neil Roberts
82615e292d Don't try to use clip planes on GL3
GL3 has support for clip planes but they are used differently and
involve writing to a builtin output variable in the vertex shader. The
current clip plane code assumes it is only used with a fixed function
driver and tries to directly push to the matrix builtins. This
obviously won't work on GL3 so for now let's just disable clip planes.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 5f621589467ab961f5130590298dc8e26d658a92)
2013-01-22 17:48:19 +00:00
Neil Roberts
988486ac7d framebuffer: Support the GL_RED texture workaround when querying bits
When a component-alpha texture is made using a GL3 context a GL_RED
texture is actually used and a swizzle is set up to hide it. However
if a framebuffer is then bound to that texture then when the bits are
queried this workaround will leak out of the API. To fix this it now
detects the situation and reports the number of red bits as the number
of alpha bits.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 425cfb2675912a2cbcaaaeed7c2196d563948222)
2013-01-22 17:48:19 +00:00
Neil Roberts
520ccba49d Query the framebuffer stencil bits instead of assuming it's global
Previously when the context was initialised Cogl would query the
number of stencil bits and set a private feature flag to mark that it
can use the buffer for clipping if there were at least 3. The problem
with this is that the number of stencil bits returned by
GL_STENCIL_BITS depends on the currently bound framebuffer. This patch
adds an internal function to query the number of stencil bits in a
framebuffer and makes it use that instead when determining whether it
can push the clip using the stencil buffer.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit e928d21516a6c07798655341f4f0f8e3c1d1686c)
2013-01-22 17:48:18 +00:00
Neil Roberts
109e576b1f Add a public cogl_framebuffer_get_depth_bits() function
Cogl publicly exposes the depth buffer state so we might as well have
a function to query the number of depth bits of a framebuffer.
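
Usage is straightforward; a minimal example:

    int depth_bits = cogl_framebuffer_get_depth_bits (fb);
    if (depth_bits == 0)
      g_warning ("framebuffer was allocated without a depth buffer");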

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 853143eb10387f50f8d32cf09af31b8829dc1e01)
2013-01-22 17:48:18 +00:00
Neil Roberts
0b01c91fc5 framebuffer: Bind the framebuffer before querying the bits
The GL framebuffer driver now makes sure to bind the framebuffer
before counting the number of bits. Previously it would just query the
number of bits for whatever framebuffer happened to be used last.

In addition the virtual for querying the framebuffer bits has been
modified to take a pointer to a structure instead of a separate
pointer to each component. This should make it slightly more efficient
and easier to maintain.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit e9c58b2ba23a7cebcd4e633ea7c3191f02056fb5)
2013-01-22 17:48:18 +00:00
Robert Bragg
73e8a6d7ce Allow lazy texture storage allocation
Consistent with how we lazily allocate framebuffers this patch allows us
to instantiate textures but still specify constraints and requirements
before allocating storage so that we can be sure to allocate the most
appropriate/efficient storage.

This adds a cogl_texture_allocate() function that is analogous to
cogl_framebuffer_allocate() which can optionally be called to explicitly
allocate storage and catch any errors. If this function isn't used
explicitly then Cogl will implicitly ensure textures are allocated
before the storage is needed.

It is generally recommended to rely on lazy storage allocation or at
least perform explicit allocation as late as possible so Cogl can be
fully informed about the best way to allocate storage.
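
A sketch of the explicit-allocation path (the constructor arguments here
are illustrative and may not match the 1.x signatures exactly):

    CoglTexture2D *tex =
      cogl_texture_2d_new_with_size (ctx, width, height,
                                     COGL_PIXEL_FORMAT_RGBA_8888);

    /* ...set any constraints/requirements before storage exists... */

    CoglError *error = NULL;
    if (!cogl_texture_allocate (COGL_TEXTURE (tex), &error))
      {
        /* handle the failure, e.g. retry with a smaller size */
        cogl_error_free (error);
      }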

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 1fa7c0f10a8a03043e3c75cb079a49625df098b7)

Note: This reverts the cogl_texture_rectangle_new_with_size API change
that dropped the CoglError argument and keeps the semantics of
allocating the texture immediately. This is because Mutter currently
uses this API so we will probably look at updating this later once
we have a corresponding Mutter patch prepared. The other API changes
were kept since they only affected experimental api.
2013-01-22 17:48:17 +00:00
Robert Bragg
5a814e386a texture: add width/height members to base CoglTexture
There was a lot of redundancy in how we tracked the width and height of
different texture types which is greatly simplified by adding width and
height members to CoglTexture directly and removing the get_width and
get_height vfuncs from CoglTextureVtable.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 3236e47723e4287d5e0023f29083521aeffc75dd)
2013-01-22 17:48:17 +00:00
Robert Bragg
0850eea162 Move _cogl_texture_get_gl_format to -texture-gl.c
This moves the _cogl_texture_get_gl_format function from cogl-texture.c
to cogl-texture-gl.c and renames it _cogl_texture_gl_get_format.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit f8deec01eff7d8d9900b509048cf1ff1c86ca879)
2013-01-22 17:48:17 +00:00
Robert Bragg
a57195d16d framebuffer: split out GL read_pixels code
This moves the direct use of GL in cogl-framebuffer.c for handling
cogl_framebuffer_read_pixels_into_bitmap() into
driver/gl/cogl-framebuffer-gl.c and adds a
->framebuffer_read_pixels_into_bitmap vfunc to CoglDriverVtable.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 2f893054d6754e6bc7983f061b27c7858f1a593c)
2013-01-22 17:48:17 +00:00
Robert Bragg
36c85da3b8 Remove cogl-internal.h
This removes cogl-internal.h in favour of using cogl-private.h. Some
things in cogl-internal.h were moved to driver/gl/cogl-util-gl-private.h
and the _cogl_gl_error_to_string function, whose prototype was moved from
cogl-internal.h to cogl-util-gl-private.h, has had its implementation
moved from cogl.c to cogl-util-gl.c.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 01cc82ece091aa3bec4c07fdd6bc9e5135fca573)
2013-01-22 17:48:17 +00:00
Robert Bragg
cab4622eb3 matrix-stack: make CoglMatrixStack public
We have found several times now when writing code using Cogl that it
would really help if Cogl's matrix stack api was public as a utility
api. In Rig for example we want to avoid redundant arithmetic when
deriving the matrices of entities used to render and we aren't able
to simply use the framebuffer's matrix stack to achieve this. Also when
implementing cairo-cogl we found that it would be really useful if we
could have a matrix stack utility api.

(cherry picked from commit d17a01fd935d88fab96fe6cc0b906c84026c0067)
2013-01-22 17:48:11 +00:00
Neil Roberts
6bcfc8342a Fix handling of binding errors when uploading a full texture
Neither of the texture drivers was handling errors correctly when a
CoglPixelBuffer was used to set the contents of an entire texture.
This was causing it to hit an assertion failure in the pixel buffer
tests.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 888733d3c3b24080d2f136cedb3876a41312e4cf)
2013-01-22 17:48:09 +00:00
Robert Bragg
7011eb5db4 texture: expose mipmap level in set region apis
cogl_texture_set_region() and cogl_texture_set_region_from_bitmap() now
have a level argument so image data can be uploaded to a specific mipmap
level.

The prototype for cogl_texture_set_region was also updated to simplify
the arguments.

The arguments for cogl_texture_set_region_from_bitmap were reordered to
be consistent with cogl_texture_set_region with the source related
arguments listed first followed by the destination arguments.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 3a336a8adcd406b53731a6de0e7d97ba7932c1a8)

Note: Public API changes were reverted in cherry-picking this patch
2013-01-22 17:48:09 +00:00
Neil Roberts
af0dc3431d Fix spelling of _cogl_propagate_error
‘Propagate’ was misspelled as ‘propogate’.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 5fb4a6178c3e64371c01510690d9de1e8a740bde)
2013-01-22 17:48:08 +00:00
Robert Bragg
38921701e5 blit: avoid referring to framebuffer stack
This makes _cogl_framebuffer_blit take explicit src and dest framebuffer
pointers and updates all the texture blitting strategies in cogl-blit.c
to avoid pushing/popping to/from the framebuffer stack.

This removes the last user of the framebuffer stack, which we've been
aiming to remove before Cogl 2.0.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 598ca33950a93dd7a201045c4abccda2a855e936)
2013-01-22 17:48:08 +00:00
Robert Bragg
e8eb9793e6 moves some gl texture code to cogl-texture-gl.c
This adds a driver/gl/cogl-texture-gl.c file and moves some gl specific
bits from cogl-texture.c into it. The moved symbols were also given a
_gl_ infix and the calling code was updated accordingly.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 2c9e81de70cc02d72b1ce9013c49e39300a05b6a)
2013-01-22 17:48:08 +00:00
Robert Bragg
da9f5e6179 bitmap: ret CoglError from _new_with_malloc_buffer
_cogl_bitmap_new_with_malloc_buffer() now takes a CoglError for throwing
exceptional errors and all callers have been updated to pass through
any application error pointer as appropriate.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 67cad9c0eb5e2650b75aff16abde49f23aabd0cc)
2013-01-22 17:48:08 +00:00
Robert Bragg
f53fb5e2e0 Allow propagation of OOM errors to apps
This allows apps to catch out-of-memory errors when allocating textures.

Textures can be pretty huge at times and so it's quite possible for an
application to try and allocate more memory than is available. It's also
very possible that the application can take some action in response to
reduce memory pressure (such as freeing up texture caches perhaps) so
we shouldn't just automatically abort like we do for trivial heap
allocations.

These public functions now take a CoglError argument so applications can
catch out of memory errors:

cogl_buffer_map
cogl_buffer_map_range
cogl_buffer_set_data
cogl_framebuffer_read_pixels_into_bitmap
cogl_pixel_buffer_new
cogl_texture_new_from_data
cogl_texture_new_from_bitmap

Note: we've been quite conservative with how many apis we let throw OOM
CoglErrors since we don't really want to put a burden on developers to
be checking for errors with every cogl api call. So long as there is
some lower level api for apps to use that lets them catch OOM errors
for everything necessary, that's enough, and we don't have to make more
convenient apis more awkward to use.

The main focus is on bitmaps and texture allocations since they
can be particularly large and prone to failing.

A new cogl_attribute_buffer_new_with_size() function has been added in
case developers need to catch OOM errors when allocating attribute
buffers: they can first use _buffer_new_with_size() (which doesn't take a
CoglError) followed by cogl_buffer_set_data(), which will lazily allocate
the buffer storage and report OOM errors.
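
A hedged sketch of that attribute-buffer path (the CoglError argument to
cogl_buffer_set_data() reflects the master-branch change described
above; on the 1.x branch the stable signature is unchanged):

    CoglAttributeBuffer *buffer =
      cogl_attribute_buffer_new_with_size (ctx, n_bytes);

    CoglError *error = NULL;
    if (!cogl_buffer_set_data (COGL_BUFFER (buffer), 0,
                               vertices, n_bytes, &error))
      {
        /* out of memory: free caches and retry, or degrade gracefully */
        cogl_error_free (error);
      }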

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit f7735e141ad537a253b02afa2a8238f96340b978)

Note: since we can't break the API for Cogl 1.x, the main
purpose of cherry picking this patch is to keep in line with changes
on the master branch so that we can easily cherry-pick patches.

All the api changes relating to stable apis released on the 1.12 branch
have been reverted as part of cherry-picking this patch, so this mostly
just applies the internal plumbing changes that enable us to
correctly propagate OOM errors.
2013-01-22 17:48:07 +00:00
Robert Bragg
6e379fb54e attribute: Adds support for constant CoglAttributes
This makes it possible to create vertex attributes that efficiently
represent constant values without duplicating the constant for every
vertex. This adds the following new constructors for constant
attributes:

  cogl_attribute_new_const_1f
  cogl_attribute_new_const_2fv
  cogl_attribute_new_const_3fv
  cogl_attribute_new_const_4fv
  cogl_attribute_new_const_2f
  cogl_attribute_new_const_3f
  cogl_attribute_new_const_4f
  cogl_attribute_new_const_2x2fv
  cogl_attribute_new_const_3x3fv
  cogl_attribute_new_const_4x4fv
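
For example (whether the constructor takes the context as its first
argument is an assumption here):

    /* One constant colour shared by every vertex, without storing it
     * per-vertex in the attribute buffer. */
    CoglAttribute *color =
      cogl_attribute_new_const_4f (ctx, "cogl_color_in",
                                   1.0f, 0.0f, 0.0f, 1.0f);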

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 6507216f8030e84dcf2e63b8ecfe906ac47f2ca7)
2013-01-22 17:48:07 +00:00
Robert Bragg
56ae6f9afa progend-glsl: dirty prog for vertex state changes
_cogl_pipeline_progend_glsl_pre_change_notify and
_cogl_pipeline_progend_glsl_layer_pre_change_notify were only dirtying
the current program state for changes related to fragment processing.

This makes both functions also check for changes that affect vertex
shader codegen.

This also fixes a mistake where
_cogl_pipeline_progend_glsl_layer_pre_change_notify was checking for
non-layer related changes, which would never be seen, when it should
instead have been checking for layer based changes only.
2013-01-22 17:48:07 +00:00
Robert Bragg
7fa04bb1a6 Adds back tex_coord array for CoglShader compatibility
This adds back compatibility for CoglShaders that reference the
cogl_tex_coord_in[] or cogl_tex_coord_out[] varyings. Unlike the
previous way this was done this patch maintains the use of layer numbers
for attributes and maintains forwards compatibility by letting shaders
alternatively access the per-layer tex_coord varyings via
cogl_tex_coord%i_in/out defines that index into the array.
2013-01-22 17:48:07 +00:00
Robert Bragg
1c449c67f6 Remove the varying array for tex_coords
This removes the need to maintain an array of tex_coord varyings and
instead we now just emit a varying per-layer uniquely named using a
layer_number infix like cogl_tex_coord0_out and cogl_tex_coord0_in.

Notably, this patch also had to change the journal flushing code to use
pipeline layer numbers to determine the name of texture coordinate
attributes.

We now also break batches by using a deeper comparison of layers, such
that two pipelines with the same number of layers can now cause a
batch break if they use different layer numbers.

This adds an internal _cogl_pipeline_layer_numbers_equal() function that
takes two pipelines and returns TRUE if they have the same number of
layers and all the layer numbers are the same too, otherwise it returns
FALSE.

Where we used to break batches based on changes to the number of layers
we now break according to the status of
_cogl_pipeline_layer_numbers_equal().

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit e55b64a9cdc93285049d9b969bef67484c2d9fb3)

Note: this will cause a temporary regression for the Cogl 1.x CoglShader
api since it will break compatibility with existing shaders that
reference the texture varyings from the fragment shader.

The intention is to follow up with another patch to add back
CoglShader compatibility.
2013-01-22 17:48:06 +00:00
Robert Bragg
31d105abd8 Check for out-of-memory when allocating 2d textures
This makes Cogl explicitly check for out-of-memory errors reported by
the opengl driver in cogl_texture_2d_new_with_size() calls. This allows
us to throw a COGL_SYSTEM_ERROR_NO_MEMORY error and return NULL so
applications may gracefully handle this condition.

This patch only affects the cogl_texture_2d_new_with_size() api not
_new_from_data() or _new_from_bitmap().

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 0283423dad59ba3d3e4cde400c29ac8e7803f888)
2013-01-22 17:48:06 +00:00
Neil Roberts
ff11a2b207 Use GL_ARB_texture_swizzle to emulate GL_ALPHA textures
The core profile of GL3 has removed support for component-alpha
textures. Previously the GL3 driver would just ignore this and try to
create them anyway. This would generate a GL error on Mesa.

To fix this the GL texture driver will now create a GL_RED texture
when GL_ALPHA textures are not supported natively. It will then set a
texture swizzle using the GL_ARB_texture_swizzle extension so that the
alpha component will be taken from the red component of the texture.
The swizzle is part of the texture object state so it only needs to be
set once when the texture is created.

The ‘gen’ virtual function of the texture driver has been changed to
also take the internal format as a parameter. The GL driver will now
set the swizzle as appropriate here.

The GL3 driver now reports an error if the texture swizzle extension
is not available because Cogl can't really work properly without
it. The extension is part of GL 3.3 so it is quite likely that it has
wide support from drivers. Eventually we could get rid of this
requirement if we have our own GLSL front-end and we could generate
the swizzle ourselves.

When uploading or downloading texture data to or from a
component-alpha texture, we can no longer rely on GL to do the
conversion. The swizzle doesn't have any effect on the texture data
functions. In these cases Cogl will now force an intermediate buffer
to be used and it will manually do the conversion as it does for the
GLES drivers.
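
The swizzle itself is ordinary GL texture-object state, roughly:

    /* Emulate GL_ALPHA with a GL_RED texture: alpha reads come from the
     * red channel, the other components read as zero. */
    static const GLint alpha_swizzle[4] =
      { GL_ZERO, GL_ZERO, GL_ZERO, GL_RED };

    glBindTexture (GL_TEXTURE_2D, gl_tex);
    glTexParameteriv (GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA,
                      alpha_swizzle);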

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 32bacf81ebaa3be21a8f26af07d8f6eed6607652)
2013-01-22 17:48:04 +00:00
Robert Bragg
6cfc93f26f clip-stack: workaround intel gen6 viewport clip bug
The Intel Mesa gen6 driver doesn't currently handle scissoring offset
viewports correctly, so this implements a workaround to intersect the
current viewport bounds with the scissor rectangle.

(cherry picked from commit afc5daab85e5faca99d6d6866658cb82c3954830)
2013-01-22 17:48:04 +00:00
Neil Roberts
ec03357e88 Add cogl_buffer_map_range()
This adds a buffer method to map a subregion of the buffer. This works
using the GL_ARB_map_buffer_range extension. If the extension is not
available then it will fallback to using glMapBuffer to map the entire
buffer and then just add the offset to the returned pointer.

cogl_buffer_map() is now just a wrapper which maps the entire range of
the buffer. The driver backend functions have been renamed to
map_range and they now all take the offset and size arguments.

When the COGL_BUFFER_MAP_HINT_DISCARD hint is used and the map range
extension is available, instead of using glBufferData to invalidate the
buffer it will pass the GL_MAP_INVALIDATE_BUFFER_BIT
flag. There is now additionally a COGL_BUFFER_MAP_HINT_DISCARD_REGION
hint which can be used if the application only wants to discard the
small region that is mapped. glMapBufferRange is always used if it is
available even if the entire buffer is being mapped because it seems
more robust to pass those flags than to call glBufferData.
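
A usage sketch (the trailing CoglError argument is assumed; the stable
1.x signature may differ):

    /* Map just the region being rewritten and discard its old contents */
    void *ptr =
      cogl_buffer_map_range (COGL_BUFFER (buffer),
                             offset, size,
                             COGL_BUFFER_ACCESS_WRITE,
                             COGL_BUFFER_MAP_HINT_DISCARD_REGION,
                             &error);
    if (ptr)
      {
        memcpy (ptr, new_data, size);
        cogl_buffer_unmap (COGL_BUFFER (buffer));
      }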

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 55ca02b5ca9cafc750251ec974e0d6a536cb80b8)
2013-01-22 17:48:03 +00:00
Neil Roberts
2e77c039c9 Fix a warning when building without GLES2 support
Since commit 2701b93f cogl-pipeline-opengl.c always has code which
calls _cogl_pipeline_progend_glsl_get_attrib_location but the header
declaring this function was only included if GLES2 support was
enabled. This was making it give an annoying warning so let's just
unconditionally include it.

Reviewed-by: Robert Bragg <robert@linux.intel.com>
(cherry picked from commit 34143bc6f1239c9cb22ba613521ba9ee7ec7059a)
2013-01-22 17:48:02 +00:00
Neil Roberts
bf8a1b1eda progend-glsl: Fix handling of the builtin uniforms on non-GLES2
The ‘builtin uniforms’ are added to the GLSL code generated for the
GLES2 driver to implement missing fixed functionality such as the
builtin point sprite size and the alpha test reference. Previously the
code that accessed these was #ifdef'd to be compiled only when GLES2
was enabled. However since 2701b93f part of this code is now always
used even for non-GLES2 drivers. The code that accessed the builtin
uniforms array was however no longer #ifdef'd, which meant that it
wouldn't compile any more if GLES2 was not enabled. This was further
broken because the GL3 driver actually should be using the alpha test
uniform, since GL3 also does not provide any fixed functionality for
alpha testing.

To fix this the builtin uniform array is now always compiled in and
the code to access it is always used. A new member has been added to
the array to mark which private feature the uniform is used to
replace. That is checked before updating the uniform so that under
GLES2 it will update both uniforms but under GL3 it will only update
the alpha test reference.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 5469a25413883080df75a80153accf5d9124f716)
2013-01-22 17:48:02 +00:00
Neil Roberts
2616ae0fa9 Add a GL 3 driver
This adds a new CoglDriver for GL 3 called COGL_DRIVER_GL3. When
requested, the GLX, EGL and SDL2 winsys implementations will set the
necessary attributes to request a forward-compatible core profile 3.1
context.
That means it will have no deprecated features.

To simplify the explosion of checks for specific combinations of
context->driver, many of these conditionals have now been replaced
with private feature flags that are checked instead. The GL and GLES
drivers now initialise these private feature flags depending on which
driver is used.

The fixed function backends now explicitly check whether the fixed
function private feature is available which means the GL3 driver will
fall back to always using the GLSL progend. Since Rob's latest patches
the GLSL progend no longer uses any fixed function API anyway so it
should just work.

The driver is currently lower priority than COGL_DRIVER_GL so it will
not be used unless it is specifically requested. We may want to change
this priority at some point because apparently Mesa can make some
memory savings if a core profile context is used.

In GL 3, getting the combined extensions string with glGetString is
deprecated so this patch changes it to use glGetStringi to build up an
array of extensions instead. _cogl_context_get_gl_extensions now
returns this array instead of trying to return a const string. The
caller is expected to free the array.
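
Building the array with glGetStringi is the standard GL 3 idiom,
roughly:

    GLint n_extensions, i;
    glGetIntegerv (GL_NUM_EXTENSIONS, &n_extensions);

    char **extensions = g_malloc (sizeof (char *) * (n_extensions + 1));
    for (i = 0; i < n_extensions; i++)
      extensions[i] =
        g_strdup ((const char *) glGetStringi (GL_EXTENSIONS, i));
    extensions[n_extensions] = NULL;

    /* the caller frees the array, e.g. with g_strfreev () */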

Some issues with this patch:

• GL 3 does not support GL_ALPHA format textures. We should probably
  make this a feature flag or something. Cogl uses this to render text
  which currently just throws a GL error and breaks so it's pretty
  important to do something about this before considering the GL3
  driver to be stable.

• GL 3 doesn't support client side vertex buffers. This probably
  doesn't matter because CoglBuffer won't normally use malloc'd
  buffers if VBOs are available, but it might be worth making
  malloc'd buffers a private feature and forcing it not to use them.

• GL 3 doesn't support the default vertex array object. This patch
  just makes it create and bind a single non-default vertex array
  object which gets used just like the normal default object. Ideally
  it would be good to use vertex array objects properly and attach
  them to a CoglPrimitive to cache the state.

Reviewed-by: Robert Bragg <robert@linux.intel.com>

(cherry picked from commit 66c9db993595b3a22e63f4c201ea468bc9b88cb6)
2013-01-22 17:48:01 +00:00
Robert Bragg
de7416fd1d driver-gl: re-indent misleading if-else statement
There was a very, very, very misleading if else statement using no
braces for a single statement if block, followed by a blank line *and*
followed by a comment before the else which was aligned to the 'if'
column, all leading you to believe on first glance that there is no else
block. The fact that Neil and I were both separately misled by this,
this week, is a pretty compelling reason to clarify this by deleting the
blank line and moving the comment inside the else block.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 96d9ea78eb56269c0de5283a5302ab095d8bdfce)
2013-01-22 17:48:01 +00:00
Robert Bragg
01937201e4 Unify a lot of gles2 vs gl glsl code
Since we used to support hybrid fixed-function + glsl pipelines when
running with OpenGL there were numerous differences in how we handled
codegen and uniform updates between GLES2 and full OpenGL. Now that we
only support end-to-end glsl pipelines this patch can largely unify how
we handle GLES2 and OpenGL.

Most notably we now never use the builtin attribute names. This should
also make it easy for us to support creating strict OpenGL 3.1 contexts
where the builtin names have been removed.

Reviewed-by: Neil Roberts <neil@linux.intel.com>

(cherry picked from commit 2701b93f159bf2d3387cedf2d06fe921ad5641f3)
2013-01-22 17:48:01 +00:00