What’s New in OpenGL ES 3.0

In conjunction with next-generation GPUs, OpenGL ES 3.0 will make a number of new features available to mobile and embedded devices. In terms of functionality, OpenGL ES 3.0 is largely a mobile implementation of the OpenGL 3.3 feature set, with a couple of notable features missing and a few additional features plucked from later revisions of OpenGL. In terms of backwards compatibility, only OpenGL 4.3 is a complete superset of OpenGL ES 3.0, but for most purposes OpenGL 3.1 is probably the closest desktop OpenGL specification.

On that note, though drawing a comparison to Direct3D isn’t particularly straightforward, we get asked about it so often that we’ll try to answer it anyhow. Direct3D of course underwent a major rearchitecting with Direct3D 10 back in 2007, which added a number of features to the API while giving Microsoft a chance to clean out a great deal of fixed-function legacy cruft. From a major feature perspective, OpenGL did not reach parity with Direct3D 10 until OpenGL 3.2, which among other things introduced geometry shader support.

So if we had to place OpenGL ES 3.0 along a Direct3D continuum, then as it’s primarily based on OpenGL 3.1 it would fall somewhere between Direct3D feature level 9_3 and feature level 10_0, again primarily due to its lack of geometry shaders. This is why some mobile GPUs like the Adreno 320 can support OpenGL ES 3.0 but not D3D feature level 10_0: implementing only the baseline OpenGL ES 3.0 feature set is not enough for D3D 10_0 compliance.

Strictly Defined Pixel/Uniform/Frame Buffer Objects

The first major addition to OpenGL ES 3.0 is support for a number of new buffer formats, alongside a general tightening up of the buffer format specifications. OpenGL ES 2.0’s buffer format specification had some ambiguity, which led to GPU vendors sometimes implementing the same buffer format in slightly different ways, which in turn could lead to problems for developers.

OpenGL ES 3.0 also adds support for Uniform Buffer Objects, a buffer type that allows groups of shader uniforms to be stored in a single buffer and bound, shared, and updated efficiently as a unit.
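A minimal sketch of the shader side of a uniform block (the block and member names here are illustrative, not from the spec):

```glsl
#version 300 es
// Hypothetical per-frame uniform block; std140 gives the block a
// well-defined memory layout that the application side can match.
layout(std140) uniform PerFrame {
    mat4 viewProjection;
    vec4 lightDirection;
};

in vec4 position;

void main() {
    gl_Position = viewProjection * position;
}
```

On the application side the block is backed by a buffer object bound with glBindBufferBase(GL_UNIFORM_BUFFER, ...), so a single buffer update can feed many draw calls (and even many programs) at once.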


OpenGL ES Shading Language 3.0

As is common with most OpenGL releases, OpenGL ES 3.0 includes a new version of the OpenGL ES Shading Language, used to program shader effects. The primary addition for GLSL ES 3.0 is full support for 32-bit integer and 32-bit floating point (i.e. full precision) data types and operations. Previously only lower precisions were guaranteed; these are easier to compute (requiring less hardware and less memory), but as shader complexity increases their relatively large precision errors become ever more visible.
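As a rough illustration of what the new precision support looks like in practice, the hypothetical fragment shader below (variable names are ours) requests full 32-bit precision explicitly, something ES 2.0 hardware was not required to honor in the fragment stage:

```glsl
#version 300 es
// Hypothetical example: highp float support is guaranteed in
// fragment shaders in ES 3.0, where ES 2.0 made it optional.
precision highp float;

in highp vec2 texCoord;       // full-precision interpolated input
uniform highp int iterations; // true 32-bit integer, new in ES 3.0
out vec4 fragColor;

void main() {
    highp float accum = 0.0;
    for (int i = 0; i < iterations; ++i) {
        accum += fract(texCoord.x * float(i));
    }
    fragColor = vec4(accum);
}
```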

GLSL ES 3.0 has also seen some syntax and feature tweaks to make it more like desktop OpenGL. This doesn’t change the fact that only OpenGL 4.3 is a complete superset of OpenGL ES 3.0, but it makes it easier for developers used to desktop GLSL to work on GLSL ES and vice versa.

Occlusion Queries and Geometry Instancing

While OpenGL ES 3.0 doesn’t get geometry shaders, it does get several features to help with geometry in general. Chief among these are the addition of occlusion queries and geometry instancing. Occlusion queries allow for fast hardware testing of whether an object’s pixels are blocking (occluding) another object, which is helpful for quickly figuring out whether something can be skipped because it’s occluded. Meanwhile, geometry instancing allows the hardware to draw the same object multiple times while only requiring the complete object to be submitted to the rendering pipeline once. This makes trees and other common objects easier for the CPU to set up, as it doesn’t need to keep resubmitting the entire object in different locations.
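As a sketch of the shader side of instancing (the uniform array and its size are our own invention), GLSL ES 3.0 exposes a built-in gl_InstanceID that a vertex shader can use to position each copy, while the application submits the mesh just once via glDrawArraysInstanced() or glDrawElementsInstanced():

```glsl
#version 300 es
// Hypothetical vertex shader: one tree mesh, drawn N times, with
// gl_InstanceID (new in ES 3.0) selecting each instance's offset.
uniform mat4 viewProjection;
uniform vec3 instanceOffsets[64]; // illustrative: one offset per tree

in vec3 position;

void main() {
    vec3 worldPos = position + instanceOffsets[gl_InstanceID];
    gl_Position = viewProjection * vec4(worldPos, 1.0);
}
```

Occlusion queries, meanwhile, are issued from the API side by bracketing draw calls with glBeginQuery(GL_ANY_SAMPLES_PASSED, query) and glEndQuery(GL_ANY_SAMPLES_PASSED), then reading back whether any samples passed the depth test.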

Numerous Texture Features

OpenGL ES 3.0 adds support for a number of texture features; in fact there are far too many to break down individually. The big additions are support for floating point textures (to go with the aforementioned FP32 support), 3D textures, depth textures, non-power-of-two textures, and 1 & 2 channel textures (R & R/G).
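For example, 3D textures come with a matching sampler3D type in GLSL ES 3.0. A minimal, hypothetical fragment shader sampling a volume texture (names are illustrative):

```glsl
#version 300 es
precision mediump float;
// Hypothetical example: sampling a 3D texture, e.g. a volumetric
// noise field or a color-grading lookup volume (new in ES 3.0).
// sampler3D has no default precision, so one is declared here.
uniform mediump sampler3D volume;

in vec3 volumeCoord;
out vec4 fragColor;

void main() {
    fragColor = texture(volume, volumeCoord);
}
```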

Multiple Render Targets

Multiple Render Target support allows the GPU to render to multiple textures at once. Simply put, the significance of this feature is that it’s necessary for practical real-time deferred rendering.
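The shader side of MRT in GLSL ES 3.0 maps each output variable to a color attachment. Below is a sketch of a hypothetical deferred-rendering G-buffer pass; the attachment layout is our own invention, not a prescribed format:

```glsl
#version 300 es
precision mediump float;
// Hypothetical G-buffer pass: one fragment shader writes to two
// color attachments in a single rendering pass.
in vec3 viewNormal;
in vec3 albedoColor;

layout(location = 0) out vec4 outAlbedo;
layout(location = 1) out vec4 outNormal;

void main() {
    outAlbedo = vec4(albedoColor, 1.0);
    // Pack the [-1, 1] normal into the [0, 1] texture range.
    outNormal = vec4(normalize(viewNormal) * 0.5 + 0.5, 0.0);
}
```

On the API side, glDrawBuffers() selects which framebuffer attachments these outputs are routed to.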

MSAA Render To Texture

When rendering to a texture, special consideration must be taken for anti-aliasing, which on earlier hardware generations was only available when rendering to the default framebuffer. OpenGL ES 3.0 will add support for MSAA’d rendering to a texture.

Standardized Texture Compression Format: ETC

Wrapping up our look at OpenGL ES 3.0’s features, we have texture compression. One of the big problems for OpenGL for a number of years was that it didn’t have a standardized texture compression format. In the desktop space the earliest and most common texture compression format is S3TC, which is not available royalty-free, and as such cannot be a part of the core standard (instead only available as an extension). This is a problem that carried over to OpenGL ES, which led to vendors implementing their own incompatible texture compression standards. Among the major OpenGL ES GPUs, S3TC, PVRTC, ETC, and ATITC are the most common texture compression formats.

Because there isn’t a standard texture compression format in OpenGL ES 2.0, developers have to pack their textures multiple times for different hardware, which takes up time and, more importantly, space. This is a particular problem for Android developers, since the platform supports multiple GPU families (versus the PowerVR-only iOS).

For OpenGL ES 3.0, Ericsson has offered up their ETC family of texture compression algorithms on a royalty-free basis, allowing Khronos to implement a standard texture compression format and thereby, over time, resolve the issue of multiple incompatible texture compression formats. Compared to where Khronos eventually wants to go, ETC is somewhat dated at this point in time – it only offers 6:1 compression for RGB and 4:1 compression for RGBA – but it will get the job done for now.
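As a back-of-the-envelope check of those ratios (our own arithmetic, not from Khronos): 6:1 on 24-bit RGB and 4:1 on 32-bit RGBA both work out to fixed-rate encodings of 4 and 8 bits per pixel respectively, which for a large texture is a substantial saving:

```python
def texture_bytes(width, height, bits_per_pixel):
    """Storage for one mip level at a given fixed per-pixel rate."""
    return width * height * bits_per_pixel // 8

# A 2048x2048 RGBA texture: uncompressed vs. ETC-compressed at 4:1.
uncompressed = texture_bytes(2048, 2048, 32)  # 32 bpp
etc_rgba     = texture_bytes(2048, 2048, 8)   # 8 bpp after 4:1

print(uncompressed // (1024 * 1024))  # → 16 (MiB)
print(etc_rgba // (1024 * 1024))      # → 4 (MiB)
```

Every fetch from a compressed texture also moves a quarter (or a sixth) as much data across the memory bus, which is where the bandwidth savings come from.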

For Khronos this is a huge breakthrough, since texture compression is even more important on mobile devices than it is on the desktop due to their much tighter memory bandwidth constraints. This also allows Khronos to evangelize texture compression to developers who had previously been shying away from it because of the aforementioned issues.

At the same time it will be interesting to see how developers adopt ETC. Just because it’s a standard doesn’t mean it has to be used, and while we can’t imagine Android developers not using it once OpenGL ES 3.0 is the baseline for applications, Apple bears keeping an eye on. As a PowerVR-only shop they have used PVRTC since day one, and so long as they don’t change to another brand of GPUs they wouldn’t need to actually back ETC.

Of course ETC isn’t the only texture compression format in the pipeline. Khronos is also working on the very interesting ASTC, which we’ll get to in a bit.

Comments

  • bobvodka - Monday, August 6, 2012 - link

    Firstly, using the Steam Hardware Survey, which is the correct metric as we are a AAA games studio, I'll grant you at most 5% of the market, the majority of which have Intel GPUs, for which the OpenGL implementation has generally been... sub-par, to put it mildly.

    Secondly, all console development tools are on the PC and based around Visual Studio; as such we work in Windows anyway.

    Thirdly, the Windows version generally comes about because we need artist/developer tools. Right now it is also useful for learning about and testing 'next gen' ideas with an API which will be close to the Xbox API.

    Fourthly, we have a Windows version working which uses D3D11, and OpenGL offers no compelling reason to scrap all that work. Remember, D3D has had working compute shaders with a sane integration for some years now - OpenGL has only just got these, and before that doing the work with OpenCL was like opening a horrible can of worms due to the lack of standardised and required interop extensions (I looked into this at the back end of last year for my own work at home and quickly despaired at the state of OpenGL and its interop).

    Finally, OS X lags OpenGL development. Currently OS X 10.7.3 (as per https://developer.apple.com/graphicsimaging/opengl... ) supports GL 3.2, and I see no mention of the version being supported in 10.8. Given that OpenGL 3.2 was released in 2009 and OS X 10.7 was released last year, I wouldn't pin my hopes on seeing 4.2 any time 'soon'.

    Now, supporting 'down market' hardware is of course a good thing to do; however, in D3D11 this is easy (feature levels), while in OpenGL different hardware + drivers = different features, which again increases the engineering workload and the requirements for fallbacks.
    You could mandate 'required features', but at that point you start cutting out market share and that 5% looks smaller and smaller.

    Now, we ARE putting engineering effort into OpenGL|ES, as mobile devices are an important cornerstone from a business standpoint, thus the cost can be justified.

    In short: there is no compelling business nor technical reason at this juncture to drop D3D11 in favor of OpenGL to capture a fragment of the 5% AAA 'home computer' market when there are no side benefits and only cost.
  • powerarmour - Monday, August 6, 2012 - link

    Yes because Carmack is always 100% right about everything, and the id Tech 5 engine is the greatest and most advanced around.
  • SleepyFE - Monday, August 6, 2012 - link

    id Tech 5 is awesome!! I don't like shooters (except for Prey) but I played Rage just to see how much "worse" OpenGL is. The game looks GREAT. I can't tell it apart from any other AAA game on graphics alone. And that means OpenGL is good enough and should be used more. Screw what someone says, try it yourself and then tell me OpenGL can't compete.
  • bobvodka - Monday, August 6, 2012 - link

    False logic - games are only as good as their artwork.

    OpenGL has shaders, so yes, with good artwork it can do the same as D3D - however the API itself - the thing the programmers have to work with - isn't as good, AND up until now it was lacking feature parity with D3D11.

    Feature wise OpenGL is there.
    API/usability wise - it isn't.

    FYI, I used OpenGL for around 8 years, from around 1.3 until 3.0 came out, and, like a few others, was so fed up with the ARB at that point that I gave up on GL and moved to a modern API - speaking from an interface design point of view.
  • Penti - Friday, August 10, 2012 - link

    Game engines are perfectly fine supporting different graphics APIs. Obviously non-Windows platforms won't run D3D; Microsoft does not license it. So while they do license stuff like ActiveSync/Exchange, exFAT (which should have been included in the SDXC spec under FRAND terms but isn't), NTFS, remote desktop protocols, OpenXML, binary document formats, SharePoint protocols, some of the .NET environment, etc., most of the vital tech is behind paid licensing. They don't even specify the Direct3D APIs for implementation by non-hardware vendors. It's simply not referenced at all. OpenGL is thoroughly referenced in comparison.

    Even though PS3 isn't OGL (PSGL is OGLES based) you could still do Cg shaders, or convert HLSL or GLSL shaders or vice versa, so it's not like skills are lost. Tools should be written against the game engines and middleware anyway.

    Plus the desktop OGL is compatible with OGLES when it comes to the newer releases such as 4.1 and 4.3, albeit with some tricks/configuration/compatibility modes. Some implementations suck, but that will also be true of some graphics chips' support for DX.
  • inighthawki - Monday, August 6, 2012 - link

    The tessellation feature you're referring to is a brand-specific hardware extension, and not in the same class as DirectX's tessellation. The tessellation hardware introduced for DX11 is a completely programmable pipeline that offers more flexibility. DirectX does not add support for hardware-specific features, for good reason.
  • djgandy - Tuesday, August 7, 2012 - link

    Tessellation was only added to the GL pipeline in 4.0. It was another one of those 'innovations' where GL copied DX, just like pretty much every other feature GL adds.

    What GL needs to do is copy DX when they remove stuff from the API. Scratch this stupid core/compatibility model, which just adds even more run-time configurations, remove all the old rubbish and do not allow mixing of new features with the old fixed function pipeline.
  • bobvodka - Tuesday, August 7, 2012 - link

    There was, 4 years ago, a plan to do just what you described in your second paragraph - Longs Peak was the code name and it was a complete change and clean up of the API with a modern design and it was a change universally praised by those of us following the ARB's news letters and design plans.

    In July 2007 they were 'close' to a release; in October they had 'some issues' to work out - they then went into radio silence and 6 months later, without bothering to tell anyone what was going on, they rolled out 'OpenGL 3.0', aka 2.2, where all the grand API changes, worked on for 2 years, were thrown out the window, extensions were bolted on again, and no functionality was removed.

    At this point myself, and quite a few others, made a loud noise and departed OpenGL development in favour of D3D10 and then D3D11.

    Four years on the ARB are continuing down the same path and I wouldn't bet my future on them seeing sense any time soon.
  • djgandy - Tuesday, August 7, 2012 - link

    The ARB think they are implementing features that developers want, and maybe they are, but AFAIK they have very few big selling developers anyway.

    It seems the ARB is unable to see the reason behind this, maybe because they are so concerned about the politics of backwards compatibility - or at least certain members of it are. For me this is the hardest part to understand, since it is not even real breaking of compatibility; it is simply ring-fencing new features from old features, thus saving a ton of driver-writing hell (i.e. what DX did). Instead you can still use begin/end with your GLSL ARB and geometry shaders with a bit of fixed-function fog over the top. How useful.

    I find it hard to even consider the GL API an abstraction layer, given the existing extension hell and the multiple profiles a driver can opt to support. The end result of this "compatibility" is that anyone actually wanting to sell software using OpenGL has to pick the lowest common denominator... whatever that actually is, because with the newer API you don't even know what you are getting until run time. So then you just pick the ancient version of the API, because at least you have a 99% chance that a GL 3.0 driver will be installed - with all the old fixed-function crud that you don't actually need, but glVertex3f is nice, right?

    IMO GL's only hope is for a company like Apple to put it into a high volume product and actually deliver a good contract to developers (core profile only, limited extensions, and say GL 4.0).
  • bobvodka - Tuesday, August 7, 2012 - link

    Unfortunately Apple isn't very on the ball when it comes to OpenGL support.

    OS X 10.7, released last year, only supports OpenGL 3.2 - a spec released in 2009 that had Windows support within 2 months.

    Apple are focusing on mobile, it would seem, where OpenGL|ES is saner and rules the roost.
