Testing NVIDIA's Linux Threaded OpenGL Optimizations
Posted on: 10/18/2012 09:15 AM

Phoronix takes a look at the new NVIDIA 310.14 Beta driver for Linux

The NVIDIA 310.14 Beta driver, introduced at the beginning of this week, brings general OpenGL performance improvements plus an experimental threaded OpenGL implementation that can be easily enabled. This article presents benchmarks of the NVIDIA GeForce GTX 680 with this new Linux driver release.

The 310.14 driver's release highlights describe the new feature briefly: "Added experimental support for OpenGL threaded optimizations, available through the __GL_THREADED_OPTIMIZATIONS environment variable." The HTML documentation bundled with the driver binary goes on to explain:

"The NVIDIA OpenGL driver supports offloading its CPU computation to a worker thread. These optimizations typically benefit CPU-intensive applications, but might cause a decrease of performance in applications that heavily rely on synchronous OpenGL calls such as glGet*. Because of this, they are currently disabled by default.

Setting the __GL_THREADED_OPTIMIZATIONS environment variable to "1" before loading the NVIDIA OpenGL driver library will enable these optimizations for the lifetime of the application."
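As the documentation notes, the variable just needs to be set before the application loads the NVIDIA OpenGL library. A minimal sketch of how that looks from a shell (glxgears here is merely a stand-in for any OpenGL application you want to test):

```shell
# Enable NVIDIA's experimental threaded OpenGL optimizations for a single run;
# the variable is only read when the application loads the driver library.
__GL_THREADED_OPTIMIZATIONS=1 glxgears

# Or export it so every program started from this shell inherits it:
export __GL_THREADED_OPTIMIZATIONS=1
glxgears
```

The first form scopes the setting to one process, which is handy for A/B benchmarking the optimization against a default run of the same application.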



Printed from Linux Compatible (http://www.linuxcompatible.org/news/story/testing_nvidias_linux_threaded_opengl_optimizations.html)