One can benchmark regular JavaScript functions by counting how many times they can be called in a second. On WebGL, though, functions such as gl.drawArrays are asynchronous, so you can't measure the time the shader takes by benchmarking the API call.
Is there any way to benchmark WebGL functions?
asked Aug 2, 2016 at 0:42 by MaiaVictor

2 Answers
It's very difficult to benchmark a shader because there's a ton of context involved and shaders are very GPU-specific.
You might be able to tell if one shader is faster than another by using performance.now before and after drawing a bunch of stuff with that shader (a few thousand to a million draw calls), then stalling the GPU by calling gl.readPixels. That will tell you which shader is faster. It won't tell you how fast they actually are, since the measurement includes the time to start the work and the time spent stalling the GPU.
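As a rough sketch of that approach (assuming an existing WebGL context gl; drawWith and drawScene are placeholders for whatever draw calls exercise the shader under test):

    // Hedged sketch: time N draws bracketed by a GPU stall.
    function timeDraws(gl, drawScene, iterations) {
      const pixel = new Uint8Array(4);
      // Drain any previously queued work so it isn't counted.
      gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);

      const start = performance.now();
      for (let i = 0; i < iterations; ++i) {
        drawScene();
      }
      // readPixels blocks until the GPU has finished all queued draws.
      gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);
      return performance.now() - start; // ms, including start-up and stall overhead
    }

    // Usage: run the same iteration count for each shader and compare the totals.
    // const msA = timeDraws(gl, () => drawWith(programA), 100000);
    // const msB = timeDraws(gl, () => drawWith(programB), 100000);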
Think of a race car. For a dragster you time its acceleration to the destination. For a race car you time one lap at full speed: you let the car run one lap before timing, then time the second lap, so it crosses both the starting line and the finish line going full speed. That gives you the car's speed, whereas for the dragster you get its acceleration (generally irrelevant to GPUs, since if you're going for speed you should never start and stop them).
Another way to time without adding in the start/stop time is to draw a bunch between requestAnimationFrame frames. Keep increasing the amount of work until the time between frames jumps up by a whole frame, then compare the amounts each shader could handle.
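One possible (and somewhat crude) way to automate that ramp-up, assuming drawOnce is a placeholder that issues a single draw call with the shader being tested:

    // Rough sketch: increase the per-frame workload until a frame is missed.
    // A real version would average several frames to smooth out timing jitter.
    function findDrawsPerFrame(drawOnce, onDone) {
      let drawsPerFrame = 0;
      let lastTime = null;

      function frame(now) {
        if (lastTime !== null && drawsPerFrame > 0) {
          const frameTime = now - lastTime;
          // The work queued last frame pushed us past one 60Hz frame (~16.7ms).
          if (frameTime > 20) {
            onDone(drawsPerFrame);
            return;
          }
        }
        lastTime = now;
        drawsPerFrame += 10; // ramp the workload up
        for (let i = 0; i < drawsPerFrame; ++i) {
          drawOnce();
        }
        requestAnimationFrame(frame);
      }
      requestAnimationFrame(frame);
    }

    // Usage: the shader that sustains more draws per frame is the faster one.
    // findDrawsPerFrame(() => drawWith(programA), n => console.log('A:', n));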
There are other issues in actual usage, though. For example, a tiled GPU (like PowerVR on many mobile devices) attempts to cull parts of primitives that would be overdrawn, so a heavy shader with lots of overdraw that is slow on a non-tiled GPU might be plenty fast on a tiled GPU.
Also make sure you're timing the right thing. If you're timing a vertex shader, you probably want to make your canvas 1x1 pixel, keep your fragment shader as simple as possible, and pass a lot of vertices in one draw call (to remove the call overhead). If you're timing a fragment shader, you probably want a large canvas and a set of vertices that contains several full-canvas quads.
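To illustrate, the two setups might look roughly like this (the canvas sizes, vertex counts, and numFullScreenQuads below are illustrative guesses, not values from the answer):

    // Vertex-shader-bound test: tiny canvas, trivial fragment shader,
    // and many vertices in a single draw call.
    canvas.width = 1;
    canvas.height = 1;
    gl.viewport(0, 0, 1, 1);
    gl.drawArrays(gl.POINTS, 0, 1000000); // one call, lots of vertex work

    // Fragment-shader-bound test: large canvas and a buffer holding
    // several quads that each cover the whole canvas.
    canvas.width = 2048;
    canvas.height = 2048;
    gl.viewport(0, 0, canvas.width, canvas.height);
    gl.drawArrays(gl.TRIANGLES, 0, 6 * numFullScreenQuads); // heavy fragment work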
Also see WebGL/OpenGL: comparing the performance
There's no way to get the exact shader execution time without vendor-specific GPU tools. However, in addition to gman's suggestion, there is the EXT_disjoint_timer_query extension, which allows you to measure the execution time of your draw call, which in turn depends significantly on shader execution time, especially when your shaders are heavy (and thus take the majority of the time the GPU spends executing your draw calls).
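For example, with WebGL2 the extension is exposed as EXT_disjoint_timer_query_webgl2; a rough sketch follows (availability varies by browser and GPU, and drawScene is a placeholder for the draw calls being measured; WebGL1 has equivalent ext.createQueryEXT()/beginQueryEXT() calls):

    // Sketch: measure GPU time for a draw call with the timer-query extension.
    const ext = gl.getExtension('EXT_disjoint_timer_query_webgl2');
    if (ext) {
      const query = gl.createQuery();
      gl.beginQuery(ext.TIME_ELAPSED_EXT, query);
      drawScene();                            // the draw call(s) to measure
      gl.endQuery(ext.TIME_ELAPSED_EXT);

      // Results arrive asynchronously; poll on later frames.
      function poll() {
        const available = gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE);
        const disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
        if (available && !disjoint) {
          const nanoseconds = gl.getQueryParameter(query, gl.QUERY_RESULT);
          console.log('GPU time:', nanoseconds / 1e6, 'ms');
          gl.deleteQuery(query);
        } else if (!disjoint) {
          requestAnimationFrame(poll);
        }
      }
      requestAnimationFrame(poll);
    }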