Control and measure precisely how long an image is displayed

For a reaction time study (see also this question if you're interested) we want to control and measure the display time of images. We'd like to account for the time needed to repaint on different users' machines.

Edit: Originally, I used only inline execution for timing, but I thought I couldn't trust it to accurately measure how long the picture was visible on the user's screen, because painting takes some time.

Later, I found the event "MozAfterPaint". It needs a configuration change to run on users' computers, and the corresponding WebkitAfterPaint never made it into WebKit browsers. This means I can't use it on users' computers, but I did use it for my own testing. I pasted the relevant code snippets and the results from my tests below.
I also manually checked results with SpeedTracer in Chrome.

// from the loop pre-rendering images for faster display
// (i, toppath, botpath and botimg are defined earlier in the loop body)
var imgdiv = $('<div class="trial_images" id="trial_images_'+i+'" style="display:none"><img class="top" src="' + toppath + '"><br><img class="bottom" src="'+ botpath + '"></div>');
Session.imgs[i] = imgdiv.append(botimg);
$('#trial').append(Session.imgs);

// in Trial.showImages
$(window).one('MozAfterPaint', function () {
    Trial.FixationHidden = performance.now();
});
$('#trial_images_'+Trial.current).show(); // this would cause reflows, but I've since changed it to use the visibility property and absolutely positioned images, to minimise reflows (sketched after this snippet)
Trial.ImagesShown = performance.now();

Session.waitForNextStep = setTimeout(Trial.showProbe, 500); // 500ms    

// in Trial.showProbe
$(window).one('MozAfterPaint', function () {
    Trial.ImagesHidden = performance.now();
});
$('#trial_images_'+Trial.current).hide();
Trial.ProbeShown = performance.now();
// show Probe etc...
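
As mentioned in the comment on the show() call above, a reflow-free variant toggles visibility instead of display. A sketch of that change (assuming the trial images are absolutely positioned and start out hidden via CSS, which the original snippets don't show):

// assumed CSS: .trial_images { position: absolute; visibility: hidden; }
$('#trial_images_' + Trial.current).css('visibility', 'visible'); // show: repaint only, no reflow
// ...
$('#trial_images_' + Trial.current).css('visibility', 'hidden');  // hide again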

Results from comparing the durations measured using MozAfterPaint and inline execution.

This doesn't make me too happy. First, the median display duration is about 30ms shorter than I'd like. Second, the variance using MozAfterPaint is pretty large (and bigger than for inline execution), so I can't simply compensate by increasing the setTimeout by 30ms. Third, this is on my fairly fast computer; results on other computers might be worse.

Results from SpeedTracer

These were better. The time an image was visible was usually within 4 (sometimes 10) ms of the intended duration. It also looked like Chrome accounted for the time needed to repaint in the setTimeout call (so there was, for example, a 504ms difference between the calls when the image needed to repaint). Unfortunately, I wasn't able to analyse and plot results for many trials in SpeedTracer, because it only logs to the console. I'm not sure whether the discrepancy between SpeedTracer and MozAfterPaint reflects differences in the two browsers or something lacking in my usage of MozAfterPaint (I'm fairly sure I interpreted the SpeedTracer output correctly).

Questions

I'd like to know:

  1. How can I measure the time the image was actually visible on the user's machine, or at least get comparable numbers for a set of different browsers on different testing computers (Chrome, Firefox, Safari)? (One possible approach is sketched after this list.)
  2. Can I offset the rendering & painting time to arrive at 500ms of actual visibility? If I have to rely on a universal offset, that would be worse, but still better than showing the images for such a short duration that users don't consciously see them on somewhat slow computers.
  3. We use setTimeout. I know about requestAnimationFrame, but it doesn't seem like we'd gain anything from using it: the study is supposed to stay in focus for its entire duration, and it's more important that we get a display close to 500ms than a certain number of fps. Is my understanding correct?
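
To make question 1 concrete, one approach for getting comparable numbers across browsers (an untested sketch for illustration, not code from the study): requestAnimationFrame callbacks run right before a repaint and receive a frame timestamp, so two of them can bracket the interval in which the image is actually visible:

function showFor(el, ms, done) {
    el.style.visibility = 'visible';
    requestAnimationFrame(function (shownAt) {
        // shownAt ~ timestamp of the first frame with the image rendered
        requestAnimationFrame(function tick(now) {
            if (now - shownAt < ms) {
                requestAnimationFrame(tick); // keep waiting, frame by frame
            } else {
                el.style.visibility = 'hidden';
                requestAnimationFrame(function (hiddenAt) {
                    done(hiddenAt - shownAt); // measured visible duration
                });
            }
        });
    });
}

Calling showFor(img, 500, function (d) { ... }) would then report the bracketed duration d for each trial.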

Obviously, JavaScript is not ideal for this, but it's the least bad option for our purposes (the study has to run online on users' own computers; asking them to install something would scare some off, and Java isn't bundled with Mac OS X browsers anymore).
We're only allowing current versions of Safari, Chrome, Firefox and maybe MSIE at the moment (feature detection for performance.now and the fullscreen API; I haven't checked how MSIE does yet). A sketch of that check follows.
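
The feature detection could look something like this (a sketch; the prefixed method names match the vendor APIs of the time):

// admit a participant only if performance.now and the Fullscreen API exist
var hasNow = window.performance && typeof window.performance.now === 'function';
var hasFullscreen = !!(document.documentElement.requestFullscreen ||
                       document.documentElement.webkitRequestFullscreen ||
                       document.documentElement.mozRequestFullScreen);
if (!(hasNow && hasFullscreen)) {
    // exclude this browser from the study
}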


Comments
  • Since the browser will have to repaint regardless of how the image is hidden/shown, it's really just a "least terrible" scenario, I think. That is, any change to the browser window will incur a repaint (although possibly only to certain areas - like where the image is). Every browser will do this differently, in addition to every computer. I think your accepted variance on display time may need to be expanded to use an html/css/js solution. – Jordan Kasper Commented Jan 18, 2013 at 17:48
  • @jak just getting a good estimate of the variance would be nice. Especially on the user side but during pretesting would also help. – Ruben Commented Jan 18, 2013 at 19:31
  • every trial should have a well defined tolerance level. It seems to me that this inquiry ignores the principles of significant numbers (the idea that precision beyond a certain point becomes meaningless due to the number of factors). For example, how do you account for the variance in mouse drivers that may cause signal lag? I think you are putting more effort into this than you can reliably use. – patrickgamer Commented Jan 20, 2013 at 5:29
  • @patrick Do you mean these principles and what exactly do you mean? I can't account for everything and I'm happy with that. I want to do my best to account for what I can, though. If it was just about measurement biases, those would average out. But if the image is displayed for a too short time to some users, they simply won't get the treatment at all, I'd like to avoid that as much as possible. – Ruben Commented Jan 20, 2013 at 16:17
  • @patrick The idea with the tolerance level is nice. A deleted answer linked to a blog post where they describe doing this to solve a different problem, that doesn't exist with performance.now anymore (nonmonotonic time, system clock polling problems). My tolerance level would be about whether painting takes too long. But how do I find out whether my tolerance level was violated? This boils down to 1, right? – Ruben Commented Jan 20, 2013 at 16:19

2 Answers


Because I haven't received any more answers yet, but learnt a lot while editing this question, I'm posting my progress so far as an answer. As you'll see, it's still not optimal, and I'll gladly award the bounty to anyone who improves on it.

Statistics

  • In the leftmost panel you can see the distribution that led me to doubt the time estimates I was getting.
  • The middle panel shows what I achieved after caching selectors, re-ordering some calls, using some more chaining, minimising reflows by using visibility and absolute positioning instead of display.
  • The rightmost panel shows what I got after using an adapted function by Joe Lambert based on requestAnimationFrame. I did that after reading a blog post about rAF now having sub-millisecond precision too. I thought it would only help me smooth animations, but apparently it helps with getting better actual display durations as well.

Results

In the final panel, the mean for the "paint-to-paint" timing is ~500ms; the inline execution timing scatters realistically (which makes sense, because I use the same timestamp to terminate the inner loop below) and correlates with the "paint-to-paint" timing.

There is still a good bit of variance in the durations and I'd love to reduce it further, but it's definitely progress. I'll have to test it on some slower machines and some Windows computers to see if I'm really happy with it; originally I'd hoped to get all deviations below 10ms.

I could also collect way more data if I made a test suite that does not require user interaction, but I wanted to do it in our actual application to get realistic estimates.

window.requestTimeout using window.requestAnimationFrame

window.requestTimeout = function (fn, delay) {
    // rAF-based stand-in for setTimeout: calls fn on the first animation
    // frame at which at least `delay` ms have elapsed
    var start = performance.now(),
        handle = {};

    function loop() {
        var delta = performance.now() - start;
        if (delta >= delay) {
            fn();
        } else {
            handle.value = window.requestAnimationFrame(loop);
        }
    }

    handle.value = window.requestAnimationFrame(loop);
    return handle;
};
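
For reference, this is how it would replace the setTimeout call from the question, and how a pending timeout can be cancelled (a usage sketch, not part of the original answer; handle is an object precisely because handle.value is overwritten with a fresh rAF id every frame):

Session.waitForNextStep = window.requestTimeout(Trial.showProbe, 500); // 500ms

// cancel before it fires, if needed:
window.cancelAnimationFrame(Session.waitForNextStep.value);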

Edit:

An answer to another question of mine links to a good new article.

Did you try getting the initial milliseconds and then, once the event fires, calculating the difference, instead of using setTimeout? Something like:

var startDate = new Date();
var startMilliseconds = startDate.getTime();

// when the event is fired (the actual binding was elided in the original
// answer; 'element' and 'someEvent' are hypothetical placeholders):
element.addEventListener('someEvent', function () {
    console.log(new Date().getTime() - startMilliseconds);
});

Try to avoid jQuery here if possible; plain JS will give you better response times and better overall performance.
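
One follow-up worth noting: performance.now(), which the question already uses, is monotonic and has sub-millisecond resolution, unlike Date.getTime(), so the same plain-JS measurement is more robust written as (a sketch reusing the placeholders above):

var start = performance.now();

// same hypothetical placeholders as above:
element.addEventListener('someEvent', function () {
    console.log(performance.now() - start); // elapsed ms since start
});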
