What is less well-known is the mechanism for this. As usual in neuroscience, there is still no conclusive theory of the perception of time, just competing theories and interesting experiments. In this study, the authors suggest that their rats use head movement as a timing mechanism: essentially, the rats started a stereotyped dance and compared the end of the stimulus with where they were in the dance. In this paper we see that training can create reasonably accurate neural signals corresponding to interval timing tasks, though the mechanism behind them is still unknown. Finally, in this paper we observe multiple timing mechanisms in use in humans: widespread counting mechanisms, a comparison system in "the temporal semantic system", and a switch in collation mechanisms between the default-mode network and the motor network around 2 seconds. The motor network is active for durations in the hundreds of milliseconds, firing at 30-40 Hz, while the default-mode network is active for longer durations (above 2 seconds), firing at 10 Hz.
So, we can confirm the 1-second flow rule: after 1 second (closer to 1.5 seconds, according to experiment), the default-mode network activates and the user loses engagement with the task.
But the 10-second rule is pretty arbitrary, and the decision to use progress as the indicator even more so. Following Edward Tufte's simple proviso to "maximize the data-ink ratio", we can create guidelines. For an indeterminate progress indicator, only the state is actual data, and the rest of the animation can be removed. A simple state indicator such as color suffices. I have been testing this myself for the past few years using my Static Throbber theme in Firefox. I can report that I now hate seeing the animated throbbers in other browsers; they just look tacky.
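To make this concrete, here is a minimal sketch of what a static state indicator might look like, in browser TypeScript. The element id, colors, and `setBusy` helper are illustrative inventions, not taken from any real theme:

```typescript
// A static state indicator in place of an animated spinner.
// The only real datum is the busy/idle state, so a single color change
// carries all of it; no animation frames are needed.
function setBusy(busy: boolean): void {
  const indicator = document.getElementById("status-dot"); // hypothetical element
  if (indicator === null) return;
  // Amber while work is in flight, green when idle.
  indicator.style.backgroundColor = busy ? "#e0a800" : "#28a745";
  indicator.setAttribute("aria-busy", String(busy)); // keep assistive tech informed
}
```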
So, this works for short indicators, less than ten seconds. But sometimes we have longer things. For example, while I was writing the last sentence, Firefox froze for 20 seconds because of some JavaScript in a background tab. Well... that still doesn't count. The window manager can give Firefox a "non-responding" window decoration, and since progress is indeterminate, that's really all that can be done. (And for the case where the window manager itself freezes... that's why I have it display a clock with second-level precision, so I can see if it's still progressing. If the clock stops ticking, it's time for a reboot.) Alright. So 20 seconds is too short to start thinking about estimating progress; how long do we have to go? I would guess the 5-minute range; after that, it's long enough to start thinking about switching tasks and doing smallish routines such as preparing meals.
For estimated times, the only useful data is the time to completion, e.g. 24 minutes. But this changes frequently; a better idea for long-style progress indicators is to display the time of completion, e.g. that it will finish at 3:00 PM today. Calendars are better than ETAs because they let us plan in advance and think about all of life's routines, as opposed to just the task at hand. There's some debate over whether a clock time makes the wait feel shorter. In practice, estimates will be inexact, and you'll need a confidence interval. So far I haven't seen anybody implement progress estimation using anything approaching statistics... It's really the only way to display time; the progress bar is useless, because it only shows how much time the user has sunk in already. Even in an early study on progress bars, the suggestion is made to display a simple time estimate instead of a progress bar.
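As a sketch of what statistics-based estimation could look like: sample the progress fraction over time, estimate the rate and its spread, and project a completion time with a rough band around it. Everything here (the `Sample` shape, the ±2σ band, the clamping) is an assumption of mine, not an established method:

```typescript
// Project a completion time, with a crude confidence band, from progress samples.
interface Sample { time: number; fraction: number } // ms since epoch, 0..1 done

function estimateCompletion(
  samples: Sample[]
): { eta: Date; earliest: Date; latest: Date } | null {
  if (samples.length < 2) return null;
  // Per-interval rates (fraction per millisecond) between consecutive samples.
  const rates: number[] = [];
  for (let i = 1; i < samples.length; i++) {
    const dt = samples[i].time - samples[i - 1].time;
    if (dt > 0) rates.push((samples[i].fraction - samples[i - 1].fraction) / dt);
  }
  if (rates.length === 0) return null;
  const mean = rates.reduce((a, b) => a + b, 0) / rates.length;
  if (mean <= 0) return null; // no forward progress: no honest estimate exists
  const variance = rates.reduce((a, r) => a + (r - mean) ** 2, 0) / rates.length;
  const sd = Math.sqrt(variance);
  const last = samples[samples.length - 1];
  const remaining = 1 - last.fraction;
  // A rough ±2σ band on the rate; clamp the slow rate so the band stays finite.
  const fast = mean + 2 * sd;
  const slow = Math.max(mean - 2 * sd, mean / 10);
  return {
    eta: new Date(last.time + remaining / mean),
    earliest: new Date(last.time + remaining / fast),
    latest: new Date(last.time + remaining / slow),
  };
}
```

The display then becomes something like "finishes around 3:00 PM (2:48-3:30)", rendered with `toLocaleTimeString()`, rather than a bar.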
Technology hijacks people's minds; is watching a progress bar tick really the best use of their time? Optimizing for software satisfaction leads to hidden costs in other areas of life.
Corroborating evidence comes from a study of keyboard vs. mouse: the mouse was easy to use and was objectively faster, but, due to its low cerebral engagement, was rated poorly by users. The keyboard, on the other hand, required remembering complex shortcuts, which took significant cognitive processing; but that time was ignored in subjective evaluations, leaving users thinking that the keyboard was faster. User satisfaction would thus be optimized by keyboards, and user productivity by mice. But only in the short term: more recent studies suggest that, with experience, keyboard shortcuts become faster than the mouse. So the choice is best phrased as an investment decision: spend more time now for a better memory later, or walk away quickly and cut your losses.
Obviously, if the software will take several days to complete, it is pretty much impossible to remain engaged and you should always switch tasks. Similarly, if it's below the 0.1-second threshold, then maintaining flow is paramount. But in between, we enter a much more complex world than a simple progress indicator, with questions like: How should users spend their attention? Is frequent task-switching a good idea?
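Pulling the thresholds from the last few paragraphs together, here is a sketch of the decision as code. The band boundaries come from this post (0.1 seconds, ~5 minutes, multiple days); the policy names and the clean cutoffs are my own simplification:

```typescript
// Pick an indicator strategy from an estimated duration, per the bands above.
type IndicatorPolicy =
  | "none"                 // below perception; anything shown would break flow
  | "static-state"         // short waits: a color/state change, no animation
  | "completion-clock"     // long waits: show the estimated finish time
  | "suggest-task-switch"; // multi-day: engagement is hopeless, say so

function pickIndicator(estimatedSeconds: number): IndicatorPolicy {
  if (estimatedSeconds < 0.1) return "none";
  if (estimatedSeconds < 5 * 60) return "static-state";
  if (estimatedSeconds < 2 * 24 * 3600) return "completion-clock";
  return "suggest-task-switch";
}
```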
Model for why spinners reduce attention span:
- the animated spinning engages the user's attention
- the default-mode network is thus less active, fires less often, and so it feels as if not much time has passed
- the motor network's timing signal fires more often, to track the on-screen activity