It's a lot faster to deal only with the on-screen data. You've only got a few hundred sample points displayed at any one time, while there could be several million in memory, and some of the measurements, e.g. RMS amplitude, are particularly processing-intensive.
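To put rough numbers on it, here's a quick Python/NumPy sketch - the sample rate, record depth and decimation scheme are made-up illustrative figures, not any particular scope's:

```python
import time
import numpy as np

fs = 1e9                         # assumed sample rate: 1 GSa/s
n = 10_000_000                   # assumed record depth: 10 Mpts
record = np.sin(2 * np.pi * 10e6 * np.arange(n) / fs)  # a 10 MHz sine capture
on_screen = record[::n // 500]   # crude decimation down to ~500 displayed points

def rms(x):
    """RMS amplitude: square, average, square root."""
    return np.sqrt(np.mean(x ** 2))

t0 = time.perf_counter()
rms(record)
t_full = time.perf_counter() - t0

t0 = time.perf_counter()
rms(on_screen)
t_screen = time.perf_counter() - t0

print(f"full record: {t_full * 1e3:.2f} ms, on-screen: {t_screen * 1e3:.4f} ms")
```

On a PC the on-screen measurement comes out orders of magnitude faster, and the gap matters even more on a scope's embedded processor.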
The downside is that many measurement functions become very sensitive to the timebase used. RMS amplitude, for example, is only exact when a whole number of cycles fits across the display - a leftover fraction of a cycle generally biases the result high or low. Taken to the extreme, if you zoom in on just the peak of the waveform you'll get an RMS value close to the peak value, nowhere near the RMS of the entire signal.
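You can see the effect numerically with a pure 1 V sine, where the true RMS is 1/sqrt(2) ≈ 0.707 V (the screen() helper is just a stand-in for whatever points end up on the display; note that for a pure sine an exact extra half-cycle happens to average out, since sin² repeats every half cycle, so I've used 5.3 cycles to show the bias):

```python
import numpy as np

def rms(x):
    return np.sqrt(np.mean(x ** 2))

def screen(cycles, phase=0.0, n=500):
    """n displayed points spanning `cycles` periods of a 1 V sine."""
    t = np.linspace(0.0, cycles, n, endpoint=False)
    return np.sin(2 * np.pi * t + phase)

print(rms(screen(5.0)))                   # whole cycles: 0.7071, exact
print(rms(screen(5.3)))                   # partial cycle left over: ~0.710, biased
print(rms(screen(0.1, phase=np.pi / 2)))  # zoomed in on the peak: ~0.94, way off
```

Whole cycles always average out exactly; any leftover fraction contributes a residual that shifts the mean square up or down depending on where it lands.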
Similar limitations apply to software time-domain measurements. For an accurate rise time you have to expand the rising edge to fill the display. For frequency, with less than one full cycle on screen there's nothing to measure, and with too many cycles on screen the decimated display data can alias, making the calculated frequency wildly wrong. The hardware frequency counter, by contrast, is accurate regardless of the timebase, as long as the scope is triggering properly.
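Here's a toy version of a software frequency measurement running on the decimated on-screen points - a simple rising-zero-crossing counter, which is just a plausible stand-in rather than any vendor's actual algorithm, with assumed rates and record lengths:

```python
import numpy as np

fs = 1e6                             # assumed acquisition rate: 1 MSa/s
f_in = 10.1e3                        # assumed input: 10.1 kHz sine
N_SCREEN = 500                       # displayed sample points

def displayed(record):
    """Naive display decimation: keep every k-th sample (no peak detect)."""
    k = max(1, len(record) // N_SCREEN)
    return record[::k][:N_SCREEN]

def freq_measure(samples, dt):
    """Software frequency: average spacing of rising zero crossings."""
    rising = np.where((samples[:-1] < 0) & (samples[1:] >= 0))[0]
    if len(rising) < 2:
        return None                  # less than one full cycle visible
    return 1.0 / (np.diff(rising).mean() * dt)

# Three timebases: under one cycle, ~20 cycles, and ~2000 cycles on screen
for n_record in (50, 2_000, 200_000):
    t = np.arange(n_record) / fs
    scr = displayed(np.sin(2 * np.pi * f_in * t))
    dt = (n_record / fs) / len(scr)  # time step between displayed points
    print(n_record, freq_measure(scr, dt))
# -> None (can't measure), ~10100 Hz (correct), ~100 Hz (aliased!)
```

The last case is exactly the failure mode above: the decimated points sample the waveform below its Nyquist rate, so the zero crossings you see belong to an alias rather than the real signal.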
At least a few other scopes seem to base their measurements on more data than is displayed - I've noticed some Agilent/Keysight scopes maintain an accurate software frequency readout even when the timebase is very short or very long relative to the signal period.