Of course, delays are inevitable. That's basic physics.
In digital audio, which is discrete in time, latency will always be a multiple of the sampling period. That is the fundamental link between latency and sampling rate; the rest depends entirely on the implementation. Obviously, a latency of a given number of samples corresponds to a shorter time at a higher sample rate (64 samples is about 1.45 ms at 44.1 kHz but only about 0.67 ms at 96 kHz), but the connection ends there.
With computer audio it used to be very common for the buffer size to be either fixed at some number of samples, or user-selectable but only from a limited set of options. Old DAW software, for instance, frequently had minimum buffer sizes of around 256 samples. In that case the minimum latency obviously got shorter if you ran at a higher sampling rate. Modern audio software on modern OSs usually does much better: the audio subsystems of general-purpose OSs have improved a lot, scheduling is better, data throughput is higher, and so on. So the point holds much less these days; the limiting factor tends to be the inherent latency of OS scheduling and data throughput, not the buffer size per se.
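To make the buffer-size point concrete, here is a minimal Python sketch of the arithmetic (no particular audio API assumed, and the 256-sample figure is just the illustrative minimum mentioned above): the time contributed by one buffer is simply its length in samples divided by the sample rate.

```python
# Sketch: one-way latency contributed by a single audio buffer.
# latency_seconds = buffer_size_in_samples / sample_rate

def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Time to fill (or play out) one buffer, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate_hz

if __name__ == "__main__":
    buffer_samples = 256  # the kind of minimum old DAWs often imposed
    for rate in (44_100, 48_000, 96_000, 192_000):
        print(f"{buffer_samples} samples @ {rate} Hz -> "
              f"{buffer_latency_ms(buffer_samples, rate):.2f} ms")
```

That prints roughly 5.8 ms at 44.1 kHz down to about 1.3 ms at 192 kHz. Keep in mind a real round trip involves at least an input buffer and an output buffer plus driver and OS overhead, so measured figures will be larger than this single-buffer number.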
And of course that's just the latency of getting audio in and out, without any processing. Further processing can add delay of its own, hence more "latency".
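As one example of processing-induced delay (continuing the sketch above, with illustrative numbers): a symmetric linear-phase FIR filter with N taps delays the signal by (N - 1) / 2 samples no matter what the buffer settings are, and that delay simply adds to the buffer latency.

```python
# Sketch: total latency = buffer latency + delay introduced by processing
# (the figure plugins typically report as their "latency" in samples).

def total_latency_ms(buffer_samples: int, processing_delay_samples: int,
                     sample_rate_hz: int) -> float:
    return 1000.0 * (buffer_samples + processing_delay_samples) / sample_rate_hz

# A linear-phase FIR with 511 taps delays the signal by 255 samples.
fir_taps = 511
fir_delay = (fir_taps - 1) // 2

print(f"{total_latency_ms(256, fir_delay, 48_000):.2f} ms")  # ~10.65 ms at 48 kHz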
But in the extreme case where you can achieve a latency of just one sample, the sampling rate alone really does dictate the latency. In all other cases... it just depends.