In many particle physics sensors, individual, well-separated events (photons, particles, collisions) with a known minimum interval between them tend to be easy to handle; many such sensors have a dead time after each event. It is when you want to measure both event counts and individual event properties (typically energy or wavelength), and want or need to detect concurrent events, that things become hairy.
For this reason, particle colliders don't even try to measure the collision products directly. Instead, they use many layers of detectors that measure the dissipation of energy, usually inside a strong magnetic field, so that the curvature of the trajectory reveals further information (momentum and charge) about charged particles. Indeed, each collision event is more like a tree, with each branching point being a subsequent collision or decay; the entire tree of events and (intermediate) particle properties forms a system of fuzzy equations (fuzzy in the QM sense!) that is solved as a whole to identify the original collision products, even if those themselves left no directly measurable traces in the detectors, for example because they were too short-lived to reach even the innermost detector layer.
When measuring e.g. a radiation spectrum, using a filter that only passes a specific energy (wavelength) at a time makes things easier. Then the energy of each photon is known, and a relatively cheap CCD element can count individual events with very short dead times. Moving the sensor closer to or farther from the source changes the solid angle it covers with respect to the sample, so the event rate can be tuned simply by adjusting that distance.
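For the idealized case of a point source on the axis of a circular detector (a simplification on my part; the interesting real case is a source volume), the geometric acceptance has a closed form that already shows the distance dependence:

    import math

    def disk_solid_angle(distance, radius):
        # Solid angle (steradians) of a circular detector of the given radius,
        # seen from a point source on its axis at the given distance.
        # Standard cone formula: omega = 2*pi*(1 - d/sqrt(d^2 + r^2)).
        return 2.0 * math.pi * (1.0 - distance / math.hypot(distance, radius))

    def accepted_fraction(distance, radius):
        # Fraction of isotropically emitted photons that hit the detector.
        return disk_solid_angle(distance, radius) / (4.0 * math.pi)

    print(accepted_fraction(100.0, 5.0))   # ~6.2e-4
    print(accepted_fraction(200.0, 5.0))   # ~1.6e-4, roughly 1/4 at double the distance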
Calculating the actual activity of radioactive samples with such equipment is where I personally first used elliptic integrals in anger. (Calculating the exact effective solid angle of the detector with respect to a uniform-activity sample volume yields elliptic integrals.) I quickly found out that using a suitable program (with a sufficiently good pseudo-random number generator) to simulate a radioactive sample emitting individual photons is not only faster in terms of the human work needed (though not in raw computation time), but also easier to visualize and verify. Even a cheap laptop can simulate a few billion photons with planar, conical, cylindrical, and spherical surfaces and filters within a few seconds to a couple of minutes, so getting a precise enough result, with tight enough error bounds and sufficient margins, does not take long at all. It is particularly useful when the detector distance and geometry are such that not all points on the sensor surface are directly "visible" from all points within the radioactive sample.
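Not my original program, but a minimal sketch of the idea, assuming a cylindrical uniform-activity sample under a coaxial circular detector and no occluding filters (geometry and names are illustrative only):

    import math, random

    DET_R, DET_Z = 5.0, 100.0        # detector radius and height above the sample base
    SAMPLE_R, SAMPLE_H = 10.0, 2.0   # cylindrical sample radius and height
    N = 1_000_000                    # photons to simulate (more => tighter error bounds)

    hits = 0
    for _ in range(N):
        # Uniform emission point inside the cylindrical sample volume.
        r = SAMPLE_R * math.sqrt(random.random())
        a = 2.0 * math.pi * random.random()
        x, y, z = r * math.cos(a), r * math.sin(a), SAMPLE_H * random.random()

        # Isotropic emission direction, uniform on the unit sphere.
        cz = 2.0 * random.random() - 1.0
        s = math.sqrt(1.0 - cz * cz)
        b = 2.0 * math.pi * random.random()
        dx, dy, dz = s * math.cos(b), s * math.sin(b), cz

        if dz <= 0.0:
            continue                  # emitted away from the detector plane
        t = (DET_Z - z) / dz          # ray parameter at the detector plane
        px, py = x + t * dx, y + t * dy
        if px * px + py * py <= DET_R * DET_R:
            hits += 1

    # Average fraction of the full 4*pi sphere seen by the detector,
    # i.e. the effective solid angle over the whole sample volume.
    print(4.0 * math.pi * hits / N)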
I did end up calculating it both ways, with the results (effective solid angle of the sensor) agreeing with each other, of course.
Which, put another way: digital is a strict subset of analog where we define thresholds for '1' and '0' (or any other values); but if we dial those thresholds down into the noise floor, it doesn't really matter, does it? We're counting bit statistics just as well as analog spectra down there. Quantum is a superset of analog, I suppose, is the point then.
I prefer digital:analog ≃ discrete:continuous, because it is more useful when building specific domain knowledge on top. In particular, "to convert to digital" ≃ "discretize".
In the example of using a capacitor discharge to measure a current, or an RC lowpass filter to measure a resistance, the current or resistance is first converted to a time interval (still in the analog domain); the conversion to digital occurs only when that time interval is discretized, for example by counting clock ticks.
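A toy illustration of where that boundary sits (component values and clock rate are made up, and I use a simple RC discharge for the resistance case): the discharge time is a continuous analog quantity, and the digital value appears only when a counter samples it against a clock.

    import math

    C = 100e-9                  # known capacitance (farads)
    V0, V_TH = 5.0, 1.0         # start voltage and comparator threshold (volts)
    F_CLK = 1_000_000           # counter clock (Hz)
    R_true = 47e3               # the unknown resistance being measured (ohms)

    # Analog domain: the resistance becomes a time interval.
    # Discharge: V(t) = V0*exp(-t/(R*C))  =>  t = R*C*ln(V0/V_TH)
    t = R_true * C * math.log(V0 / V_TH)       # continuous, ~7.56 ms

    # Digital domain: the interval is discretized by counting clock ticks.
    ticks = round(t * F_CLK)                    # 7564
    R_measured = ticks / (F_CLK * C * math.log(V0 / V_TH))
    print(t, ticks, R_measured)                 # the quantization step shows up in R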
Now, in quantum mechanics and related physics fields, most things are quantized. Thing is, that does not mean they are also necessarily discretized, even though the observables related to the quantized properties often have discrete values.
Very often the quantized properties can be described using complex numbers. Then the magnitude, or absolute value, is an integer multiple of some real positive value ("one quantum"), but the phase, i.e. the direction in the complex plane, may vary. This is also why the magnitude of a sum of two or more such quantized values is rarely equal to the sum of their magnitudes.
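A quick numerical illustration: two amplitudes of one quantum each can sum to anything between zero and two quanta, depending on their relative phase.

    import cmath

    # Two amplitudes of magnitude 1 ("one quantum" each), different phases.
    a = cmath.rect(1.0, 0.0)                   # phase 0
    b = cmath.rect(1.0, 2.0)                   # phase 2 rad
    print(abs(a) + abs(b))                     # 2.0, the sum of the magnitudes
    print(abs(a + b))                          # ~1.08, the magnitude of the sum
    print(abs(a + cmath.rect(1.0, cmath.pi)))  # ~0.0, opposite phases cancel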
In the double-slit experiment, where sending individual photons or electrons (all with the same energy) through a double slit (with suitable slit size and separation) yields an interference pattern, the particle's location and trajectory are described by a quantum wave function. The double slit acts like a filter, so that the resulting wave function is a sum of two wave functions. Because these wave functions are complex-valued, the above note about summing applies, and the resulting wave function has the same shape as if we had two particles at the same time, one through each slit, interfering with each other, just with lower amplitude. Because of how the amplitude of a wave function is defined, the probability distribution of where the particle will hit the screen is the squared magnitude (and not just the magnitude) of the wave function at the screen.
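A rough numerical sketch of that summation (idealized point slits, far field, parameter values picked by me): the fringes come purely from adding two complex amplitudes and only then taking the squared magnitude.

    import numpy as np

    wavelength = 500e-9                    # 500 nm light (illustrative)
    d = 50e-6                              # slit separation (m)
    L = 1.0                                # distance from slits to screen (m)
    x = np.linspace(-0.05, 0.05, 2001)     # positions on the screen (m)
    k = 2 * np.pi / wavelength

    # Path lengths from each idealized point slit to each screen position.
    r1 = np.sqrt(L**2 + (x - d / 2) ** 2)
    r2 = np.sqrt(L**2 + (x + d / 2) ** 2)

    # Each slit contributes a complex amplitude; the wave function at the
    # screen is their sum, and the detection probability density is its
    # squared magnitude.
    psi1 = np.exp(1j * k * r1)
    psi2 = np.exp(1j * k * r2)
    coherent = np.abs(psi1 + psi2) ** 2
    print(coherent.min(), coherent.max())  # ~0 ... ~4: interference fringes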
In a real sense, the double slit quantizes the location of the particle. That does not, however, mean that the particle will pass through one slit or the other; it does not discretize the path or trajectory. In fact, if you do something that forces each particle to pass through just one of the slits, for example measuring which one it passed through, you lose the interference pattern and just get a sum of two roughly Gaussian distributions. This is because the resulting wave function is then no longer a sum of two wave functions; it is always one of the two possible wave functions instead.
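Continuing the same toy model (which ignores the single-slit envelope, so the no-interference case comes out flat instead of as two bumps): with which-path information, each particle has one of the two wave functions, the probabilities add instead of the amplitudes, and the interference term disappears.

    import numpy as np

    wavelength, d, L = 500e-9, 50e-6, 1.0        # same toy geometry as above
    x = np.linspace(-0.05, 0.05, 2001)
    k = 2 * np.pi / wavelength
    psi1 = np.exp(1j * k * np.sqrt(L**2 + (x - d / 2) ** 2))
    psi2 = np.exp(1j * k * np.sqrt(L**2 + (x + d / 2) ** 2))

    coherent = np.abs(psi1 + psi2) ** 2                   # amplitudes add, then square
    incoherent = np.abs(psi1) ** 2 + np.abs(psi2) ** 2    # probabilities add
    print(coherent.min(), coherent.max())        # ~0 ... ~4: fringes
    print(incoherent.min(), incoherent.max())    # ~2.0 everywhere: no fringes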
If you grok that, you grok the core idea in quantum mechanics, I believe. Thus, I don't think considering "quantum" a superset of "analog" is useful at all.