A typical CMOS digital output phase/frequency detector produces a DC-level output (after integration) that is a function of both phase and supply voltage. For a phase detector implemented in ECL (say an XOR gate to be super-simple) the output would be a function of both phase and the internal clamping level, which is imprecise and variable with temperature for one thing, making that avenue useless.
Ah, but the internal reference level drifts the same way. Probably. Hopefully?
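To make the supply dependence concrete, here's a minimal Python sketch (my own illustration, assuming an ideal rail-to-rail XOR and 50% duty inputs): the integrated output is the phase-proportional duty cycle times whatever the rail happens to be.

```python
def xor_pd_dc(phase_deg, vdd=5.0, samples=10000):
    """Average (integrated) output of an ideal rail-to-rail XOR phase
    detector, for two 50% duty square waves offset by phase_deg."""
    dc = 0.0
    for i in range(samples):
        t = i / samples                            # one period, normalized
        a = (t % 1.0) < 0.5                        # reference square wave
        b = ((t - phase_deg / 360.0) % 1.0) < 0.5  # shifted square wave
        dc += vdd * (a != b)                       # XOR, scaled to the rail
    return dc / samples
```

At 90 degrees the average is Vdd/2, so any supply wander shows up directly as apparent phase error unless the reference tracks it.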
Since you don't care about gain at all, the supply voltage can be all over the place. That's one problem solved. So, for most solutions, errors should be dominated by delays. At least as long as you can track the V_OH / V_OL ranges, if they drift disparately.
ECL example: use the complementary outputs to your advantage. Filter each, and use a differential amplifier for the error amp. (Or a comparator for your 1-bit ADC, or whatever.) The thresholds are complementary and the errors cancel out; all you need to check is that the difference is zero.
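The cancellation is easy to sanity-check on paper; a quick sketch (my own numbers, assuming simple duty-cycle averaging of the filtered levels):

```python
def filtered_avg(duty, voh, vol):
    """DC value after filtering a logic output with the given duty
    cycle and output levels."""
    return vol + duty * (voh - vol)

def diff_error(duty, voh, vol):
    """Differential error-amp output: filtered Q minus filtered /Q.
    The complementary output has duty cycle (1 - duty)."""
    return filtered_avg(duty, voh, vol) - filtered_avg(1.0 - duty, voh, vol)
```

The difference works out to (2d - 1)(V_OH - V_OL): the null at 50% duty doesn't move no matter how the output levels drift, together or apart.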
Or use additional gates (e.g., a quad XOR?), strapped in static states, to obtain V_OH and V_OL, and treat accordingly. But that's hackier and worse.
With CMOS logic, the integrated output can be compared to a trimmed reference level (corresponding to 90 degrees) that is derived from, and thus ratiometric to, the supply potential, eliminating that source of error. But CMOS is too slow for accuracy out to 100 MHz: differences in rise/fall and on/off propagation delays cause too much error.
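A one-liner shows why ratiometric works -- hypothetical sketch, ideal XOR assumed:

```python
def cmos_pd_error(phase_deg, vdd):
    """Ideal XOR-detector DC versus a ratiometric (Vdd/2) reference:
    the supply term drops out of the null."""
    dc = vdd * phase_deg / 180.0   # rail-to-rail XOR average, 0..180 deg
    ref = vdd / 2.0                # e.g., a resistor divider off the same rail
    return dc - ref                # zero at 90 degrees for any Vdd
```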
CMOS need not be slow, but it probably has to be done internally. With some help from some latches or counters or whatever, it might be feasible to do it entirely digitally, within an FPGA (the transistors are 1.2V or less -- guessing you can't get discrete CMOS that low and that fast).
Or the DBM -- what's wrong with that? Typically, you drive one port at "LO" levels (some dBm) and the other at "RF" levels (< 0 dBm?). The output is DC (plus a 2 x f_clk component, but that's filtered out), proportional to the RF amplitude times the cosine of the phase difference.
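For the idealized hard-switched case, the DC term works out to (2/pi) * A * cos(phi) -- a sketch of that relation (my own numbers; a real mixer adds conversion loss, diode offsets, etc.):

```python
import math

def dbm_dc(phase_deg, rf_amp=0.1, k=1.0):
    """DC term of an idealized switching mixer: an RF sine of amplitude
    rf_amp chopped by a +/-1 LO square wave at the same frequency; the
    2*f_clk product is assumed filtered off.  k lumps in conversion loss."""
    return k * (2.0 / math.pi) * rf_amp * math.cos(math.radians(phase_deg))
```

Note the null at 90 degrees, with maximum slope there -- handy for a quadrature measurement.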
I don't know too much about phase shift and delay in DBMs. Presumably the delays will be mismatched due to signal intensity, if nothing else. Which will give rise to some residual error as frequency varies. But I should think a few degrees wouldn't be out of the question.
DBMs can be driven symmetrically as well (i.e., equal amplitudes), but the linearity should be worse (because diode Vf is dependent on both signals' amplitudes and phases).
In a typical PLL with a digital-output detector, the detector output steers in sign (thus controlling the loop) about the phase-difference point (typically 0 degrees), so supply voltage variation affecting the logic output level only modulates the detector's gain (and thus the loop gain of the PLL), not the accuracy of the detector's phase/frequency discrimination. Totally different application.
I like to use dual-clocked flip-flops, which produce a square-wave output whose duty cycle is constant for a given phase shift. Tristate PFDs are good for frequency control in simple loops, but I don't think they're a good idea for a static measurement.
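Behaviorally, that's just set-on-one-edge, reset-on-the-other; a little simulation (my own sketch, ideal edges assumed) shows the duty cycle tracking phase linearly over the full 360 degrees:

```python
def pwm_duty(phase_deg, n=7200):
    """Steady-state duty cycle of a set/reset phase detector: Q sets on
    the REF rising edge and resets on the SIG rising edge (SIG lags REF
    by phase_deg).  Simulates two periods and measures the second."""
    q = high = 0
    pa = pb = 1
    for i in range(2 * n):
        t = i / n
        a = 1 if (t % 1.0) < 0.5 else 0                        # REF
        b = 1 if ((t - phase_deg / 360.0) % 1.0) < 0.5 else 0  # SIG
        if a and not pa:
            q = 1            # set on REF rising edge
        if b and not pb:
            q = 0            # reset on SIG rising edge
        pa, pb = a, b
        if i >= n:           # second period only (steady state)
            high += q
    return high / n
```

Duty = phase/360, so the filtered output is a straight line through the whole range -- no dead zone or sign fold-over near the measurement point.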
Such a flip-flop is impossible to implement in an FPGA (no synthesizer will accept two clock events), unless you specify it purely as combinatorial logic (in which case, it won't know how to synthesize it well, and the speed sucks).
But it can be implemented in two D-f/f's and a gate, or 8 fundamental gates (e.g., NAND/NOR), which isn't so bad.
Tim