To prevent short-circuit shoot-through currents, it's paramount that the switches turn off before there is significant reverse current.
If you think about a diode, it does exactly that: it "measures" the voltage drop across itself and simply leaves 0.7 V of "safety margin".
Every active circuit must leave some margin, and that margin relates directly to the losses. In essence, when you see that you will soon need to turn the transistors off, you turn them off a bit early. From that point on, conduction shifts to the FETs' body diodes (standard, not Schottky, diodes, which the FETs contain internally and which conduct in this configuration), and the losses rise for a short time.
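To get a feel for why that matters, a worked example with illustrative numbers (not from any specific design): at 10 A, a FET with R_DS(on) = 5 mΩ dissipates 10² × 0.005 = 0.5 W, while its body diode at V_F ≈ 0.8 V dissipates 10 × 0.8 = 8 W. Every extra bit of safety margin extends the time spent in that ~16× loss regime, which is why you want the margin as small as your error budget allows.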
50 Hz makes this circuit easier, because you can do most of your analysis in the simple DC domain and ignore the switching time as a delay element: a few microseconds of comparator and gate-drive delay are negligible against a 20 ms mains period.
The amount of safety margin they need to leave depends on their error margin: resistor tolerances, and the worst-case input offset voltages of the comparators. They recommend tightening the resistor tolerances down to even 0.1%. I would also recommend choosing a comparator based on its worst-case input offset voltage plus its worst-case input bias current times the input resistance it sees (Rinput). I don't know how important this optimization is; you'd have to calculate it over the worst-case variations to find out.
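As a rough sketch of that error-budget calculation (all component values and error terms below are hypothetical placeholders, not from the referenced design):

```python
# Worst-case threshold error budget for a comparator sensing a small
# voltage through a resistive divider. All values are illustrative.

V_TH_NOMINAL = 0.020      # V, nominal detection threshold (20 mV)
R_INPUT      = 10e3       # ohm, source resistance seen by the comparator input
R_TOLERANCE  = 0.001      # 0.1% resistors
V_OFFSET_MAX = 0.5e-3     # V, worst-case comparator input offset voltage
I_BIAS_MAX   = 50e-9      # A, worst-case comparator input bias current

# Error contributions, each taken at its worst-case sign:
err_resistors = V_TH_NOMINAL * 2 * R_TOLERANCE  # divider ratio error, ~2x the per-resistor tolerance
err_offset    = V_OFFSET_MAX                    # adds directly to the threshold
err_bias      = I_BIAS_MAX * R_INPUT            # bias current through the source resistance

err_total = err_resistors + err_offset + err_bias
print(f"threshold: {V_TH_NOMINAL*1e3:.1f} mV +/- {err_total*1e3:.2f} mV "
      f"({100*err_total/V_TH_NOMINAL:.0f}% worst case)")
```

With numbers like these, the offset and bias terms dominate the 0.1% resistors, which is exactly what this kind of calculation is there to reveal.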
I did something similar when I had to design a discrete e-fuse circuit (it monitors the voltage drop across a MOSFET pair); integrated parts were unsuitable. I made an Excel spreadsheet in which I collected the worst-case minimum and maximum values of every component (beware: a single part can contribute several parameters, like input offset voltage and bias current for a comparator, or tolerance plus tempco for a resistor), and calculated the voltage thresholds at every combination of those extremes. The end result was that my nominally 50 A e-fuse switches off somewhere between roughly 35 A and 80 A; a span that wide was to be expected. The lab prototype performs close to the nominal 50 A, but if you want to make a robust product over unit-to-unit and environmental variation, you need to look at the full range.
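The same worst-case corner sweep is easy to script instead of doing in a spreadsheet. A minimal sketch (the parameter names and values are made up for illustration; the real list would include every tolerance, tempco, offset, and bias term in the circuit):

```python
# Sweep every min/max corner of the error parameters and report the
# resulting trip-current range. All values are illustrative placeholders.
from itertools import product

R_SHUNT_NOM = 0.001  # ohm, nominal R_DS(on) used as the sense element
V_TH_NOM    = 0.050  # V, nominal comparator threshold => 50 A nominal trip

# (min, max) corners for each error parameter
corners = {
    "rdson_factor": (0.7, 1.5),         # R_DS(on) spread over units and temperature
    "divider_gain": (0.998, 1.002),     # 0.1% divider resistors
    "v_offset":     (-0.5e-3, 0.5e-3),  # comparator input offset, V
}

trip_currents = []
for rdson_f, gain, vofs in product(*corners.values()):
    # Trip when I * R_shunt * rdson_f * gain >= V_TH + offset
    i_trip = (V_TH_NOM + vofs) / (R_SHUNT_NOM * rdson_f * gain)
    trip_currents.append(i_trip)

print(f"trip current: {min(trip_currents):.1f} A .. {max(trip_currents):.1f} A "
      f"(nominal {V_TH_NOM / R_SHUNT_NOM:.0f} A)")
```

Enumerating the corners like this also shows you which parameter dominates the span (here the R_DS(on) spread), so you know where tightening tolerances actually pays off.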