That is, we never, ever try to divide anything by zero there. Instead, we can think of the process as looking at how the ratio changes as \$h\$ gets closer to zero, and extrapolate the value the ratio would take if we could calculate it at \$h = 0\$.
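For a concrete instance (taking \$f(x) = x^2\$ purely as an example), the ratio simplifies so the extrapolation is easy to see:

$$\frac{f(x+h) - f(x)}{h} = \frac{(x+h)^2 - x^2}{h} = \frac{2xh + h^2}{h} = 2x + h,$$

which heads straight to \$2x\$ as \$h \to 0\$, without anyone ever dividing by zero.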
IIRC Newton had this concept of using something that is almost zero (tastes, feels and quacks like zero) as the divisor when forming the concepts and ideas of modern calculus. Much more intuitive, but unfortunately not mathematically rigorous.
That was fixed and reformed much later with the idea of limits.
The way I intuitively grasped it in comprehensive school was by realizing that if one graphed the function
$$g(h) = \frac{f(x+h) - f(x)}{h}$$
(with a specific value of \$x\$, of course), then the limit \$h \to 0\$ is what \$g(0)\$ would be if it could be evaluated. Thus, extrapolation.
If you introduce it by drawing a curve, picking a point on that curve, and asking the students how they'd find the tangent of the curve at that point, it's a very easily grasped concept.
It becomes even more so if you tabulate or plot the graph \$g(h)\$ with a logarithmic \$h\$ axis (\$h = 0.1, 0.01, 0.001, 0.0001, 0.00001, \dots\$).
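Here's a minimal sketch of that tabulation (taking \$f(x) = x^2\$ and \$x = 1\$ purely as example values); the printed \$g(h)\$ values visibly settle toward the tangent slope 2:

```python
# Tabulate the difference quotient g(h) = (f(x + h) - f(x)) / h
# for logarithmically shrinking h, as an illustration with f(x) = x**2 at x = 1.

def f(x):
    return x * x

def g(x, h):
    return (f(x + h) - f(x)) / h

x = 1.0
for k in range(1, 7):          # h = 0.1, 0.01, ..., 0.000001
    h = 10.0 ** (-k)
    print(f"h = {h:<10g} g(h) = {g(x, h):.6f}")
```

The values come out as 2.1, 2.01, 2.001, ... which is exactly the extrapolation-to-\$h = 0\$ picture.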
...In the sixties the duck was born again with the hyperreals.
Eww.
Well, what else can you expect from mathematicians? I only apply the stuff they come up with to solve problems.