what does (Int16) or in xc8 (int16_t) do?
You really do need to understand that, at least somewhat, to port this code, since it's using it all over the place. An expression like:
(int32_t) a
is called a "cast"; it means that regardless of the type of variable "a", it should be treated as a 32-bit integer (int32_t) "at this point in the code." Sometimes this involves format conversion - casting a float to int32_t actually converts to an integer, and casting an int16 to an int32 does 16 bits of sign extension. (Other times it's more of a "re-interpret the bits you have"; for example, casting an int to a pointer.)
In this code, the inputs and outputs are all 16bit integers (and sometimes Q15 scaled integers), so when the code does something like:
iAngle = (Int32) K1 * (Int32) iRatio;
it means "even though K1 and iRatio are 16bit numbers, I'm doing some calculations that are going to need 32bits to get enough range or precision, so you need to convert everything to 32bits first, and use 32bit math." That means that your attempted conversion:
iAngle = (Int32) K1 * (Int32) iRatio;
// to
Angle = (int16_t) K1 * (int16_t) Ratio;
is particularly wrong. The original says to do the calculation in 32bits, and you're saying to only use 16bits. It looks like the original code can be mostly converted simply by changing "Int16" to "int16_t" and "Int32" to "int32_t".
The original idivide code had an additional subtlety (bug, IMO) because it left out one of these casts. In
itmp = (Int16)((itmp * ix) >> 15);
the calculation of itmp*ix needs to be done with 32bits, even though the inputs are 16bits and the output of the whole expression is eventually 16bits as well. Here, there are some default "promotion" rules that come into play. C (and I guess C# as well) says that intermediate results in an expression are of type "int." On an x86 or ARM (where C# runs) that's 32bits (and the code works), and on an AVR it's 16bits and the code fails. It could be fixed like this:
itmp = (Int16)(((Int32)itmp * ix) >> 15);