Yeah, I expect the jaggedness of the data is due to random errors of some sort. But when presented with a problem like this I usually turn to a tool that I got for free a while ago.
A US professor created a methodology not for fitting a data set to a given function, but for finding an optimal function to fit through the data set. He was the speaker in a fascinating YT vid on the subject. He called it Formulize and offered it for FREE to all comers, though without a couple of enhancements. He later changed the name to Eureqa and kept improving it; that's when I got a copy of it for WIN 7. In the last 2-3 years his outfit must have been sold to a company called Nutonian, and they now sell copies of it for cash only - sigh - but a 30-day free trial is still available.
Anyway, I put your 32-point dataset into my copy and this is what it came up with in 10 minutes. Errors were input as percent figures. First, your data,
Then the result. The top right-hand chart compares the error values from your Vin data (red line) with those predicted by the generated function (blue dots). The function displayed is the best fit among the several candidate functions it tried, listed on the left.
The function shown is:
Error = 0.0923823416426414 - 0.0337946466493136*Vin + 0.00055657868106123*Vin^2 - 3.06788272866507e-6*Vin^3 + 3.64813221897325e-11*Vin^5 - 2.31333395165299e-16*Vin^7
It has an "R^2 goodness of fit" of 0.8809998, a correlation coefficient of 0.94465699 and a Mean Squared Error of 0.0058587105.
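If anyone wants to sanity-check those figures, here is a rough Python sketch that evaluates the reported function and recomputes the MSE, R^2 and correlation coefficient from their standard definitions. It is not the tool's own output; the vin_values and measured_error_pct names are placeholders for the 32 measured points.

```python
import numpy as np

def eureqa_error(vin):
    """Predicted error (percent) for a given Vin, using the fitted function above."""
    vin = np.asarray(vin, dtype=float)
    return (0.0923823416426414
            - 0.0337946466493136 * vin
            + 0.00055657868106123 * vin**2
            - 3.06788272866507e-6 * vin**3
            + 3.64813221897325e-11 * vin**5
            - 2.31333395165299e-16 * vin**7)

def fit_stats(vin, error_pct):
    """Recompute MSE, R^2 and the correlation coefficient from their definitions."""
    error_pct = np.asarray(error_pct, dtype=float)
    pred = eureqa_error(vin)
    resid = error_pct - pred
    mse = np.mean(resid**2)
    r_squared = 1.0 - np.sum(resid**2) / np.sum((error_pct - error_pct.mean())**2)
    corr = np.corrcoef(error_pct, pred)[0, 1]
    return mse, r_squared, corr

# Usage, with your 32 measured points substituted for the placeholders:
# mse, r2, r = fit_stats(vin_values, measured_error_pct)
```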
I limited the generated function to constants, arithmetic, trig, power, log and root terms. I have tried allowing others, including inverse trig, hyperbolic trig, logical and squashing functions, but they produced no better results in 10 minutes.
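Eureqa itself is closed-source, but for anyone who wants to play along without the 30-day trial, the free gplearn package does a broadly similar genetic-programming search over candidate expressions. This is only a sketch of how I would set it up, not what Eureqa does internally; the parameter choices are guesses, and the X/y placeholders stand in for the 32-point dataset.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor  # pip install gplearn

# X must be 2-D (one column per input variable), y is the target:
# X = np.asarray(vin_values, dtype=float).reshape(-1, 1)
# y = np.asarray(measured_error_pct, dtype=float)

est = SymbolicRegressor(
    population_size=2000,
    generations=50,
    # roughly the building blocks I allowed in Eureqa: arithmetic, trig,
    # log and root terms (integer powers arise from repeated multiplication)
    function_set=('add', 'sub', 'mul', 'div', 'sqrt', 'log', 'sin', 'cos'),
    metric='mse',
    parsimony_coefficient=0.001,   # penalise overly long expressions
    random_state=0,
    verbose=1,
)

# est.fit(X, y)
# print(est._program)   # best expression found
```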
Even if this is useless it at least allowed me to play with this tool and to kill half an hour.
Cheers