I understand your point, magic, but I disagree; not because I have a different opinion, but because I think your approach leads to problems when trying to understand more complex mathematical "stuff".

If it were just a matter of opinion or different experiences, differing opinions would simply be useful, because people do think and learn in different ways.

First, a couple of points:

What I mean is that one definition is essentially an algorithm (fastest or not), while the other is some differential equation which I am supposed to know how to solve before even thinking about calculating anything.

Using a ratio to approximate a real number, then calculating the power as the denominator'th root of the numerator'th power of the value, *is itself an algorithm*. Calculating an N'th root of a value is nontrivial; quite a lot of work, really.
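To make that point concrete, here is a sketch of what the ratio-and-root procedure involves, in Python (my own illustration, not necessarily the exact procedure you had in mind; `nth_root` and `power_via_ratio` are names I made up):

```python
from fractions import Fraction

def nth_root(x, n, tolerance=1e-12):
    """Compute the n'th root of x > 0 via Newton's method."""
    guess = x if x >= 1.0 else 1.0
    while True:
        # Newton step for f(r) = r**n - x
        next_guess = ((n - 1) * guess + x / guess ** (n - 1)) / n
        if abs(next_guess - guess) < tolerance:
            return next_guess
        guess = next_guess

def power_via_ratio(a, b, max_denominator=1000):
    """Approximate a**b by taking b ≈ c/d, then the d'th root of a**c."""
    ratio = Fraction(b).limit_denominator(max_denominator)
    c, d = ratio.numerator, ratio.denominator
    return nth_root(a ** c, d)

print(power_via_ratio(2.0, 0.5))  # approximately 1.41421356..., i.e. sqrt(2)
```

Note that even this small sketch hides real work: the Newton iteration in `nth_root` is itself a loop that must converge, and `a ** c` can blow up for large numerators.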

> Even if you make the obvious fix and simply define exp as the usual power series

No, I did not, and would not. The intuitive or real-world definition of the exponential function is "the curve which has value 1 at x = 0, and whose slope at each point equals its value there". The power series is just one way to apply it or *evaluate it*: to calculate a specific point on that curve.
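To illustrate that distinction, the power series as a pure evaluation tool looks like this (a minimal sketch; `exp_series` is my own name):

```python
def exp_series(x, terms=30):
    """Evaluate exp(x) at one point by summing 1 + x + x**2/2! + x**3/3! + ..."""
    total = 1.0
    term = 1.0
    for n in range(1, terms):
        term *= x / n   # term is now x**n / n!
        total += term
    return total

print(exp_series(1.0))  # approximately e = 2.718281828...
```

The series computes one point on the curve; it is not itself the definition of the curve.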

Now, back to the core disagreement.

OP asked how to intuitively grasp *a*^{b} when *b* is not an integer. They intuitively grasp the case where *b* is an integer, as multiplying *a* by itself *b* times.

If I understood you correctly, your point is to just approximate *b* with a ratio *c/d*, so that *a*^{b} ≃ *a*^{c/d}, in which case *a*^{b} is approximately equal to the *d*'th root of *a*^{c}.

I disagree with that, because it gives an incorrect intuition about the continuity and other properties of exponentials and real powers; intuitions that will cause difficulty in understanding more in-depth mathematical concepts.

(My objection is similar to the one when teachers tell kids that electrons orbit nuclei like planets orbit the sun. They do not. Electrons do have properties like angular momentum and orbital radius that make the orbit model one that gives a good intuitive grasp of *the properties of such electrons*, but the fact is, they're delocalized in a region around the nuclei in a manner better examined using quantum mechanics, and are definitely not just whizzing around like a rock around a gravity well. It is an analog that works in one specific situation – when considering electron angular momentum and orbital properties – but is a hindrance when trying to understand anything else about atoms and molecules. Physicists like me don't get weirded out by this, because we learn to use different analogs depending on the situation, and understand that those are just tools to help us think, not a representation of reality.)

My own suggestion is basically this:

The "best" one (the one that makes any further math easier to integrate into one's understanding and mathematical toolbox) is to just consider non-integer powers as an *"extension"* of the integer ones, with exactly the same rules and behaviour. That is, to understand that not every mathematical tool has an intuitive real-world analog, and that requiring such intuition can hinder one's use of math. In math, it is perfectly okay to multiply *a* by itself 2.1276352 times, because fractional "numbers of times" are just an extension of integer numbers of times, and have the exact same properties. The fact that you cannot have 2.1276352 items, because it is not a natural number, is just completely irrelevant in this context.

The answer to exactly *how* to multiply something by itself a non-integer number of times is via a mathematical identity: *a*^{b} = e^{b log_{e} a}, where e^{x} ≝ exp(*x*) is the curve whose slope at each point equals its value there, with exp(0) = 1; and where log_{e} *x* is the curve whose slope at *x* is 1/*x*, with log_{e} 1 = 0. We have several different tools for calculating any point on those curves.
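Concretely, that identity is a one-liner in Python using the standard `math` module (a minimal sketch, valid for a > 0; `power_via_exp_log` is my own name):

```python
import math

def power_via_exp_log(a, b):
    """Compute a**b through the identity a**b = exp(b * log_e(a)); requires a > 0."""
    return math.exp(b * math.log(a))

# "Multiplying a by itself 2.1276352 times" is now just ordinary arithmetic:
print(power_via_exp_log(2.0, 2.1276352))
```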

In fact, when you tell a current computer to calculate *a*^{b} for you (in C, `pow(a, b)`; in Python, `a**b`; and so on), it actually uses a base-two exponent and logarithm: *a*^{b} = 2^{b log_{2} a}. Mathematically, 2^{x} = e^{x log_{e} 2} and log_{2} *y* = log_{e} *y* / log_{e} 2, so the two bases differ only by the constant factor log_{e} 2. It turns out there are very fast and efficient techniques, or algorithms, to calculate base-2 exponentials and logarithms when numbers are expressed in binary floating-point format, *m*·2^{p}. The IEEE-754 standard defines two such formats, Binary32 and Binary64, that use exactly this form, and these are used by almost all current computer architectures (typically as the "float" and "double" real types). Intel x86 and AMD64 processor architectures include machine instructions that do these operations in hardware, and have had them for decades.
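A rough illustration of the base-2 route, again using Python's standard `math` module (real `pow` implementations are far more careful about accuracy and edge cases than this sketch):

```python
import math

def power_via_base2(a, b):
    """Compute a**b as 2**(b * log2(a)), the base the hardware favors; requires a > 0."""
    return 2.0 ** (b * math.log2(a))

# Binary floating-point values are already stored as m * 2**p, which is
# what makes base-2 logarithms and exponentials cheap to extract:
m, p = math.frexp(6.0)   # mantissa in [0.5, 1) and the base-2 exponent
print(m, p)              # 0.75 3, since 6.0 == 0.75 * 2**3
```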

See? I understand why one would find the root-of-power approach better, even more powerful, but I don't like it because of how it *can* affect one's further understanding of math. I like to separate the *what* and the *how*, with an approach for understanding the *what* that isn't likely to bite oneself in the ass later on.

Of course, if one can take both models and simply switch between them, they're way ahead of either of us already (since we're still here discussing which one to use), and can use such analogs themselves as tools, switching between them as needed – if they need such analogs at all. But I suspect that those people are good at math anyway, and don't need our help!