The arithmetic right-shift operation on negative numbers in two's complement format isn't exactly division by a power of two, because an arithmetic right shift of \$v\$ by \$n\$ bits actually implements
$$v^\prime = \left\lfloor \frac{v}{2^n} \right\rfloor = \begin{cases}
\displaystyle \frac{v}{2^n} \text{ rounded toward zero}, & v \ge 0 \\
\displaystyle \frac{v}{2^n} \text{ rounded away from zero}, & v < 0 \\
\end{cases}$$
That is, most current C compiler implementations yield

    (-63) >> 1 == -32   but   -63/2  == -31
    (-61) >> 2 == -16   but   -61/4  == -15
    (-57) >> 3 == -8    but   -57/8  == -7
    (-49) >> 4 == -4    but   -49/16 == -3

because they use two's complement format for signed integers, and arithmetic right shift is the same as shifting the stored bit pattern right, with the shifted-in bits getting the value of the original sign bit.
In particular, for n > 0, we have (-1) >> n == -1.
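If you want to check this with your own compiler, here is a minimal sketch (it assumes the compiler implements >> on negative signed values as an arithmetic shift, which the C standard leaves implementation-defined but which practically all current compilers do):

    #include <stdio.h>

    int main(void)
    {
        /* Arithmetic right shift rounds toward negative infinity,
           integer division rounds toward zero. */
        printf("(-63) >> 1 == %d,  -63/2  == %d\n", (-63) >> 1, -63 / 2);
        printf("(-61) >> 2 == %d,  -61/4  == %d\n", (-61) >> 2, -61 / 4);
        printf("(-57) >> 3 == %d,  -57/8  == %d\n", (-57) >> 3, -57 / 8);
        printf("(-49) >> 4 == %d,  -49/16 == %d\n", (-49) >> 4, -49 / 16);
        printf("(-1)  >> 5 == %d\n", (-1) >> 5);  /* -1 for any n > 0 */
        return 0;
    }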
The mathematical properties and the ease of integer negation and bit-pattern inversion (both are supported by all instruction set architectures, and are extremely lightweight operations) are what make this implementation superior, for both programmers and compiler writers.
In particular, when you have an unsigned integer value with at least one most significant bit unused, negating the value v, arithmetic right-shifting it by n bits, and negating the result again implements

    v' = (v >> n) + !!(v % (1 << n))

i.e., it increments the shifted result by one whenever any of the bits shifted out were nonzero. (!! is the not-not operator, which yields 0 if the operand is zero, and 1 if the operand is nonzero.) This is surprisingly commonly used in various padding calculations, and the most common way it is implemented is
    v' = 1 + ((v - 1) >> n)

which yields exactly the same results for v ≥ 1, but has issues with v = 0 (unless arithmetic right shift is used for the v = 0 case, such that (-1) >> n == -1).
In both cases, one or both of the negations, as well as the increment and/or decrement, can be merged with other operations in the same expression. The only difference is how v = 0 behaves. (Technically, the negation approach is also limited to half the range of the unsigned type, whereas the decrement-shift-increment form supports the full range of the unsigned type except for zero.)
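As a concrete illustration, here is a hedged sketch of both idioms as C helpers for rounding up to a multiple of 2^n, as in padding calculations (the function names are mine, purely for illustration; the first one assumes the implementation-defined arithmetic right shift discussed above):

    #include <assert.h>

    /* Negate, shift, negate: computes ceil(v / 2^n) as -((-v) >> n).
       Handles v == 0, but v must also fit in the signed type, i.e. it is
       limited to half the range of the unsigned type.  Assumes the compiler
       does an arithmetic right shift on negative signed values. */
    static unsigned int ceil_shift_negate(unsigned int v, unsigned int n)
    {
        return (unsigned int)(-((-(int)v) >> n));
    }

    /* Decrement, shift, increment: computes ceil(v / 2^n) as 1 + ((v-1) >> n).
       Covers the full unsigned range except v == 0, where v - 1 wraps around. */
    static unsigned int ceil_shift_decrement(unsigned int v, unsigned int n)
    {
        return 1u + ((v - 1u) >> n);
    }

    int main(void)
    {
        /* For example, 100 bytes need 7 units of 16 bytes (2^4) each. */
        assert(ceil_shift_negate(100u, 4) == 7u);
        assert(ceil_shift_decrement(100u, 4) == 7u);
        assert(ceil_shift_negate(0u, 4) == 0u);  /* the v == 0 difference */
        return 0;
    }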
This is also a perfect example of where practical exploration and competition can prove superior to carefully planned design.
Any human language designer would be attracted to defining a negative right shift as division by a power of two, because of the symmetry and apparent simplicity. However, only optimization work on actual real-life operations would tell such a designer that no, the desire for symmetry there is unwarranted: because of practical code and machine implementations, being asymmetric in this way, following what actual processors do, yields better-performing code. Even the world of embedded computing is far too complex for any single human mind to comprehend, so at some point the designer has to relinquish some control and instead let the language evolve on its own, only holding the reins lightly, like for a horse going home, gently guiding the direction instead of dictating directions and considerations.
I think strict focus and careful planning are required at all stages of language design and extension/development.
Strict focus: yes, definitely. You want to keep the overall shape and paradigm (approach to problem solving) clear, and avoid trying to cater to every single whim.
Careful planning: In the latter stages, extensions do not need to be, and should not be, pre-planned; they can simply be experimented with as needed. Their inclusion into the standard proper, however, should be very carefully examined and investigated, to see whether they fit, do not break existing code, and do not pose other difficulties for existing implementations.
In contrast, the core language must be carefully planned beforehand. That part dictates what the approach to problem solving will be, and getting it even slightly wrong will require a new language; fixing such a mistake in a backwards-compatible manner just does not work well enough (again, Python 2to3).
I don't know if this is what you meant, though.
In my opinion and experience, pre-planning of possible extensions by language designers is a no-no, because they are limited to their own vision. Allowing nonstandard extensions to flourish, and then carefully picking what will be included in the "stable phase" of the language, is absolutely required for the language to stay current and as useful as possible in the long term. Heck, I even like having competing extensions, because that way a larger fraction of the possible extensions will be explored; and as long as the selection process is careful enough and the selections fit within the language, I claim this will result in better development than any human pre-planners can achieve.
(Of course, if you define pre-planning such that the extensions should be experimentally implemented first, before their inclusion in the language proper, then my needs are fulfilled. However, that also requires that language designers become compiler/interpreter implementers themselves, or at least work very closely with some. This is the crucial difference in my opinion: whether a language is fully defined as a theoretical construct first, or whether only its core, its idea, its paradigm is, with extensions and additions tested and examined in practice first, then modified, and finally selected for inclusion into the language.)