Anyone can edit Wikipedia, and Google serves up the results.
Maybe it is a joke.
Here's the solution:
Lesson 1: bytes are binary.
Lesson 2: a kilobyte is 1024 bytes.
Now you don't need to mention that bytes are binary.
Did you notice how the NIST page says "The NIST Reference on Constants, Units, and Uncertainty", just below that "International System of Units (SI)", and below that "Prefixes for binary multiples"?
That's very unfortunate and misleading, but it results from their page about binary prefixes being shoved into the general category of units.
They clear up the confusion in the following paragraph:
It is important to recognize that the new prefixes for binary multiples are not part of the International System of Units (SI), the modern metric system. However, for ease of understanding and recall, they were derived from the SI prefixes for positive powers of ten.
Meanwhile
This article is about the SI unit of 1000 bytes. For the binary unit of 1024 bytes, see kibibyte.
All attempts at diversion aside, NIST's position is pretty definitive. Now that's sorted, I have another question: is the Josephson constant K_J-90 = 0.4835979 GHz/μV, or is it K_J = 483597.84841698... GHz·V⁻¹?
We trust and covet NIST-traceable calibrations, so I gather we also trust how they define common terminology. In NIST we trust.
I stopped trusting NIST when they revealed themselves as complicit with the NSA in undermining encryption standards.
For revealing it or for collaborating?
They were told, so they should have known the standard was compromised when they promulgated it. They knowingly collaborated, whether that amounted to deliberate ignorance or something more.
IT people worry about precision down to a single byte; engineers know that no component is perfect and just apply a ±10–20 percent tolerance. Same deal as with screws: you don't calculate a screw to be "just right", because in the real world it would be overloaded or over-tightened. Instead you multiply by a safety factor (depending on how critical the machine is).
I've asked Microsoft to state the safety factor of their bytes. No answer as of yet.
You expect an answer? Forget it.
This is a true story:
Last night I was designing a product and was shopping for RAM. The catalog showed capacities in kibibytes! I was astounded, but realized that the change was happening.
Then I woke up. It was a dream.
Yes, I dream about engineering.
At least you dream of a better future.
IT people worry about precision down to a single byte; engineers know that no component is perfect and just apply a ±10–20 percent tolerance. Same deal as with screws: you don't calculate a screw to be "just right", because in the real world it would be overloaded or over-tightened. Instead you multiply by a safety factor (depending on how critical the machine is).
And I thought this thread was a bit dull.
So here we have a statement that exact numbers in engineering are not NEEDED, because we have to deal with uncertainty. So let's buy a RAM chip with 32 KB capacity ±20%. Who cares about exact figures? They are for the birds!
After all, there's "only" a difference of about 2.4% between 1k and 1Ki. Make that almost 5% between 1M and 1Mi, and over 7% between 1G and 1Gi. But who cares?
Just use a conservative 80% of the rated capacity of your memory, whatever unit it is given in. You're all set.
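Those gap figures are easy to verify: the binary prefix exceeds its decimal counterpart by (1024^n / 1000^n − 1), and the gap compounds with each step up. A quick sketch in Python (purely illustrative):

```python
# How far the IEC binary prefixes drift from their SI (decimal)
# counterparts as the exponent n grows: gap = 1024^n / 1000^n - 1.
for name, n in [("kilo vs kibi", 1), ("mega vs mebi", 2),
                ("giga vs gibi", 3), ("tera vs tebi", 4)]:
    gap = (1024 ** n / 1000 ** n - 1) * 100
    print(f"{name}: {gap:.1f}% difference")
# kilo vs kibi: 2.4% difference
# mega vs mebi: 4.9% difference
# giga vs gibi: 7.4% difference
# tera vs tebi: 10.0% difference
```

By the tera/tebi level the "rounding error" is already a full 10 percent, which is why a 1 TB drive formats to roughly 0.91 TiB.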
Don't forget the uncertain number of bits in your bitfield.
Now we're getting into quantum computing territory...