Dave, when you started the video the light was quite bright (I like it). At some point it dimmed. Do you use a spotlight, and did you turn it off for the laptop's screen capture?
I utterly HATE software religion wars. One simple SPI routine has turned into a war of optimisation, readability, what optimisation "really" means, what compiler is "best" and cock waving comparisons.
No. This is a dictatorship, I'll code the way I like, no one else gets any say in it until it's released
You haven't read the postings you are commenting on, have you?
I wonder if anyone is foolish enough to do a software coding video blog?
Dave.
Yup, order of magnitude worse than hardware...
I utterly HATE software religion wars. One simple SPI routine has turned into a war of optimisation, readability, what optimisation "really" means, what compiler is "best" and cock waving comparisons. This is why I prefer to stick with hardware.
Move on guys. If it works, it works. If you want to rewrite it to please your particular software God, then feel free but don't drag everyone else into it. Linux only got anywhere because one person was in charge.
I'd have to agree with BaW and wonder whether you've actually read this thread.
What else is the, "Ooh, you did it wrong, you should have used a loop!" argument all about?
One reason software blogs on the net are so unproductive is that people perpetually get bogged down in the same questions that have been done to death a thousand times before.
Citation needed.
Look how easy it was for Dave to shift the two null bits from the end to the beginning.
Citation needed.
I really meant the comments/discussion part. There are certainly people out there with interesting things to blog about on software development (even if Joel Spolsky ran out of fresh material a long time ago).
I can't understand why Dave moved the zero bits from the end of the bit-banging sequence to the front.
From the formula in the datasheet, the output voltage would have been too high. I wanted 1mV/mA output.
Dave.
The error I am seeing has nothing to do with this.
I want and need 1mV/bit output, because that is what matches my current sense amp output. And to get that with the 2.048V reference and 12 bits I need to use the lower 10 bits with the x2 amp.
Dave.
But... are you sure you're not just sending 8 bits to the DAC? The DAC wants two bytes, i.e. 16 bits. The first four bits are config bits, so you're left with twelve. You're sending two zeros (discard bits) at the front and then 10 bits of data, but the last two are being dropped by the DAC. 12 - 2 (at the front) - 2 (at the end) = 8 bits of significant data being sent.
Based on the blog at 20:03 maybe the discard bits should be back at the end after the data bits instead of in front?
If something is wrong the output would change in larger steps with every 2 or 4 increments rather than one small step per increment.
I don't know if anyone has pointed this out already, but I think it would be nice if Dave made different kits for different mains power specifications (like EU, US, etc.).
Dave is using a 12-bit DAC, shifted out MSB first, for the test; of the 12 bits, only the lower 10 are used.