I debug & test complex algorithms and other code on a PC. Much easier and faster. For example: a PC can eat through half an hour's worth of audio samples in the blink of an eye. Or how about unit/stress testing a complicated memory allocator or protocol stack? Again, a PC can go through many more scenarios in less time than a microcontroller, and create a comprehensive report on disk as well.
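To illustrate the allocator stress-testing idea, here is a minimal PC-side harness sketch. The `my_alloc`/`my_free` names are stand-ins for whatever allocator you want to exercise; here they just wrap `malloc`/`free` so the harness is self-contained and runnable as-is.

```c
/* PC-side stress test sketch: allocate/free in a pseudo-random pattern,
   tagging every block with a byte pattern so corruption is detectable. */
#include <stdint.h>
#include <stdlib.h>

/* Stand-ins for the allocator under test -- substitute your own API. */
void *my_alloc(size_t n) { return malloc(n); }
void  my_free(void *p)   { free(p); }

/* Returns 1 if no corruption was observed, 0 otherwise. */
int stress(unsigned iterations, unsigned seed)
{
    enum { SLOTS = 64 };
    uint8_t *slot[SLOTS] = { 0 };
    size_t   size[SLOTS] = { 0 };
    srand(seed);
    for (unsigned i = 0; i < iterations; i++) {
        unsigned s = (unsigned)rand() % SLOTS;
        if (slot[s]) {                       /* verify the tag, then free */
            for (size_t j = 0; j < size[s]; j++)
                if (slot[s][j] != (uint8_t)(s ^ j)) return 0;
            my_free(slot[s]);
            slot[s] = NULL;
        } else {                             /* allocate and tag */
            size[s] = 1 + (size_t)rand() % 256;
            slot[s] = my_alloc(size[s]);
            if (!slot[s]) return 0;
            for (size_t j = 0; j < size[s]; j++)
                slot[s][j] = (uint8_t)(s ^ j);
        }
    }
    for (unsigned s = 0; s < SLOTS; s++)     /* release survivors */
        if (slot[s]) my_free(slot[s]);
    return 1;
}
```

On a PC you can crank `iterations` into the millions and run many seeds in seconds, which is exactly the coverage you can't get on target.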
application-specific hardware?
Indeed. But then you battle with the LPC-Link2 hardware debugger's Redlink Server software interface, which seems to break at the drop of a hat, and you have to unplug everything, restart the IDE and/or terminate Redlink Server from Task Manager. Once you've done that a few dozen times, you learn how best to avoid the crashes, as well as how to streamline the inevitable recovery process when one happens.
Application-specific hardware usually only needs a very thin driver layer, and you can use test equipment to verify whether it is controlled correctly. The other software talks to the hardware through some kind of driver API. That driver API can be modelled so the rest can be tested on a PC. All in all, how far to take testing on a PC depends on the complexity of the system. One of my first big projects was an ISDN protocol stack. I developed & verified it on a PC first, using test benches for various situations, so when I ran it on a microcontroller (after a couple of months of development) it almost worked the first time; I had to chase a nasty compiler bug to get it working.
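The "model the driver API" idea can be sketched like this. The names (`uart_driver`, `uart_mock`, the length-byte framing) are made up for illustration: the point is that the protocol layer only ever sees the function-pointer interface, so on a PC the driver can be a mock that just records the bytes.

```c
/* Sketch: protocol code written against a thin driver interface,
   so a PC build can swap in a mock driver for testing. */
#include <stddef.h>
#include <string.h>

typedef struct {
    int (*send)(const void *buf, size_t len);  /* returns bytes sent */
} uart_driver;

/* Protocol layer: knows nothing about the hardware, only the API.
   Toy framing: one length byte, payload, terminating '\n'. */
int send_frame(const uart_driver *drv, const char *payload, size_t len)
{
    char frame[64];
    if (len + 2 > sizeof frame) return -1;
    frame[0] = (char)len;
    memcpy(frame + 1, payload, len);
    frame[len + 1] = '\n';
    return drv->send(frame, len + 2);
}

/* PC-side mock driver: records what would have gone out on the wire. */
char   mock_buf[64];
size_t mock_len;
static int mock_send(const void *buf, size_t len)
{
    memcpy(mock_buf, buf, len);
    mock_len = len;
    return (int)len;
}
const uart_driver uart_mock = { mock_send };
```

On target, the same `uart_driver` struct points at the real register-level implementation; the protocol layer is identical in both builds.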
Well yes, but what about your application-specific hardware?
If you can get a set of complex device-specific peripherals configured correctly, with DMA, timers and interrupts behaving well with your external on-board hardware, working first time and fit for purpose just from reading a data sheet, then I lift my cap to you: you are a better man than me!
and as expected ...
http://www.prnewswire.com/news-releases/microchip-technology-to-acquire-atmel-300206644.html
AVRs will come with the Microchip logo in the future ... I don't think there will be any issues with this merger.
I found this on a forum ...
"I felt a great disturbance in the EE field, as if millions of Atmel vs. Microchip arguments cried out, and were suddenly silenced"
I build/verify these parts of the code step by step using test equipment, toggling I/O pins and collecting statistical data like the number of interrupts, number of bytes transferred, DMA transfers, etc. There is not much you can do here with a debugger because interrupts and DMA transfers often have hard real-time requirements.
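The statistics-collection approach can be sketched as follows. The counter names and the ISR are illustrative; the essential points are that the ISR does nothing but bump `volatile` counters (so it never violates its real-time budget) and that the main loop reports them.

```c
/* Sketch: ISRs only increment counters; a report function (or shell
   command) prints them later, outside interrupt context. */
#include <stdint.h>
#include <stdio.h>

volatile uint32_t irq_count;      /* bumped on every UART RX interrupt */
volatile uint32_t dma_done_count; /* bumped on every DMA-done interrupt */
volatile uint32_t bytes_rx;       /* running received-byte total */

/* What the real RX ISR would do -- on a PC we just call it directly. */
void uart_rx_isr(uint8_t byte)
{
    (void)byte;       /* the real ISR would store the byte somewhere */
    irq_count++;
    bytes_rx++;
}

/* Format the counters into a human-readable one-liner. */
int report(char *out, size_t n)
{
    return snprintf(out, n, "irq=%lu dma=%lu rx=%lu",
                    (unsigned long)irq_count,
                    (unsigned long)dma_done_count,
                    (unsigned long)bytes_rx);
}
```

Together with a toggled I/O pin on a scope, counters like these tell you whether the interrupt and DMA machinery behaves, without ever halting the core.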
In some of my projects I have a mem command to look at memory locations. In my experience it helps to keep these kinds of tools in the firmware for diagnosis later on, when a product is installed somewhere. With most firmware you need to be able to debug/diagnose it long after you have the possibility to connect a debugger. IOW: if I feel that I need to look at certain information now to check things, I can foretell that I will want to look at that information later on as well.

With a debugger you have to point it at a location (out of many...), look at the contents, interpret the contents and then decide whether you expect that value or not. In my firmware I typically have a status command which dumps all these kinds of values in one go, in plain text with the proper units. That is much easier to understand, and it is much easier to see whether some values are wrong. Firmware needs a certain ability to do (self) diagnostics.
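A status command like the one described can be sketched with a small name/value/unit table. The variable names and values here are invented for the example; in real firmware the table entries would point at live state.

```c
/* Sketch: dump all diagnostic values in one go, in plain text with
   units, so no debugger (or memory map knowledge) is needed. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    const char        *name;
    volatile uint32_t *val;
    const char        *unit;
} stat_entry;

/* Illustrative live values -- in real firmware these are updated
   elsewhere (ISRs, control loops, monitors). */
volatile uint32_t vbus_mv = 4980, temp_dc = 42, uptime_s = 360;

static const stat_entry stats[] = {
    { "bus voltage", &vbus_mv,  "mV"   },
    { "temperature", &temp_dc,  "degC" },
    { "uptime",      &uptime_s, "s"    },
};

/* Render the whole table; returns the number of characters written. */
int status_cmd(char *out, size_t n)
{
    size_t used = 0;
    for (size_t i = 0; i < sizeof stats / sizeof stats[0]; i++)
        used += (size_t)snprintf(out + used, n - used, "%-12s %8lu %s\n",
                                 stats[i].name,
                                 (unsigned long)*stats[i].val,
                                 stats[i].unit);
    return (int)used;
}
```

One glance at the dump shows whether anything is out of range; no poking at addresses, no interpreting raw hex.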
With most firmware you need to be able to debug/diagnose it long after you have the possibility to connect a debugger.
I'll do precisely this tonight. For fun, I will _video_ the result.
In applications, the PIC32MX is just as good as any ARM M0 or M3. Unfortunately the powers that be have chosen to take a rather bizarre direction in the software support, using excessive abstraction and unnecessarily implementing a proprietary software framework more at home with a heavyweight OS than a microcontroller. Luckily on the MX series the old MLA library is still available, although it is deprecated and an organically grown mess. The new MZ devices pretty much force you into using the new framework.
That's why I never use vendor frameworks. Ever. It's just not worth the pain and hassle. I'd rather take the time to write my own. Then I know it's done right.
Are you really sure you want to write your own USB and Ethernet stacks?
Usually the USB and Ethernet stacks aren't vendor-provided to begin with. But for simple peripherals like SPI, UART, CAN, I2C, GPIO, timers, etc., you are far better off writing your own drivers or using field-proven code from third parties than relying on the vendor-provided libraries.
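For a simple peripheral, "write your own" really is only a handful of lines. The register layout below is made up for illustration (on real hardware the struct would be placed at the datasheet's base address); here it overlays a plain variable so the sketch also runs on a PC, in the spirit of the PC-testing discussion above.

```c
/* Sketch: a thin register-level driver for a hypothetical SPI block.
   Register names and the loopback behaviour are invented for the
   example -- map the struct to the real base address on target. */
#include <stdint.h>

typedef struct {
    volatile uint32_t DATA;   /* write: transmit; read: last received */
    volatile uint32_t STATUS; /* bit 0: transmitter ready             */
} spi_regs;

/* PC stand-in for the peripheral; on target this would be e.g.
   #define SPI ((spi_regs *)0x40003000)  -- address is hypothetical. */
spi_regs fake_spi = { 0, 1 };
#define SPI (&fake_spi)

/* Blocking byte transfer: wait for ready, write, read back. */
uint8_t spi_xfer(uint8_t out)
{
    while (!(SPI->STATUS & 1u)) { }  /* spin until TX ready */
    SPI->DATA = out;                 /* start the transfer  */
    return (uint8_t)SPI->DATA;       /* fake block loops data back */
}
```

A driver this thin is trivial to review against the datasheet, and there is no framework between you and the silicon when something misbehaves.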