Author Topic: How To Best Access the Var Data/Get Pgm Flow Insight/Debug in MCU Pgming  (Read 7106 times)


Offline SuzyCTopic starter

  • Frequent Contributor
  • **
  • Posts: 792
I am quickly beginning to realize that as my programs become more sophisticated, creating code that does the job well has become a continuous, time-wasting process of trial and error.

I am working on motor control and other complex robotics problems that require careful tuning.
 
My coding efficiency is limited by my ability to effectively see my code work, and my testing is limited by my ability to get instant feedback on how the target MCU is dealing with my code.

 I end up spending too much time guessing what my code is actually doing and then, upon testing, I am too often astonished by the results!

The problem is watching program flow and getting the debugging info..uhmm I think I need lots more feedback.

I have been working with a Hi-Tech DOS C-language compiler and I love this simple code and compile method.

I am now starting to work with mid-range PIC18F chips that are fairly powerful, but I still find these parts lack the R/W data memory to store large amounts of data, and they lack a way to transfer the information quickly back to my PC to save into a file, so I can't closely monitor my code's operation even over a short period of running.

I need a way to capture and quickly export a large amount of var data to get close to a real-time way of gaining insight into my code, so I can analyze the why, what, and when of my program's decisions. In short, I need a quick way to see how my code is working (or not).

At present, my best idea has been to use only a single pin and do it the RS-232 way: use a UART TX pin of the PIC MCU to periodically output vars, a single-pin gateway for debugging, because almost all the other pins are in use. But even though I can see the stream of data as a disorganized collection of bytes in an RS-232 terminal program in Windows, it is really nonsense for me to even attempt to access and organize this pile of pure digital #@!#@ info effectively.

How do you big boys do this??
« Last Edit: April 18, 2015, 04:14:33 pm by SuzyC »
 

Offline Stupid Beard

  • Regular Contributor
  • *
  • Posts: 221
  • Country: gb
You need a proper debugger. If you're sticking with PICs then get either a PICKit3 or an ICD3, and use MPLAB's debugger. That will let you set breakpoints, single-step through code, and view all of the PIC's registers/memory.

The ICD3 is better, but the PICKit3 is cheap. There was a thread here recently comparing them if you want more detailed info.

Using the UART is the MCU equivalent of printf debugging on desktops. There are occasions where it's useful, but for the vast majority of work there is no substitute for a proper debugger.
 

Offline SuzyCTopic starter

  • Frequent Contributor
  • **
  • Posts: 792
Thank you Stupid Beard,

I have no problem setting breakpoints in my code manually, but I usually want to monitor the program's operation and store the vars in a continuous stream as the program proceeds, so I can then analyze the result.

Single-stepping through code is mostly useless when the process being controlled obeys the laws of physics, cannot be interrupted, and needs to be monitored in real time.
 

Offline Stupid Beard

  • Regular Contributor
  • *
  • Posts: 221
  • Country: gb
Ah, sorry, I misunderstood what you were asking.

One option is to keep a circular buffer in memory and write debugging info into that every step through your control loop. Since it's a limited size it will start overwriting itself pretty quickly, but writes to it will be faster than waiting for the UART. When something goes wrong, or maybe periodically if that helps, disable writes to the buffer and dump it out over the UART.

You won't have everything that happened, but you will have the last few iterations of the control loop which can often be enough to figure out what's going on.
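A rough sketch of the idea (the entry fields and the uart_send_bytes() routine are just placeholders for whatever you actually have):

Code: [Select]
/* Circular debug buffer: cheap to write from inside the control loop,
   dumped over the UART only when you ask for it. */
#define LOG_SIZE 32                   /* entries kept; the oldest get overwritten */

typedef struct {
    unsigned int  timestamp;          /* e.g. a timer tick count   */
    int           setpoint;
    int           measured;
    unsigned char state;              /* program step / state flag */
} log_entry_t;

static log_entry_t   log_buf[LOG_SIZE];
static unsigned char log_head = 0;
static unsigned char log_enabled = 1;

void uart_send_bytes(unsigned char *p, unsigned char n);   /* your UART send routine */

void log_write(unsigned int t, int sp, int meas, unsigned char st)
{
    if (!log_enabled) return;
    log_buf[log_head].timestamp = t;
    log_buf[log_head].setpoint  = sp;
    log_buf[log_head].measured  = meas;
    log_buf[log_head].state     = st;
    log_head = (log_head + 1) % LOG_SIZE;          /* wrap around */
}

void log_dump(void)                   /* call this when something goes wrong */
{
    unsigned char i, idx;
    log_enabled = 0;                  /* freeze the buffer while dumping */
    for (i = 0; i < LOG_SIZE; i++) {
        idx = (log_head + i) % LOG_SIZE;           /* oldest entry first */
        uart_send_bytes((unsigned char *)&log_buf[idx], sizeof(log_buf[idx]));
    }
    log_enabled = 1;
}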

You may need to juggle what vars you log in the buffer to make best use of memory. If you don't have enough free memory to be useful, then you can also consider using a different PIC with more memory for debugging purposes.

Maybe someone else will have better suggestions, but that's what I'd try first off.
 

Offline Brutte

  • Frequent Contributor
  • **
  • Posts: 614
I am quickly beginning to realize (..) time-wasting process of trail and error.

You mean like in the 1960s and the beginning of the 1970s?
Quote
I end up spending too much time guessing what my code is actually doing and then, upon testing, I am too often astonished by the results!
I hope you do use a debugger (like in the 1980s and 1990s), don't you?
Quote
I am now starting to work with mid-range PIC18F chips that are fairly powerful
Now wait a minute.. Are we talking about the same 8-bitters from Microchip? Perhaps you should visit EEMBC website.
Quote
I need a way to capture and quickly export a large amount of data var data(..)
Here on Earth we call it "trace", the activity is called "tracing". Data trace if you trace data and program trace if you trace program (flow).

Quote
At present, my best idea has been to use only a single pin, do it the RS-232 way, use a UART TX pin of a PIC MCU to provide a single periodic output of vars, use a single pin output gateway to debug because almost all the pins are in use,
Then why have you picked a low pin count uC? Doesn't Microchip forge your PIC8-s in bigger LQFP's?

Quote
disorganized collection of (..) pure digital #@!#@  info (..)
That is more or less how trace works, with the difference that today it is a hardware, not software feature (to not interfere with the code). If you want that $%^@ in a more readable form then you have to get a uC with trace and an IDE with trace integration. I think MIPS4K has some trace hardware, not sure about software. Definitely there is no hardware tracing in PIC8s and PIC16s (from Microchip).

So concluding, perhaps you have picked an inadequate uC for your project..
 

Offline hans

  • Super Contributor
  • ***
  • Posts: 1634
  • Country: nl
I've written some "insight" code before that reported which parts of the program flow were taken, and it was also debugged via RS-232. It added a very significant overhead and would only work if the protocol speed was very low. I ran the UART at the fastest speed the uC and FT232RL serial cable would allow to work properly (750k - 1M baud, I think).
You certainly do not want it to be printing ASCII data via printf(), because that's very slow on any uC. I made an enumeration of messages to print (so each was just a single-byte ID), stuck on some raw bytes of the variables I wanted to print, and wrote a C# program that read the message table (live) from the code and monitored the serial port to display and "printf()" them. It worked quite well, but from that point on, searching for your event can be a pain as well.
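Roughly along these lines (just a sketch, not my actual code; the IDs and uart_putc() are placeholders):

Code: [Select]
/* One-byte message IDs instead of printf() strings; the PC side looks
   each ID up in a table and does the formatting there. */
enum {
    MSG_LOOP_ENTERED = 0x01,
    MSG_PWM_UPDATED  = 0x02,
    MSG_FAULT        = 0x03
};

void uart_putc(unsigned char c);               /* your existing UART write */

/* Send an ID byte followed by a fixed number of raw payload bytes. */
void dbg_msg(unsigned char id, const void *payload, unsigned char len)
{
    unsigned char i;
    const unsigned char *p = (const unsigned char *)payload;

    uart_putc(id);
    for (i = 0; i < len; i++)
        uart_putc(p[i]);
}

/* Usage: send the new PWM duty as raw bytes, no ASCII conversion:
 *     dbg_msg(MSG_PWM_UPDATED, &duty, sizeof(duty));
 */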

Hardware trace may be better suited, because it doesn't take a performance hit on your program. It usually runs on dedicated pins of the microcontroller, so just a JTAG connection is not enough. The debug probes also cost quite a bit.

If that's not an option, you *could* in some cases try to port your program to an x86 PC and build extensive analysis tools there. However, if you're into motor control and other unpredictable systems, writing a test environment for your program may not be very easy. But that can also justify the price of those hardware debug tools.
 

Offline andersm

  • Super Contributor
  • ***
  • Posts: 1198
  • Country: fi
Adapt your program so it can run on your desktop, with simulated inputs and outputs. Also look at agile development techniques. It's hard to beat a continuous integration server running a robust unit testing suite for rapid feedback.
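A minimal sketch of one way to structure that, so the same control code builds both for the target and for a desktop test harness (all names here are made up):

Code: [Select]
/* The control loop only ever calls these two functions; how they are
   implemented depends on the build. */
int  hal_read_encoder(void);
void hal_set_pwm(int duty);

#ifdef SIMULATION
/* Desktop build: a toy first-order motor model stands in for the hardware. */
static double sim_position = 0.0;
static double sim_velocity = 0.0;
static int    sim_duty     = 0;

int  hal_read_encoder(void) { return (int)sim_position; }
void hal_set_pwm(int duty)  { sim_duty = duty; }

/* Advance the fake motor; the unit-test harness calls this once per tick. */
void sim_step(double dt)
{
    sim_velocity += (0.01 * sim_duty - 0.1 * sim_velocity) * dt;
    sim_position += sim_velocity * dt;
}
#else
/* Target build: read the real encoder and write the real PWM registers here. */
#endif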

Offline SuzyCTopic starter

  • Frequent Contributor
  • **
  • Posts: 792
Thanks Brutte, but you offer a lady more criticism than help.
I really don't see how visiting a benchmark site helps??

I know that an MCU code development team at a Fortune-cookie 500 company in China would be better able to use its large financial resources to buy equipment, workstations, and expensive tools to do this debugging (oops.. tracing) job..

However, I just know that people do manage to accomplish many amazing things with not much to work with, even with 8-bit PIC chips, and they might have discovered some helpful knowledge they would be willing to share.


« Last Edit: April 18, 2015, 05:46:04 pm by SuzyC »
 

Online bookaboo

  • Frequent Contributor
  • **
  • Posts: 727
  • Country: ie
Have you considered upgrading to the Real Ice and using the probe kit?

Or if you have a few I/O pins to spare build your own debug board:
- Output however many bits you can spare to a port
- Build a little buffer sub-board with a PIC, some memory and a UART
- Devise a table: for example, if you have 8 bits you can have 256 instructions, with data in the next byte (or bytes). It should only take a couple of lines of code to flag up anything you want to monitor (see the sketch after this list).
- The buffer board stores the data and during quiet time spits it out to your PC
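A sketch of what the sending side on the main PIC could look like (the opcodes, port and strobe pin are illustrative examples only):

Code: [Select]
/* One write carries an opcode byte, the next write(s) carry its data.
   The buffer board latches each byte on the strobe edge. */
#define DBG_PORT         LATB             /* whichever 8-bit port you can spare */
#define DBG_STROBE       LATCbits.LATC0   /* tells the buffer board to latch    */

#define DBG_OP_RPM       0x01             /* followed by 2 data bytes */
#define DBG_OP_PWM_DUTY  0x02             /* followed by 1 data byte  */
#define DBG_OP_STATE     0x03             /* followed by 1 data byte  */

void dbg_port_write(unsigned char b)
{
    DBG_PORT = b;
    DBG_STROBE = 1;                       /* brief strobe pulse */
    DBG_STROBE = 0;
}

void dbg_send_rpm(unsigned int rpm)
{
    dbg_port_write(DBG_OP_RPM);
    dbg_port_write((unsigned char)(rpm >> 8));
    dbg_port_write((unsigned char)(rpm & 0xFF));
}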

It may help to make a small PC program to present what the PIC is telling you, unless you like looking at output that looks like the Matrix.
« Last Edit: April 18, 2015, 05:56:25 pm by bookaboo »
 

Offline SuzyCTopic starter

  • Frequent Contributor
  • **
  • Posts: 792
Thanks Hans, andersm.

That's what I am doing now. I have a routine, invoked each second, that consecutively outputs via RS-232 the vars I want to watch. It sends integer values as 16-bit byte pairs, each preceded by a single-byte ID code that identifies the var as an integer or as a float (which uses 4 consecutive bytes). Each packet also contains a time stamp and a program step-point flag value.
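In outline the sending side is this kind of thing (a simplified sketch, not the actual code; uart_putc() stands in for the real TX routine):

Code: [Select]
/* One packet per watched var: ID byte first, then the raw value bytes.
   The ID tells the PC side whether to expect a 16-bit int or a 4-byte float. */
void uart_putc(unsigned char c);            /* the real TX routine goes here */

void send_int_var(unsigned char id, int v)
{
    uart_putc(id);                          /* which var this is */
    uart_putc((unsigned char)(v >> 8));     /* high byte */
    uart_putc((unsigned char)(v & 0xFF));   /* low byte  */
}

void send_float_var(unsigned char id, float f)
{
    unsigned char i;
    const unsigned char *p = (const unsigned char *)&f;

    uart_putc(id);
    for (i = 0; i < sizeof(float); i++)     /* 4 consecutive raw bytes */
        uart_putc(p[i]);
}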

I am not quite ready yet to abandon ship and run out and buy a more powerful Arduino ARM chip that might have the data RAM to store kilobytes of immediate data, but I can see how it would be feasible to attack software problems this way.
So once I get the code to work,  I could  then export a working code strategy back to an 8-bit PIC.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26868
  • Country: nl
    • NCT Developments
Adapt your program so it can run on your desktop, with simulated inputs and outputs.
+1
That is how I develop complex embedded software as well. A piece of audio processing software can be used to create stimuli and look at the results.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline SuzyCTopic starter

  • Frequent Contributor
  • **
  • Posts: 792
Thanks bookaboo!

Your advice is really helpful.

I think that might be the ticket.

I can stream at a very fast RS-232 baud rate to a buffer MCU that then more slowly transfers the data to the PC.

I think the real problem is on the PC side: I haven't found the right programming language to create a software tool that can accept a fast, custom-formatted stream of data, store the results in an organized fashion, and play them back, maybe even graphically showing vars vs. time.
 

Offline SuzyCTopic starter

  • Frequent Contributor
  • **
  • Posts: 792
Interesting nctnico!

I would like to know more about how to use some sort of audio processing software to do this.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26868
  • Country: nl
    • NCT Developments
I use Cooledit (an ancient piece of software), but any decent audio editor should do. I let it write raw files with 16-bit samples, which I can read into a C program.
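Reading such a raw file back in takes only a few lines of C; a minimal sketch, assuming 16-bit mono samples in the PC's native byte order and a made-up file name:

Code: [Select]
/* Read a raw 16-bit sample file (as written by the audio editor)
   into memory so the same processing code can be run on the PC. */
#include <stdio.h>

#define MAX_SAMPLES 65536

int main(void)
{
    static short samples[MAX_SAMPLES];        /* assumes a 16-bit short on the PC */
    size_t n;
    FILE *f = fopen("stimulus.raw", "rb");    /* file name is illustrative */

    if (!f) { perror("stimulus.raw"); return 1; }
    n = fread(samples, sizeof(short), MAX_SAMPLES, f);
    fclose(f);
    printf("read %lu samples\n", (unsigned long)n);

    /* ...run samples[] through the same processing code as the target,
       then fwrite() the results to another raw file for the editor... */
    return 0;
}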
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline andyturk

  • Frequent Contributor
  • **
  • Posts: 895
  • Country: us
Lots of good advice so far. But another way to narrow the gap between your mental model of what the code *should* do and what it *actually* does is to pepper your code with asserts. Here's an example:

Code: [Select]
   ...
   int value;
   value = call_my_function(123);
   assert(value == 999);
   ...

Assert is typically defined as a macro that evaluates an expression and makes sure it's true (i.e., non-zero). If assert ever sees a zero/false value, it turns on an LED and locks up the MCU (or does something else that's equally obvious). Here's a basic definition:

Code: [Select]
#define assert(x) if (!(x)) {turn_on_led(); while (1) {}}

The assert macro is pretty simple, but it becomes really useful when you have a debugger hooked up which can tell you which assertion failed. E.g., if you pass a pointer into a function that should always point to something, a simple "assert(arg != NULL)" will tell you when that's not the case. The more assertions you add to your code, the more you'll end up thinking about how the program should be working.

Not only that, but the program will be checking itself as it runs, making sure your mental model is correct. Later on, when you've gotten the bugs out, leave the asserts in the code, but change the definition of the assert macro itself to simply do nothing at all. Conversely, if you need to go back in and add some functionality, turn the asserts back on to make sure that everything's OK.
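For example, a minimal on/off switch around the same macro (DEBUG_BUILD is just an illustrative name for whatever build flag you use):

Code: [Select]
#ifdef DEBUG_BUILD
#define assert(x) if (!(x)) {turn_on_led(); while (1) {}}
#else
#define assert(x)              /* compiles away to nothing in release builds */
#endif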
 

Offline andersm

  • Super Contributor
  • ***
  • Posts: 1198
  • Country: fi
If you use runtime assertions on the device, you probably want to include some code that does some minimal hardware reinitialization (halts motors, turns off outputs etc.) if the assertion fails. For example:
Code: [Select]
#define my_assert(condition, message) do { \
  if (!(condition)) { \
    TurnOffDeathLazors(); \
    PrintToErrorPort((message), __FILE__, __LINE__); \
    WaitForWatchdogTimeout(); \
  } \
} while(0)

Offline sunnyhighway

  • Frequent Contributor
  • **
  • Posts: 276
  • Country: nl
I usually end up using a logic analyzer with ScanaStudio, where I can write my own decoders.
This way I can properly decode not only the output on the pins into something human-readable, but also the input that influences my code.

It should be fairly easy to send out additional debugging info this way too. It is, after all, just another output on a pin.
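For example, a couple of spare pins and a few lines of code are enough to clock out a whole debug byte for a custom decoder to pick apart (pin names are just examples):

Code: [Select]
/* Clock a debug byte out on two spare pins; the analyzer's custom
   decoder turns the captured bits back into a value. */
#define DBG_CLK   LATBbits.LATB4      /* example pins only */
#define DBG_DATA  LATBbits.LATB5

void dbg_shift_out(unsigned char b)
{
    unsigned char i;
    for (i = 0; i < 8; i++) {
        DBG_DATA = (b >> 7) & 1;      /* MSB first */
        DBG_CLK = 1;                  /* decoder samples on the rising edge */
        DBG_CLK = 0;
        b <<= 1;
    }
}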
 

Offline Howardlong

  • Super Contributor
  • ***
  • Posts: 5317
  • Country: gb
Adding to what others have said, I pretty much agree: write the code so it'll run on both your desktop/laptop and your target hardware. Trying to debug functional processing which churns through enormous amounts of data remotely is always difficult. I am sure I don't need to tell you this, but make sure you use stdint types like int16_t, int32_t etc. as necessary to force data size, just in case.

When I need to run something audio-ish on the target, typically I use Goldwave for making and looking at audio files; it will take raw data in binary and ASCII as well as the usual WAV etc., so you can, if necessary, directly generate and examine data in batch mode to and from your target. You can also generate reproducible signals in Excel, convert them into a big array, and put that into your target's flash if necessary, in a global array for example.
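For example (the values here are just placeholders for a table pasted in from the spreadsheet):

Code: [Select]
/* A reproducible test stimulus held in program flash. */
const short test_signal[] = {
        0,  3212,  6393,  9512, 12539, 15446, 18204, 20787,
    23170, 25329   /* ...rest of the table generated in Excel... */
};
#define TEST_SIGNAL_LEN (sizeof(test_signal) / sizeof(test_signal[0]))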

And yes, there's nothing like toggling an LED for almost the ultimate in non-intrusive debugging.

I ran some trace examples on some PICs today using the RealICE debugger.

Using some of the PIC32MX series (the low pin count PIC32MX's don't support it) you can do real time instruction trace without any impact on performance. This requires eight or so pins of your device. You'll need a cable to do it, namely PIC32MX Trace Interface Kit (AC244006). You could also make up your own cable.

For some PICs, some (non-instruction) tracing is available using three different trace methods, namely Native trace, SPI trace and I/O Port trace. Native trace uses the debugger interface, so it takes no more pins than you'd otherwise use for debugging. SPI trace is a bit quicker, but uses up one of your SPI ports, and I/O Port trace uses 8 consecutive bits of a port. Not all PICs support this trace functionality; you can check which ones do by opening the device page on the Microchip website and looking at the Development Tools section, Emulators and Debuggers tab.

Regarding SPI trace, although Microchip says it's supported on some chips, I've found that those with an MSSP rather than a bog-standard SPI port don't compile __TRACE or __LOG commands, so you're left with Native and I/O Port tracing. You also need the Performance Pak for SPI trace, which presents an 8-pin 0.1" SIL header (standard 6-pin ICSP plus DAT[SDO] and CLK[SCK] from your chosen SPI port), or else you have to hack something together somehow. The clock and data are outputs from the PIC; you need to know this if you're using a device with peripheral pin select, and you need to set up the PPS yourself.

For the I/O Port trace, you can use the cable that comes with the RealICE, it's terminated with standard 0.1" header female flying wires.

To give you an idea of the performance impact of __TRACE, I ran some tests using the following code measuring the rate of toggling:

Code: [Select]
while (1)
{
    __TRACE(0x40);
    LATBbits.LATB6 = 0;
    LATBbits.LATB6 = 1;
}

dsPIC33FJ128GP802, Fcy=39.6MHz based on FRC
None      9.87MHz (__TRACE commented out)
Native   340kHz
SPI      746kHz
Port      1.13MHz

PIC24FV16KM202, Fcy=16MHz based on FRC
None      4MHz (__TRACE commented out)
Native   138kHz
SPI      N/A (MSSP based SPI port so failed to compile)
Port      459kHz

So you need to be careful about where you put your __TRACE statements; the overhead can be quite dramatic in tight code.

I've had the RealICE for a long time, pretty much since it came out (its chip date codes are 2006), together with the Performance Pak and, slightly more recently, the PIC32MX Trace Interface Kit. This was before the ICD3 was introduced. It's the first time I've used the trace functionality, and to be honest I'm not particularly impressed. Maybe the real-time instruction trace has some value in tough-to-fathom edge cases, but the other trace facilities have less value to me: they're difficult to set up, slow, and two of the three options take up valuable pin and peripheral resources. Even worse, a lot of the trace features are unavailable on many devices, particularly the PIC32MX1xx/2xx which I use extensively.

The RealICE I have is really very old, and occasionally it just hangs with an inadvertent run-jump-catchfire instruction, and I have to let it cool down. I also had to apply a hardware fix on the standard debug adapter to do with it chucking out nasty voltages on VDD (http://ww1.microchip.com/downloads/en/DeviceDoc/ETN-30_MPLAB_REAL_ICE_SOURCING_POWER.pdf). I was not a happy bunny that day! It also has trouble switching between MPLAB X and MPLAB 8.92 which I continually do, despite using the driver switcher. It needs its firmware completely reloading after switching to MPLAB 8.92.

I bought an ICD3 a couple of years ago as a backup, just in case, due to the frustrations I was having with the RealICE. The ICD3 is much more reliable than my RealICE, I have no problem switching between MPLABs, and it appears to debug just as fast. Until today there was nothing I did with the RealICE that I couldn't do with the ICD3. But at least I know what the trace features are now, and have some idea of their limitations.

So yes, it's back to toggling an LED for me!
« Last Edit: April 21, 2015, 12:20:00 pm by Howardlong »
 

