As far as I understand, "true RMS" implies "AC+DC RMS": sqrt(V_ACrms^2 + V_DCavg^2). While useful in certain situations, that is not terribly helpful for general measurements. I often care only about the isolated DC and AC components: e.g., AC ripple on a DC rail, or the DC offset of an AC signal. Of course, power measurements for complex signals must be done as AC+DC RMS.
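To make the decomposition concrete, here is a minimal numeric sketch (a made-up signal: a 2 V DC rail with 0.5 V sinusoidal ripple; sample rate and frequencies are arbitrary assumptions) showing that the AC-coupled RMS and the DC average recombine into the AC+DC "true" RMS:

```python
import numpy as np

# Hypothetical signal: 2 V DC rail with 0.5 V-amplitude, 1 kHz ripple
fs = 100_000                        # assumed sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)       # 0.1 s = 100 full ripple periods
v = 2.0 + 0.5 * np.sin(2 * np.pi * 1000 * t)

v_dc = v.mean()                               # DC (average) component
v_ac_rms = np.sqrt(np.mean((v - v_dc) ** 2))  # AC-coupled RMS
v_true_rms = np.sqrt(np.mean(v ** 2))         # AC+DC ("true") RMS

# sqrt(V_ACrms^2 + V_DCavg^2) equals the true RMS
assert np.isclose(np.hypot(v_ac_rms, v_dc), v_true_rms)
```

An AC-coupled meter would report roughly v_ac_rms (0.354 V here), a DC meter v_dc (2.0 V), and an AC+DC true-RMS meter v_true_rms (about 2.031 V) -- three different, individually correct numbers for the same signal.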
As for why your meters display different values for each signal despite the signals having the same "RMS" value: the AC component being measured is limited to the pass band of the measuring instrument. Most RMS-reading meters have a bandwidth of maybe 5 or 10 Hz out to 100 kHz, or 10 to 30 MHz -- depending on whether it is a modern DMM or an old-school instrument like the HP 3400A/B, Racal-Dana 5002, Marconi 2610, Fluke 8922A, and similar broadband AC voltmeters. You also have to consider crest factor (CF = V_pk / V_rms) -- high-crest-factor signals give many DMMs trouble. In your case, I believe CF is the main issue.
The easiest way to compute the correct true RMS value is to use a "digitizer" (ADC) to sample the signal and perform the RMS calculation on the samples collected over the region of interest. This method even works for fast broadband signals when sampling *below* the Nyquist frequency (e.g., the Clark-Hess 2330 and most Yokogawa power meters). A zero-crossing detector is needed to capture an integer multiple of the waveform's period, which minimizes errors in the RMS computation. The (below-Nyquist) sampling method statistically reconstructs a fast signal into something within the bandwidth of the measuring instrument. Sampling is also the best way to compute RMS measurements of slow AC signals (say, less than 5 to 10 Hz).
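Here is a simplified sketch of the integer-period idea (my own toy implementation, not how any of the instruments above actually do it): detect rising zero crossings of the AC-coupled signal in software and trim the record to whole periods before computing the RMS.

```python
import numpy as np

def sampled_rms(v, trim_to_periods=True):
    """RMS over an integer number of periods, found via rising
    zero crossings of the AC-coupled signal (simplified sketch)."""
    ac = v - v.mean()
    # indices i where ac crosses zero going positive between i and i+1
    rising = np.flatnonzero((ac[:-1] < 0) & (ac[1:] >= 0))
    if trim_to_periods and len(rising) >= 2:
        v = v[rising[0]:rising[-1]]   # whole periods only
    return np.sqrt(np.mean(v ** 2))

fs = 50_000
t = np.arange(0, 0.0123, 1 / fs)          # deliberately 12.3 cycles
sig = 1.0 + np.sin(2 * np.pi * 1000 * t)  # 1 V DC + 1 V-amplitude AC

rms_trimmed = sampled_rms(sig)                          # ~sqrt(1.5)
rms_raw = sampled_rms(sig, trim_to_periods=False)       # biased high
```

With the record trimmed to whole periods the result lands on the theoretical sqrt(1.5) V; computed over the raw 12.3-cycle record it is biased by the fractional cycle, which is exactly the error the zero-crossing detector is there to remove.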
At any rate, each measuring technique (thermal, LOG, sampling, and so on) has its advantages and disadvantages.