Author Topic: Can't understand an algorithm (Pan-Tompkins QRS detection)  (Read 444 times)


Offline Karel

  • Super Contributor
  • ***
  • Posts: 1561
  • Country: 00
Can't understand an algorithm (Pan-Tompkins QRS detection)
« on: March 30, 2020, 03:33:55 pm »
Is it me or is there an error in the article?

In the section "Adjusting the Average RR Interval and Rate Limits" is written that
the RR LOW LIMIT, RR HIGH LIMIT and RR MISSED LIMIT are derived from RR AVERAGE2.

But RR AVERAGE2 is updated only with RR intervals that are within the RR LOW LIMIT and RR HIGH LIMIT.
So, if suddenly the RR interval changes to a value outside of the limits, the limits will never be adjusted.

My impression is that these limits should be updated from RR AVERAGE1 instead.

What do you think?
« Last Edit: March 30, 2020, 04:15:07 pm by Karel »
 

Offline mrflibble

  • Super Contributor
  • ***
  • Posts: 2024
  • Country: nl
Re: Can't understand an algorithm (Pan-Tompkins QRS detection)
« Reply #1 on: April 01, 2020, 01:33:50 pm »

Just read it, and I see no inconsistencies. Or more precisely, ASSUMING a valid start condition, I see no problems.

"So, if suddenly the RR interval changes to a value outside of the limits, the limits will never be adjusted."

And? The definition is the definition. Suppose that, as in your example, "suddenly the RR interval changes to a value outside of the limits, the limits will never be adjusted", but the signal subsequently behaves as in my counter-example: after the sudden excursion the RR interval returns to a boring median value. In that case RR AVERAGE2 will have fulfilled its purpose.

An alternative would be: get two other papers/books/whatever on the topic at hand. Waaaaay back I learned that having just one source of information as your default learning plan is a bad plan. You are bound to encounter errors, author pet opinions, learning-style mismatches, etcetera in the long run. When learning something new I always try to get at least three "decent" sources, because, well, humans. Spending a large amount of time just to decode someone's exposition is something for an era when the average person was lucky to own a book at all (i.e., the past). Besides, this paper is from 1985; there should be plenty of similar material out there. With a bit of luck there's a different paper that fills in some of the blanks...
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 5447
  • Country: fr
Re: Can't understand an algorithm (Pan-Tompkins QRS detection)
« Reply #2 on: April 01, 2020, 06:01:27 pm »
As I see it, the limits get adjusted from RR_AVERAGE2 for a purpose. They are refined at each iteration. What I haven't got clearly (didn't read/think about it enough) is how the whole thing gets "reset" - otherwise you'd expect the limits to stay put or shrink at every iteration, so the possible range would get narrower and narrower. It needs a way to go the other way, I guess. Again, I didn't study it enough; the answer may be obvious.
 

Offline mrflibble

  • Super Contributor
  • ***
  • Posts: 2024
  • Country: nl
Re: Can't understand an algorithm (Pan-Tompkins QRS detection)
« Reply #3 on: April 01, 2020, 07:09:52 pm »
What I haven't got clearly (didn't read/think about it enough) is how the whole thing gets "reset" - otherwise you'd expect the limits to stay put or shrink at every iteration, so the possible range would get narrower and narrower.

If I understood it correctly (big IF  ;D ) then the selection window for RR_AVERAGE2 can grow just fine. It looks like the relative window size is chosen such that: 1) it can grow larger in response to a slowly changing RR_AVERAGE2, and 2) it will remain unchanged in the face of short-term perturbations, aka heart arrhythmia.

The selection criterion is 0.92*RR_AVERAGE2 <= RR <= 1.16*RR_AVERAGE2. So if the new RR interval is within -8% / +16% of the current RR_AVERAGE2, it gets selected and included in the update of RR_AVERAGE2.

Besides, in the healthy situation, there is a defined "reset" as per equation (29).
If the 8 most recent intervals are all within the defined interval, then heart rate is labeled as stable.
And in that (stable) case RR_AVERAGE2 value is updated to RR_AVERAGE1.

So for a normal heart with a skipped beat every now and then, the model works as intended. And using this as an example: a healthy heart with slowly increasing RR_AVERAGE1 --> slowly increasing RR_AVERAGE2 --> slowly widening selection window.

But it's not exactly foolproof. I can think of a bimodal oscillator that will neatly fsck things up. No idea if that artificially constructed signal has any mapping to human physiology though. :-// For the exact case, probably not. But for the "close enough" case, I would not be surprised at all if nature did come up with an intriguing failure mode...
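For what it's worth, the bookkeeping as I read the paper can be sketched like this (a minimal sketch, assuming the 0.92 / 1.16 / 1.66 factors and the 8-beat buffers from the paper; the class and variable names are my own):

```python
from collections import deque

class RRTracker:
    """Sketch of the paper's RR-average bookkeeping.

    rr_avg1: mean of the 8 most recent RR intervals (no selection).
    rr_avg2: mean of the 8 most recent RR intervals that fell within
             the low/high limits, which are themselves 92% / 116% of
             the current rr_avg2.
    """
    def __init__(self, initial_rr):
        self.buf1 = deque([initial_rr] * 8, maxlen=8)
        self.buf2 = deque([initial_rr] * 8, maxlen=8)

    @property
    def rr_avg1(self):
        return sum(self.buf1) / len(self.buf1)

    @property
    def rr_avg2(self):
        return sum(self.buf2) / len(self.buf2)

    @property
    def limits(self):
        a2 = self.rr_avg2
        return 0.92 * a2, 1.16 * a2, 1.66 * a2  # low, high, missed

    def add(self, rr):
        low, high, _ = self.limits
        self.buf1.append(rr)
        if low <= rr <= high:      # only "regular" beats feed rr_avg2
            self.buf2.append(rr)
```

An in-window interval (say 1.05 s when rr_avg2 is 1.0 s) updates both averages; an outlier (say 2.0 s) only moves rr_avg1, never rr_avg2 or the limits, which is exactly the circular dependency being discussed.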
 

Offline Karel

  • Super Contributor
  • ***
  • Posts: 1561
  • Country: 00
Re: Can't understand an algorithm (Pan-Tompkins QRS detection)
« Reply #4 on: April 01, 2020, 11:26:18 pm »
Thank you for trying to help. I guess I wasn't completely clear. What I meant was:

"So, if suddenly the RR interval changes to a value outside of the limits and stays there, the limits will never be adjusted."

An example:

Let's say we have an ECG recording that starts with a stable heart rate of 60 bpm. The low limit is set to 55.2 bpm and the high limit to 69.6 bpm.
The heart rate slowly moves up to 75 bpm and stays there for some time. Because it all happens slowly, the limits are adjusted accordingly: the new values at this point are 69 bpm for the low limit and 87 bpm for the high limit.
Now, suddenly, the heart rate drops quickly back to 60 bpm (these things happen!). The high and low limits are never updated as long as the heart rate doesn't go back up to at least 69 bpm for some time.
 

Offline mrflibble

  • Super Contributor
  • ***
  • Posts: 2024
  • Country: nl
Re: Can't understand an algorithm (Pan-Tompkins QRS detection)
« Reply #5 on: April 02, 2020, 01:12:43 am »
The RR-interval is the time interval between the two R's of adjacent QRS complexes if I understand correctly. So the heart rate is the reciprocal, but I understand your meaning.

If the situation that you describe happens often enough to be an issue, then the two rather empirically determined constants of 0.92 and 1.16 should be adjusted. But judging by Table 1, this is apparently not a big enough issue for their dataset of 48 tapes.  :-//

But hey, it's a vintage biomedical algorithm. BTW, do you happen to know what the common detection method is these days? Something like multi-level wavelet transform maybe?

Anyways, should you be so inclined, you could do a quick & dirty implementation of the transfer functions in that QRS paper, and use it on this annotated dataset:
https://www.physionet.org/content/mitdb/1.0.0/

Would be interesting to see how good or bad it performs.

PS: Note that this dataset from the MIT-BIH Arrhythmia Database is the same as the one used in the paper. Found it referenced here: https://www.codeproject.com/articles/309938/ecg-feature-extraction-with-wavelet-transform-and
« Last Edit: April 02, 2020, 02:03:42 am by mrflibble »
 

Offline Karel

  • Super Contributor
  • ***
  • Posts: 1561
  • Country: 00
Re: Can't understand an algorithm (Pan-Tompkins QRS detection)
« Reply #6 on: April 02, 2020, 07:18:48 am »
The RR-interval is the time interval between the two R's of adjacent QRS complexes if I understand correctly. So the heart rate is the reciprocal, but I understand your meaning.

Correct.

If the situation that you describe happens often enough to be an issue, then the two rather empirically determined constants of 0.92 and 1.16 should be adjusted. But judging by Table 1, this is apparently not a big enough issue for their dataset of 48 tapes.  :-//

But hey, it's a vintage biomedical algorithm.

It's the one most referred to. There's even a Wikipedia page with some explanation but that page doesn't help me:

https://en.wikipedia.org/wiki/Pan%E2%80%93Tompkins_algorithm

BTW, do you happen to know what the common detection method is these days? Something like multi-level wavelet transform maybe?

Anyways, should you be so inclined, you could do a quick & dirty implementation of the transfer functions in that QRS paper, and use it on this annotated dataset:
https://www.physionet.org/content/mitdb/1.0.0/

Would be interesting to see how good or bad it performs.

PS: Note that this dataset from the MIT-BIH Arrhythmia Database is the same as the one used in the paper. Found it referenced here: https://www.codeproject.com/articles/309938/ecg-feature-extraction-with-wavelet-transform-and

It's exactly what I'm doing these days during the coronavirus lockdown at home.
It's a little inconvenient that all the filter constants are fixed and assume a sampling rate of 200 Hz. The MIT-BIH database recordings all have a 360 Hz sampling rate  :palm: so I downsampled them to 180 Hz for the moment.
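Halving the rate is just keeping every other sample (a naive sketch; a proper decimator would low-pass filter first to avoid aliasing, which matters for real use):

```python
def halve_rate(samples):
    """Naively decimate 360 Hz to 180 Hz by keeping every other sample.
    No anti-aliasing filter: fine for a quick experiment, not for production."""
    return samples[::2]

halve_rate([0, 1, 2, 3, 4])  # -> [0, 2, 4]
```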
I have a working implementation running on a PC, but during testing I discovered multiple issues with that article. At the least, it leaves things open to interpretation. For example, if you invert the polarity of the signal, the number of failed detections (false positives and false negatives) skyrockets. That is not acceptable for a medical device.

A problem I noticed with many scientific papers is that, when they present a new algorithm, they don't provide the full source code of their implementation. Imho, such papers should be rejected.

Anyway, my plan is to open-source my (modified) implementation of the PT QRS detection algorithm on GitLab, but not before I'm satisfied with the results.
 

Offline mrflibble

  • Super Contributor
  • ***
  • Posts: 2024
  • Country: nl
Re: Can't understand an algorithm (Pan-Tompkins QRS detection)
« Reply #7 on: April 02, 2020, 04:39:48 pm »
It's the one most referred to. There's even a Wikipedia page with some explanation but that page doesn't help me:
Unfortunately, citation count != quality metric.

It's a little inconvenient that all the filter constants are fixed and assume a sampling rate of 200 Hz. The MIT-BIH database recordings all have a 360 Hz sampling rate  :palm: so I downsampled them to 180 Hz for the moment.
You can just re-implement the filters for the given transfer functions. The poles and zeros are given, so you could do a FIR/IIR implementation designed for a sample rate of 360 Hz.

Or if you want to keep it retro and implement it as difference equations like in the original paper, you can simply adjust the delays to account for the different sampling rate and you should be good to go.
So for the LP given by equations 1, 2, 3 you'd use a delay of 11 instead of the original 6 (6 * 360/200 = 10.8).
Similarly for the HP (equations 4, 5, 6), you'd use a delay of 29 instead of the original 16 (16 * 360/200 = 28.8).
Maybe tweak by +/-1 to get what you want.
The gain will obviously be different since you're summing a different number of terms, but that is trivially fixed: scale by the ratio old_delay/new_delay.
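To make that concrete, here is a sketch of the paper's integer low-pass, H(z) = (1 - z^-d)^2 / (1 - z^-1)^2 (equations 1-3), with the delay d as a parameter: d=6 is the original 200 Hz design, d=11 the rescale suggested above for 360 Hz. The function name and structure are mine, not from the paper:

```python
def lowpass(x, d=6):
    """Pan-Tompkins style low-pass with parametric delay d.

    Implements y[n] = 2*y[n-1] - y[n-2] + x[n] - 2*x[n-d] + x[n-2d],
    i.e. H(z) = (1 - z^-d)^2 / (1 - z^-1)^2. DC gain is d*d, which is
    the gain change mentioned above when you rescale the delay.
    """
    y = [0.0] * len(x)
    for n in range(len(x)):
        y[n] = (2 * (y[n - 1] if n >= 1 else 0.0)
                - (y[n - 2] if n >= 2 else 0.0)
                + x[n]
                - 2 * (x[n - d] if n >= d else 0.0)
                + (x[n - 2 * d] if n >= 2 * d else 0.0))
    return y
```

Feeding a constant input shows the DC gain of d*d after the transient settles (36 for d=6, 121 for d=11), so the scale factor does need compensating if you compare outputs across sample rates.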

I have a working implementation running on a PC, but during testing I discovered multiple issues with that article. At the least, it leaves things open to interpretation. For example, if you invert the polarity of the signal, the number of failed detections (false positives and false negatives) skyrockets. That is not acceptable for a medical device.
Well, I should hope that no medical device is based on something this Mickey Mouse. Then again, stranger things have happened...

A problem I noticed in many scientific papers is that, when they present a new algorithm, they don't provide the full source code
of their implementation. Imho, such papers should be rejected.
No kidding! If those would be rejected as you propose there would hardly be anything left. :P

Oh no, if I make it too easy to replicate this experiment someone might actually duplicate it, do it more efficiently, and then publish the followup research before I finish mine. Oh no, my pet research project might get outdated because someone else got there sooner. Must ... guard ... details ... to maintain illusion of relevance. <--- pffffrt! So bloody annoying. Research faster!

That said, the situation is slowly getting better...
 
Anyway, my plan is to open-source my (modified) implementation of the PT QRS detection algorithm on GitLab, but not before I'm satisfied with the results.
I'll be interested to see what you come up with.  :) As for the "satisfied with the results" part: don't set the bar too high, because the base algorithm is pretty limited. I wouldn't expect it to be all that great at feature detection. I'd expect an LSTM RNN to do better, even for a modest network size (read: easy enough to train in an acceptable amount of time on an average GPU).
 

Offline mrflibble

  • Super Contributor
  • ***
  • Posts: 2024
  • Country: nl
Re: Can't understand an algorithm (Pan-Tompkins QRS detection)
« Reply #8 on: April 02, 2020, 04:50:17 pm »
Stupid curiosity!  >:(

Did a quick check. See attached pdf for a comparison of the various methods.
 

Offline Karel

  • Super Contributor
  • ***
  • Posts: 1561
  • Country: 00
Re: Can't understand an algorithm (Pan-Tompkins QRS detection)
« Reply #9 on: April 05, 2020, 04:43:54 pm »
Stupid curiosity!  >:(

Did a quick check. See attached pdf for a comparison of the various methods.

I already read that document but thanks anyway  :)

Today, I managed to reach the same failure rate as the authors of the article claimed. Mission accomplished.  :-+
 

Offline mrflibble

  • Super Contributor
  • ***
  • Posts: 2024
  • Country: nl
Re: Can't understand an algorithm (Pan-Tompkins QRS detection)
« Reply #10 on: April 07, 2020, 09:36:51 am »
Today, I managed to reach the same failure rate as the authors of the article claimed. Mission accomplished.  :-+
Nice! You always get that feeling of accomplishment when you can actually reproduce the results. :-+ While reading the paper it often seems simple enough, but then it turns out to be not so simple when you actually try it. ;D
 

