It's the one most referred to. There's even a Wikipedia page with some explanation but that page doesn't help me:
Unfortunately, citation count != quality metric.
It's a little inconvenient that all the filter constants are fixed and assume a sampling rate of 200 Hz.
The MIT/BIH database recordings all have a 360 Hz sampling rate, so I downsampled them to 180 Hz for the moment.
You can just re-implement the filters from the given transfer functions. The poles and zeros are given, so you could do a FIR/IIR implementation designed for a sample rate of 360 Hz.
Or, if you want to keep it retro and implement it as difference equations like the original in the paper, simply adjust the delays to account for the different sampling rate and you should be good to go.
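To make the transfer-function route concrete, here's a minimal sketch in pure Python. It assumes the standard Pan-Tompkins low-pass, H(z) = (1 - z^-d)^2 / (1 - z^-1)^2 with d = 6 at 200 Hz; the function names are mine, and you should verify the coefficients against the paper before trusting them:

```python
# Minimal direct-form IIR filter: b and a are coefficient lists indexed by delay.
def iir_filter(b, a, x):
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc / a[0])
    return y

# Pan-Tompkins low-pass H(z) = (1 - z^-d)^2 / (1 - z^-1)^2.
# d = 6 is the paper's 200 Hz design; d = 11 rescales it for 360 Hz.
def pt_lowpass_coeffs(d):
    b = [0.0] * (2 * d + 1)
    b[0], b[d], b[2 * d] = 1.0, -2.0, 1.0   # numerator (1 - z^-d)^2
    a = [1.0, -2.0, 1.0]                     # denominator (1 - z^-1)^2
    return b, a

b, a = pt_lowpass_coeffs(11)
y = iir_filter(b, a, [1.0] * 100)
# The DC gain of this filter is d^2 (121 for d = 11, 36 for d = 6),
# so remember to renormalize if you change d.
```

The same `iir_filter` helper works for the high-pass once you write out its coefficients; for anything beyond a quick experiment you'd probably use `scipy.signal.lfilter` instead.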
So for the LP given by equations 1, 2 and 3 you'd use a delay of 11 instead of the original 6 (scale by 360/200: 6 × 1.8 = 10.8 ≈ 11).
Similarly for the HP (equations 4, 5 and 6): you'd use a delay of 29 instead of the original 16 (16 × 1.8 = 28.8 ≈ 29).
Maybe tweak by +/-1 to get what you want.
The gain will obviously be different since you're summing a different number of terms, but that is trivially fixed by scaling with the ratio of the old and new delays.
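For the difference-equation route, here's a sketch of the high-pass with the delay as a parameter. It uses the commonly cited normalized form, H(z) = z^-d - (1/2d)·(1 - z^-2d)/(1 - z^-1); the function name is mine, and you should check it term by term against equations 4–6 in the paper:

```python
def pt_highpass(x, d=16):
    # Normalized Pan-Tompkins high-pass: an all-pass delay of d samples
    # minus a moving average of length 2d, folded into one recursion:
    #   y[n] = y[n-1] + x[n-d] - x[n-d-1] - (x[n] - x[n-2d]) / (2d)
    # d = 16 matches the paper's 200 Hz design; use d = 29 for 360 Hz.
    g = 2.0 * d
    xi = lambda k: x[k] if k >= 0 else 0.0  # treat samples before n = 0 as zero
    y = []
    for n in range(len(x)):
        prev = y[n - 1] if n >= 1 else 0.0
        y.append(prev + xi(n - d) - xi(n - d - 1) - (xi(n) - xi(n - 2 * d)) / g)
    return y
```

Note that the 1/(2d) factor already renormalizes the moving-average gain, so this form sidesteps the "different number of terms" issue; a constant input decays to zero once the transient of 2d samples has passed.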
I have a working implementation running on a PC, but during testing I discovered multiple issues with that article.
At the very least it leaves things open to interpretation. For example, if you invert the polarity of the signal, the number of failed detections (false positives and false negatives) skyrockets. That is not acceptable for a medical device.
Well, I should hope that no medical device is based on something this Mickey Mouse. Then again, stranger things have happened...
A problem I've noticed with many scientific papers is that, when they present a new algorithm, they don't provide the full source code of their implementation. Imho, such papers should be rejected.
No kidding! If those were rejected as you propose, there would hardly be anything left.
Oh no, if I make it too easy to replicate this experiment someone might actually duplicate it, do it more efficiently, and then publish the follow-up research before I finish mine. Oh no, my pet research project might get outdated because someone else got there sooner. Must ... guard ... details ... to maintain illusion of relevance. <--- pffffrt! So bloody annoying. Research faster!
That said, the situation is slowly getting better...
Anyway, my plan is to open-source my (modified) implementation of the PT QRS detection algorithm on GitLab, but not before I'm satisfied with the results.
Will be interested to see what you come up with.
As for the "satisfied with results", don't set the bar too high, because the base algorithm is pretty limited. I wouldn't expect it to be all that great at feature detection. I'd expect an LSTM RNN to do better, even for modest network size (read: easy enough to train in an acceptable amount of time on the average GPU).