I am working with a 150-point signal sampled at 1 Msps. The problem is that whenever I design a filter with MATLAB's filter design tool, I can't make a filter with a short enough settling time, so I lose samples.
For example, I want to filter at 5 kHz. The filter takes 50 points to settle, so 50 points of my 150-point signal become useless. Is this a case where upsampling would help?
No. The settling time is the latency of your filter, which for an FIR filter depends on the kernel size. A higher sample rate requires a proportionally longer kernel: if you upsample the signal 2x, the filter kernel must also grow 2x to keep the same cutoff, so the latency in seconds stays the same.
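As a quick sanity check (with assumed round numbers, not a real design), the latency in seconds is the kernel length divided by the sample rate, so scaling both by the same factor changes nothing:

```python
fs, taps = 1_000_000, 1024          # assumed: 1 Msps, 1024-tap FIR

latency_s = taps / fs               # latency in seconds at the original rate
latency_2x = (2 * taps) / (2 * fs)  # after 2x upsampling the kernel doubles too

print(latency_s == latency_2x)      # True: upsampling buys no latency
```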
The required kernel size depends on how sharp a transition you need, how much stopband suppression is required, and how much passband ripple is allowed.
For an FIR filter there are several formulas for estimating the required kernel size (filter order), for example the Bellanger, Fred Harris, and Kaiser estimates. I use the Bellanger method because it usually lands between the Kaiser and Fred Harris results. You put in your filter requirements — passband, transition band, stopband, the suppression level for the stopband, and the allowed passband ripple (how much the original amplitude may vary) — and you get an estimate of the optimal filter kernel size.
For example, suppose we need an FIR filter with stopband edge = 0.5, passband edge = 0.4948, and transition band = 0.0052 (normalized frequencies), with 0.1 dB passband ripple and −140 dB stopband rejection. Then the filter order estimates are:
Bellanger: 1016 taps
Fred Harris: 1222 taps
Kaiser: 1004 taps
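These estimates are easy to compute directly. A sketch below uses common textbook forms of the three formulas (variants exist, and rounding conventions differ, so the results land within a few taps of the numbers quoted above rather than matching them exactly):

```python
import numpy as np

# Spec from the example above (normalized to the sample rate)
delta_f = 0.0052                        # transition bandwidth
ripple_db = 0.1                         # passband ripple, dB
atten_db = 140.0                        # stopband attenuation, dB

# Convert dB specs to linear ripple amplitudes
delta_p = 10**(ripple_db / 20) - 1      # passband ripple, ~0.0116
delta_s = 10**(-atten_db / 20)          # stopband ripple, 1e-7

# Bellanger: N ~ (2/3) * log10(1 / (10*dp*ds)) / delta_f
n_bellanger = int(np.ceil(2 / 3 * np.log10(1 / (10 * delta_p * delta_s)) / delta_f))

# Fred Harris rule of thumb: N ~ atten_dB / (22 * delta_f)
n_harris = int(np.ceil(atten_db / (22 * delta_f)))

# Kaiser: N ~ (-20*log10(sqrt(dp*ds)) - 13) / (14.6 * delta_f)
n_kaiser = int(np.ceil((-20 * np.log10(np.sqrt(delta_p * delta_s)) - 13)
                       / (14.6 * delta_f)))

print(n_bellanger, n_harris, n_kaiser)   # roughly 1018, 1224, 1006
```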
On average, we can use a 1024-tap FIR filter for that, and its settling transient will be about 1024 samples. If you want less latency, you need a filter with fewer taps, and that filter will no longer meet the requirements.
So there is no way to reduce filter latency without degrading the filter's parameters.
A filter is usually meant to run on a continuous signal, so if you have to work with a fixed-size buffer, you need to prime the filter: push the previous signal buffer into it before pushing the current one. For example, extend the buffer at the beginning by the settling time, fill those extra samples with real signal from the previous buffer, filter, and then discard the extra samples.
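The priming trick can be sketched like this (toy 50-tap filter and random buffers, purely illustrative — the names `taps`, `prev_buf`, `cur_buf` are mine, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
taps = np.ones(50) / 50                # toy 50-tap moving-average FIR
prev_buf = rng.standard_normal(150)    # previous 150-point block (history)
cur_buf = rng.standard_normal(150)     # the block we actually care about

# Prepend enough history to cover the settling transient (len(taps)-1 samples),
# so the transient lands on old samples instead of the current buffer
padded = np.concatenate([prev_buf[-(len(taps) - 1):], cur_buf])
filtered = np.convolve(padded, taps, mode="valid")

print(len(filtered))                   # 150: no samples lost to settling
```

The "valid" convolution of the padded buffer returns exactly `len(cur_buf)` samples, and they match the steady-state output the filter would produce if it had been running continuously over both blocks.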
What is the main purpose of upsampling? Is my case one of them?
When you switch to a higher sample rate, you need to fill the extra sample positions with interpolated values and recompute the existing samples for the new rate. If you simply fill the extra positions with zeros, you add high-frequency image components — that is why a low-pass filter is required after zero-insertion, to remove those images.
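You can see the images appear directly in the spectrum. A small demonstration with assumed toy values (a 62.5 Hz tone at 1 kHz, not the 1 Msps signal from the question): after 2x zero-stuffing, a mirror image shows up at the old sample rate minus the tone frequency, 1000 − 62.5 = 937.5 Hz, which the interpolation low-pass would have to remove.

```python
import numpy as np

fs = 1000.0
n = 256
x = np.sin(2 * np.pi * 62.5 * np.arange(n) / fs)   # exactly 16 cycles, no leakage

up = np.zeros(2 * n)
up[::2] = x                                        # 2x upsample by zero insertion

spec = np.abs(np.fft.rfft(up))
freqs = np.fft.rfftfreq(2 * n, d=1 / (2 * fs))     # new rate is 2 kHz

peaks = sorted(freqs[np.argsort(spec)[-2:]])       # two largest spectral peaks
print(peaks)                                       # [62.5, 937.5]: tone + image
```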