Experimenting with problems that interest you is a good way to progress.
For example, I am lazy. I want to do things well enough that I can trust them, so I can move on to other, more interesting stuff without worrying that shoddy earlier work will come crashing down later. This means I am well suited to low-level system, OS, and library work, but not as fast as others when it comes to user interfaces or mockups.
If you are interested in mathematics, the various methods of calculating Fibonacci numbers are an excellent opportunity for testing different approaches and learning how to solve problems via computer programs.
You have the standard recursive algorithm. It is simple to implement, but slow.
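A minimal sketch in C (assuming the usual convention F(0) = 0, F(1) = 1):

```c
/* The naive recursive definition, translated directly into C.
   Elegant, but exponential: the same subproblems are recomputed
   over and over, so it becomes unusably slow around n = 40. */
unsigned long long fib_recursive(unsigned int n)
{
    if (n < 2)
        return n;
    return fib_recursive(n - 1) + fib_recursive(n - 2);
}
```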
You also have the iterative method (remembering the two previous numbers, calculating the next one, and forgetting the oldest number). It is simple to implement, but has to iterate from F1 to the desired Fibonacci number.
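In C, the same iterative idea might look like this:

```c
/* The iterative method: keep the two previous numbers, calculate
   the next one, and forget the oldest. Linear time, constant space. */
unsigned long long fib_iterative(unsigned int n)
{
    unsigned long long prev = 0, curr = 1;  /* F(0), F(1) */
    while (n-- > 0) {
        unsigned long long next = prev + curr;
        prev = curr;
        curr = next;
    }
    return prev;  /* after n steps, prev holds F(n) */
}
```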
The next step up is memoization: using an array to hold the previously calculated Fibonacci numbers, so each one is only computed once.
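A sketch of the array-based version; the cache size of 94 is chosen because F(93) is the largest Fibonacci number that fits in an unsigned 64-bit integer:

```c
#include <stdbool.h>

/* Memoization: the array caches every Fibonacci number already
   computed, so each one is only calculated once. Caller must
   ensure n <= 93 to stay inside the cache (and 64-bit range). */
unsigned long long fib_memo(unsigned int n)
{
    static unsigned long long cache[94] = { 0, 1 };
    static bool known[94] = { true, true };

    if (!known[n]) {
        cache[n] = fib_memo(n - 1) + fib_memo(n - 2);
        known[n] = true;
    }
    return cache[n];
}
```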
The next step up is using the mathematical properties of the series, namely
$$\begin{aligned}
F_{2 n - 1} &= F_{n}^2 + F_{n-1}^2 \\
F_{2 n} &= \left(2 F_{n-1} + F_{n} \right) F_{n} \\
\end{aligned}$$
which lets you calculate any Fibonacci number in logarithmic time and space complexity.
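A sketch of this "fast doubling" method in C, using exactly the two identities above:

```c
/* Computes F(n) and F(n-1) together, for n >= 1. Each call halves n,
   so both time and recursion depth are logarithmic in n. */
static void fib_pair(unsigned int n,
                     unsigned long long *fn, unsigned long long *fn_1)
{
    if (n == 1) {
        *fn = 1;    /* F(1) */
        *fn_1 = 0;  /* F(0) */
        return;
    }

    unsigned long long fk, fk_1;
    fib_pair(n / 2, &fk, &fk_1);  /* k = n/2: F(k) and F(k-1) */

    unsigned long long f2k   = (2 * fk_1 + fk) * fk;   /* F(2k)   */
    unsigned long long f2k_1 = fk * fk + fk_1 * fk_1;  /* F(2k-1) */

    if (n % 2 == 0) {
        *fn = f2k;
        *fn_1 = f2k_1;
    } else {
        *fn = f2k + f2k_1;  /* F(2k+1) = F(2k) + F(2k-1) */
        *fn_1 = f2k;
    }
}

unsigned long long fib_doubling(unsigned int n)
{
    if (n == 0)
        return 0;
    unsigned long long fn, fn_1;
    fib_pair(n, &fn, &fn_1);
    return fn;
}
```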
The next step up is direct calculation of any Fibonacci number, using
$$F_n = \frac{\varphi^n - (-\varphi)^{-n}}{\sqrt{5}}$$
and, if your language truncates when converting floating-point to integer (as C does, for example), adding 1/2 turns the truncation into rounding:
$$F_n = \left\lfloor \frac{\varphi^n}{\sqrt{5}} + \frac{1}{2} \right\rfloor$$
where $$\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.61803$$
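A direct translation into C (link with -lm). Note that a double only carries about 15-16 significant decimal digits, so this starts returning wrong values somewhere in the n = 70s, well before 64-bit integers run out:

```c
#include <math.h>

/* Binet's formula with the rounding trick: C's float-to-integer
   conversion truncates, so adding 0.5 first rounds to nearest. */
unsigned long long fib_binet(unsigned int n)
{
    const double sqrt5 = sqrt(5.0);
    const double phi = (1.0 + sqrt5) / 2.0;
    return (unsigned long long)(pow(phi, n) / sqrt5 + 0.5);
}
```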
The next step up is to realize the limits of the hardware number types (integer and floating-point alike), and to write your own bignum implementation (for the logarithmic, integer-based method) or arbitrary-precision implementation (for the direct evaluation). At that point you are writing GNU Scientific Library-style code, well and truly deep into programming, typically with a decade or more of practical experience. So this is not a series of exercises, but rather a simple problem you can attack again and again, whenever you feel the need or inspiration to develop or hone your skills.
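Before writing a bignum implementation, it helps to know where the hardware limit actually is. A small sketch that finds it empirically, assuming GCC or Clang (whose __builtin_add_overflow() reports wraparound instead of silently producing a wrong value):

```c
#include <stdio.h>

int main(void)
{
    unsigned long long prev = 0, curr = 1, next;  /* F(0), F(1) */
    unsigned int n = 1;

    /* Iterate until the next addition would overflow 64 bits. */
    while (!__builtin_add_overflow(prev, curr, &next)) {
        prev = curr;
        curr = next;
        n++;
    }
    /* Prints F(93) = 12200160415121876738. */
    printf("F(%u) = %llu is the largest that fits in 64 bits\n", n, curr);
    return 0;
}
```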
The same goes for most practical problems. One of my old-timey favourites is the venerable sort command, which, in general terms, accepts input as lines and outputs them in sorted order. It is an excellent real-world test bed for many sorting algorithms, precisely because it is a real-world task. It shows very clearly how some algorithms work best when the data is almost sorted, and how some algorithms have weak points (for example, input in exactly reverse order: a rare but occasionally occurring situation). Very informative.
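Plain insertion sort is the textbook illustration of this input-order sensitivity:

```c
#include <stddef.h>

/* Insertion sort: near-linear on almost-sorted input (few elements
   need shifting), but quadratic when the input is in reverse order
   (every element must be shifted all the way back). */
void insertion_sort(int *a, size_t n)
{
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];
            j--;
        }
        a[j] = key;
    }
}
```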
The reason I like it is that when you get past the need to optimize code, and reach the stage where programs are just tools you wield as easily as you might wield a fork or a spoon, you start looking at what to optimize to get the most bang for your effort; you are essentially optimizing your optimization efforts, so that the results are most useful to you or your users.

Because mass storage is still at least one order of magnitude (10x) slower than working memory (RAM), it turns out that if you want to minimize real-world time (the time the human running the command has to wait, if it is run interactively), it is better to sacrifice some CPU efficiency to increase throughput. You do this by reading each line into a sorting abstract data type, usually a tree or a heap. More CPU time is spent sorting the data, but because each line is sorted as soon as it is read, and that sorting typically takes less time than reading the line into working memory, the entire job takes less real-world time to complete. The CPU just worked a bit harder, that's all. (This is obviously not true if the data to be sorted is already in working memory; then it would be best to just copy the data over and run the most efficient sorting algorithm on it.)
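A minimal sketch of this "sort while you read" idea, using POSIX getline() and a hand-rolled binary min-heap of lines (the real sort(1) is far more sophisticated, with locales, sort keys, and external merging for data larger than RAM):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char **heap;       /* binary min-heap of line pointers */
static size_t used, cap;

static void push(char *line)
{
    if (used >= cap) {
        cap = cap ? 2 * cap : 1024;
        heap = realloc(heap, cap * sizeof *heap);
        if (!heap) exit(EXIT_FAILURE);
    }
    size_t i = used++;
    while (i > 0 && strcmp(line, heap[(i - 1) / 2]) < 0) {
        heap[i] = heap[(i - 1) / 2];  /* sift the hole up */
        i = (i - 1) / 2;
    }
    heap[i] = line;
}

static char *pop(void)
{
    char *min = heap[0], *last = heap[--used];
    size_t i = 0, c;
    while ((c = 2 * i + 1) < used) {
        if (c + 1 < used && strcmp(heap[c + 1], heap[c]) < 0)
            c++;                       /* pick the smaller child */
        if (strcmp(last, heap[c]) <= 0)
            break;
        heap[i] = heap[c];             /* sift the hole down */
        i = c;
    }
    heap[i] = last;
    return min;
}

int main(void)
{
    char *line = NULL;
    size_t len = 0;

    while (getline(&line, &len, stdin) != -1)
        push(strdup(line));   /* each line is sorted as soon as read */
    free(line);

    while (used > 0) {
        char *s = pop();
        fputs(s, stdout);
        free(s);
    }
    return 0;
}
```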
Sorting data is in itself a tricky subject, because in most real-world cases the number of unique inputs is limited. This means that sorting ints, longs, floats, or even doubles is perfectly possible in linear time, even though comparison-based sorting has a proven lower bound of n log n comparisons. The reason is that radix sorting is not comparison-based: it does its work in linear time when the number of possible input values is limited. So, given enough data, it will beat all comparison sorts; the sketch below shows the idea. Currently, the crossover point is somewhere around millions of doubles, with common comparison sorts like quicksort being faster for smaller amounts of data. Again, if you have useful information about the real-world data to be sorted, you can make the typical cases very fast, at the cost of some other cases being slower than optimal.

This means that a very good programmer (or "software architect", designer, problem-solver, or whatever; not the code monkey, but the person who decides how the program solves the problems it needs to solve) has to be aware of 1) real-world needs, 2) the possible algorithms and tools (to have as wide a selection as possible, instead of treating all problems the same), and 3) the capabilities of the hardware.
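A minimal LSD radix sort sketch for 32-bit unsigned keys, one byte per pass; sorting doubles this way additionally requires a monotonic bit-transform of the IEEE 754 representation first, omitted here:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* LSD radix sort: four stable counting-sort passes, one per byte.
   Linear in n because the key range per pass is fixed (256 values). */
void radix_sort_u32(uint32_t *a, size_t n)
{
    uint32_t *tmp = malloc(n * sizeof *tmp);
    if (!tmp)
        return;

    for (int shift = 0; shift < 32; shift += 8) {
        size_t count[257] = { 0 };

        for (size_t i = 0; i < n; i++)          /* histogram */
            count[((a[i] >> shift) & 0xFF) + 1]++;
        for (int b = 0; b < 256; b++)           /* prefix sums -> offsets */
            count[b + 1] += count[b];
        for (size_t i = 0; i < n; i++)          /* stable scatter */
            tmp[count[(a[i] >> shift) & 0xFF]++] = a[i];

        memcpy(a, tmp, n * sizeof *a);
    }
    free(tmp);
}
```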
But even more importantly, a good programmer is always learning more. Because you never know enough.
It all starts from interest in problem solving. Anything and everything you can learn about that will help. I'm not talking only about problem solving via computer programs: even an oddball trick like solving a Rubik's Cube in an orderly fashion is really just an algorithm, and understanding how different algorithms work is like getting new tools in your toolbox. Then you use those different tools on various problems you are interested in, to see how they work. Real-world experience beats theoretical knowledge every time, so expect to do a lot of this, basically for the rest of your life. Fortunately, it is both fun and rewarding!