If you are talking about complex processors, there are often 6-12 voltage buses. This means that the power for those subsections comes in through those pins. We have a design with an Atom processor and I think we have 10 voltage buses; many of them are at 1.5V, but they feed various subsystems on the processor.
The problem that chip makers are FINALLY solving is putting power and ground right next to each other. For the longest time there was a group of pins, balls, or pads that were VCC and a group that were VSS, with a decent gap in between them. This made decoupling a nightmare and loop currents a big problem. Current flows in fields, and those fields had to spread out far too wide to get the job done. That meant more EMI and more noise everywhere in the circuit.
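To put some rough numbers on that (purely illustrative, nothing here is from a datasheet): at high frequency the impedance of that decoupling loop is mostly inductive, X_L = 2*pi*f*L, and the loop inductance grows with the loop area. So spreading the VCC and VSS groups apart can easily cost you an order of magnitude in impedance, assuming something like 1 nH for a tight pair and 10 nH for widely separated groups:

```python
import math

def inductive_reactance(freq_hz: float, inductance_h: float) -> float:
    """Magnitude of the inductive reactance X_L = 2*pi*f*L, in ohms."""
    return 2 * math.pi * freq_hz * inductance_h

freq = 100e6  # 100 MHz switching-noise component (assumed)

tight_loop = 1e-9    # ~1 nH: VCC and VSS balls right next to each other (assumed)
spread_loop = 10e-9  # ~10 nH: VCC and VSS groups far apart on the package (assumed)

for label, loop_l in [("adjacent power/ground", tight_loop),
                      ("separated power/ground", spread_loop)]:
    x_l = inductive_reactance(freq, loop_l)
    print(f"{label}: {loop_l*1e9:.0f} nH loop -> {x_l:.2f} ohm at {freq/1e6:.0f} MHz")
```

With the separated groups the decoupling caps are fighting roughly ten times the impedance, which is why the noise ends up everywhere.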
So to try to answer your question: the current is pretty much shared evenly over the pins on the same bus. Looking at an i7 datasheet, while the voltage is only 1.5V, the max current is 145A!
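A quick back-of-the-envelope sketch of what that sharing buys you. The pin count and per-pin resistance below are made-up assumptions for illustration; only the 1.5V / 145A figures come from the datasheet mentioned above:

```python
# Back-of-the-envelope current sharing (pin count and resistance are assumed).

rail_voltage = 1.5     # V, from the datasheet figure above
total_current = 145.0  # A, from the datasheet figure above
n_pins = 100           # assumed number of balls on that rail
r_pin = 0.010          # ohm, assumed resistance of one ball + via path

# Shared over 100 parallel pins
i_per_pin = total_current / n_pins             # ~1.45 A per pin
drop_parallel = i_per_pin * r_pin              # ~14.5 mV lost in the connection
loss_parallel = n_pins * i_per_pin**2 * r_pin  # ~2.1 W dissipated in the pins

# Same current forced through a single connection of the same resistance
drop_single = total_current * r_pin            # 1.45 V -- basically the whole rail
loss_single = total_current**2 * r_pin         # ~210 W -- that pin melts

print(f"per-pin current: {i_per_pin:.2f} A")
print(f"parallel feed : {drop_parallel*1e3:.1f} mV drop, {loss_parallel:.1f} W in the pins")
print(f"single feed   : {drop_single:.2f} V drop (vs a {rail_voltage:.1f} V rail), {loss_single:.0f} W in the pin")
```

The exact numbers depend on the real ball count and contact resistance, but the scaling is the point: the IR drop and the heat both collapse once the current is split up.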
So we have the problem of just getting the power in there without melting everything, and of getting it in without dropping a ton of voltage across the resistance of the path. We also need a low-inductance path, or we will starve the processor when it makes a rapid power request. All of these can be achieved with parallel feeds. The inductance of the power path through a single ball or pin is pretty high, but when you have 100 of them in parallel it is a tiny fraction of that, just like dropping in multiple vias to lower their overall inductance. At the response times we are now working with, even the inductance of the power path into the silicon becomes a factor. 100 little connections actually perform better than one big one.
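Here is a rough sketch of the transient side, treating the pins as identical uncoupled inductors (so N in parallel gives L/N) and using made-up values for the per-pin inductance and the load step:

```python
# Why parallel pins help with a rapid power request (all values assumed).

l_pin = 5e-9   # H, assumed inductance of one ball/pin power path
n_pins = 100   # pins in parallel on the rail
di = 50.0      # A, assumed size of a sudden load step
dt = 10e-9     # s, assumed time the step takes

l_parallel = l_pin / n_pins          # identical uncoupled inductors in parallel: L/N
di_dt = di / dt                      # A/s

droop_single = l_pin * di_dt         # V = L * di/dt through one pin
droop_parallel = l_parallel * di_dt  # same step through 100 pins

print(f"effective inductance: {l_parallel*1e9:.2f} nH")
print(f"one big pin   : {droop_single:.1f} V of droop for the {di:.0f} A step")
print(f"100 small pins: {droop_parallel:.2f} V of droop for the same step")
```

With the single path the V = L * di/dt droop is many times the rail voltage, which is just a way of saying the rail collapses; spread over 100 pins it comes down to something the package and die decoupling can actually handle.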