# extend the base values one decade down and one decade up
values = np.hstack((values/10, values, values*10))

I'm not sure what you mean by "it doesn't use the entire stock if the middle is chosen". My tool creates a list of all possible n-element sub-arrays (where n is the number of resistors chosen), calculates the equivalent value of each according to the chosen method (parallel or series), stores all the values, and keeps the ones within tolerance of the desired value. I'm not actually "solving a network"; I'm doing an exhaustive search and filtering the good values out of all the possibilities, as sketched below.
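To make that concrete, here is a minimal sketch of that enumerate-and-filter idea (the names parallel, series and find_within_tolerance are illustrative, not my actual code):

from itertools import combinations_with_replacement

def parallel(resistors):
    # equivalent resistance of resistors in parallel
    return 1.0 / sum(1.0 / r for r in resistors)

def series(resistors):
    # equivalent resistance of resistors in series
    return sum(resistors)

def find_within_tolerance(stock, n, method, target, tol_pct):
    # enumerate every n-element combination of the stock (repetition allowed),
    # compute its equivalent value, and keep the ones within tolerance
    hits = []
    for combo in combinations_with_replacement(stock, n):
        value = method(combo)
        error = 100.0 * abs(value - target) / target
        if error <= tol_pct:
            hits.append((error, value, combo))
    return sorted(hits)  # smallest error first

Calling find_within_tolerance(stock, 2, parallel, 6730, 1) would then return every 2-resistor parallel combination within 1% of 6.73k, best first.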
OK, I played a bit with both our tools to work out some examples. Here is one:
For example, let's say we have a full stock of E12 values, in the range from 1 Ohm to 1 MOhm.
And let's say we need three missing values: 6.73k, 67.3k and 673k, each to be approximated using a parallel network of at most 2 resistors.
My tool suggests the following:
Formatted as: "Request: Approximation = Network (Error)"
6.73k: ~6.733k = 6.8k || 680k (0.039%)
67.3k: ~67.754k = 82k || 390k (0.675%)
673k: 680k = 680k (1.040%)
In other words, it respects the boundary of our stock at 1 MOhm. When we request a value closer to the middle of the stock range (6.73k), the tool has more options and can find a closer fit. As the request gets closer to the boundary, the options become more limited and the error grows.
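The error column is just the relative deviation of the network's equivalent value from the request, which is easy to sanity-check in plain Python:

def parallel2(r1, r2):
    # equivalent resistance of two resistors in parallel
    return r1 * r2 / (r1 + r2)

for target, r1, r2 in [(6730, 6.8e3, 680e3), (67300, 82e3, 390e3)]:
    eq = parallel2(r1, r2)
    print(f"{target}: {eq:.1f} ohm, {100 * abs(eq - target) / target:.2f}% error")

This prints roughly 6732.7 ohm / 0.04% and 67754.2 ohm / 0.67%, matching the figures above up to rounding.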
I then tried your tool using:
print_results("Parallel combinations for 6.73k: " , find_combo(parallel,2,E12,6730,1))
print_results("Parallel combinations for 67.3k: " , find_combo(parallel,2,E12,67300,1))
print_results("Parallel combinations for 673k: " , find_combo(parallel,2,E12,673000,1))
which generates:
Parallel combinations for 6.73k:
0.7% -> 8200.00 39000.00
1.0% -> 12000.00 15000.00
########################################
Parallel combinations for 67.3k:
0.7% -> 82000.00 390000.00
0.9% -> 120000.00 150000.00
########################################
Parallel combinations for 673k:
0.7% -> 820000.00 3900000.00
1.0% -> 1200000.00 1500000.00
########################################
Which means:
1. Your tool does not find the optimal approximation when extra range is available. It suggests 8.2k || 39k rather than the better 6.8k || 680k, because it only looks one decade away from the target value. (Of course you can add more decades by appending values*100 and values/100; see the sketch after this list.)
2. Your tool feels free to use values that we don't have in our stock: when building 673k it uses 3.9M, which we don't have.
(Both issues stem from there being no interface for specifying stock boundaries.)
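For reference, widening your search window by two decades would look like this (assuming values starts as one decade of E12 base values):

import numpy as np

# one decade of E12 base values
values = np.array([1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2])

# extend two decades down and two decades up instead of one
values = np.hstack((values/100, values/10, values, values*10, values*100))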
In short, your tool uses whatever stock it finds convenient, rather than the actual stock available to a project. I personally feel this is an important limitation, as I like my actual stock to be respected and fully utilized. I guess your approach may be OK if you make sure to only request targets that are at least one decade away from your stock boundaries, and don't mind occasionally missing a better fit.
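For what it's worth, a crude way to add that to your approach would be to clip the expanded array to the stock limits before searching, e.g. for my 1 Ohm to 1 MOhm stock:

# keep only values that exist in the actual stock (1 Ohm .. 1 MOhm here)
values = values[(values >= 1.0) & (values <= 1e6)]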