EEVblog Electronics Community Forum
Electronics => Projects, Designs, and Technical Stuff => Topic started by: Zucca on October 08, 2018, 10:40:30 pm
-
I want to upgrade my father's chinesium ebike from stupid AGM to a Li-ion battery. It's a crappy bike, but it was a present...
After the chinesium charger melted the 3x12V batteries (already replaced once), I threw those stupid heavy AGMs (my mother doesn't even want to touch the bike because it's too heavy) out of the window, together with the charger.
So I got a used ebike li-ion battery from evilbay for a very reasonable price, plus I got myself another used ebike li-ion charger, again stupid cheap. Cheap IS a must, I am not gonna spend money on a sick horse. :horse:
The battery I got is a Panasonic NKY467B2 36V 15Ah 540Wh, and the charger is 42V 2A.
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=542678;image)
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=542684;image)
In the description the gauge reported 4/5 state of charge, so far so good.
Today it arrived and I started to play with it. Battery voltage was 22V, freaking 22V, but the gauge still shows 4 of 5 LEDs on... so I can't trust those LEDs. Oh well...
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=542708;image)
I started the rescue mission.
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=542702;image)
First I opened the battery... very well made. It's a 900€ (new!) battery, so I was expecting something good inside.
The package is a potting fest inside a nice plastic bag with an air vent (Gore-Tex?). There are two main connectors: one for the bike/big charger and one for a small charger unit.
Here are some pics
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=542654;image)
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=542666;image)
smaller charger connector
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=542672;image)
I took the battery from 22V to about 30V at 100mA charging current, then moved to about 1C up to 36V... it has just reached 36V now. Of course I checked the individual cell voltages, and they always stay pretty close to each other. Who knows if the battery is still good... meh.
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=542696;image)
It's a 10S3P configuration BTW, I am almost sure.
Now I have to understand how that BMS works and rev-eng it...
Do you guys know what these letters mean on the bike/big charger connector?
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=542660;image)
+ and -, I know it
C: the charger port
D: ? (Drive?)
S: ?
Let's see if my father's ass will burn on that upgraded bike. Safety first... :-DD
Any help is appreciated. More to come.
-
Nice test bench. BMS stands for battery management system. The BMS is your friend, as it will shut the entire battery pack down if any of the lithium cells ventures below 3 volts while discharging, and it will stop charging at 4 volts per cell. This is good, as discharging a lithium cell below 3 volts is throwing the dice: you can get away with it a few times, but once too often and the cell is toast. I would leave the BMS alone. The bike will run fine on 38 volts, so why change it? A bird in the hand...
On another note, if the battery pack was stupid cheap, then chances are there is a bad cell which will limit the bike's range when the BMS shuts it down. With a BMS you are only as good as the weakest cell. The way to tell is to cycle the battery pack through a charge and discharge. Towards the end of the discharge, measure the individual cells. If most read 3.5 volts but one reads 3.1, then you've found the bad guy. With a little luck they will all read the same voltage ±200 mV and you are good to go. :-+
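To automate that end-of-discharge check, a trivial script can flag the outlier cell. This is only a sketch: the 0.3V margin below is an illustrative assumption based on the 3.5V-vs-3.1V example above, not a hard rule.

```python
def find_weak_cells(cell_voltages, margin=0.3):
    """Flag cells that sit well below the pack median near the end
    of discharge. margin (volts) is an assumed threshold."""
    s = sorted(cell_voltages)
    median = s[len(s) // 2]
    return [i for i, v in enumerate(cell_voltages) if v < median - margin]
```

For a healthy pack the list comes back empty; for the 3.5V/3.1V example it points straight at the sick cell.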
I was going to leave at this but I have to know. What is the square box in the lower right corner of the test bench with the round circle on it?? A compass ?
-
I was going to leave at this but I have to know. What is the square box in the lower right corner of the test bench with the round circle on it?? A compass ?
Do you mean the Nespresso coffee box?
Thanks John for staying with me in this madness. I charged the battery up at 1C to 40V and the 10 cells were all at about 4.00V, so cool :clap:.
No temperature rise whatsoever; it looks good.
I don't like that BMS for two reasons:
1) The state of charge was showing 4/5 full at a 22V battery voltage. That can't be right.
2) I think there are MOSFETs
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=542714;image)
https://www.st.com/resource/en/datasheet/stp260n6f6.pdf (https://www.st.com/resource/en/datasheet/stp260n6f6.pdf)
to enable the + and the charging port... and I think that's why I didn't get 40V on the external bike connector between + and -. This means I also need to do some reverse engineering before attaching the battery charger.
Best case, the BMS is stuck in an error state and a power cycle with a charged battery will fix it.
Now I am logging the self-discharge voltage of the battery with that BMS attached... we'll see.
-
Ok, instead of reverse engineering I will completely remove that BMS and go with these cheap cheap chinesium toys.
BMS, I hope 20A cont./30A peak is enough:
(https://i.ebayimg.com/images/g/NOYAAOSwj~9bhC8v/s-l1600.jpg)
eBay auction: #263901952562
Battery Gauge:
(https://i.ebayimg.com/images/g/Um0AAOSwUPZbMQAf/s-l1600.jpg)
eBay auction: #162766012576
and done deal.
PS: Don't you love the technical description of those cheap toys?
I think I will connect the BMS like this:
(https://i.ebayimg.com/images/g/WC0AAOSw6FZbpinv/s-l1600.jpg)
crystal clear, thank God there are just + and - in a battery...
How many times did they put a - instead of a +? :horse: |O
(https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTx3jNdNqqxmTn6I_RpIT7sZp4GihLD0jVPVci-ATwaNb1od6Yj)
EDIT: This makes more sense:
(https://ae01.alicdn.com/kf/HTB1msThQVXXXXanapXXq6xXFXXXk/Darmowa-Wysy-ka-36-V-bateria-litowo-jonowa-obwodu-ochrony-10-S-36-V-37-V.jpg)
-
Keep the original BMS! It will be way better than a replacement. The state of charge usually is determined by current going in & out and it will be tuned for the particular cells used. If you do one discharge / charge cycle the state of charge will show the right value. Unlike other chemistries you can not use the voltage for the state of charge in Li-ion batteries. The original BMS probably shows a wrong value because the cells self-discharged.
-
If you do one discharge / charge cycle the state of charge will show the right value.
Hi nico, thanks for jumping in. The problem is I don't know how to properly charge or discharge through that BMS: I don't have the VBat voltage accessible (on either the charge or discharge positive cables) on those connectors, because the BMS is cutting them off somehow. I could buy the 200€ charger for it, give it a go, reverse engineer it and ship it back... meh...
Not sure if it is worth it.
Surely it would be better to keep it, but that crap bike will not be used for critical missions: a 10-minute ride into town and back with fresh bread. Who cares if the SoC is not precise.
What I need is a way to see if the charger has done its job: 42V? Yes, then it's full, and dad can disconnect the charger.
On top of that, a proper reverse engineering would require lifting that board to see what is on the other side... so I would destroy the potting anyway... and hope to understand something.
14€ and job done. Surely a chinesium-quality job, but it's a chinesium bike. :-\ ::)
-
If you charge to 40V then it will be OK. A Li-ion charger uses a current limit until it reaches a certain voltage. From there, the voltage is kept constant. The charge ends when the current drops below a certain threshold, say 200mA for this kind of battery.
-
According to my tests, here is what the 10€ used li-ion eBay ebike charger does (from memory, I have written notes at home):
22V<VBat<40V: 2A (I don't like the 2A at <33V, meh)
40V<VBat<42V: the 2A tapers down almost linearly
42V: 0A
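As a sanity-check model, the observed behavior can be sketched as a constant-current phase with a linear taper. The breakpoints come from the measurements above; the strictly linear taper is an assumption, not the charger's documented behavior.

```python
def charger_current(v_bat, i_cc=2.0, v_taper=40.0, v_full=42.0):
    """Approximate charger output current vs. battery voltage:
    constant current below v_taper, a roughly linear ramp down
    to 0A at v_full. Parameter values are taken from the rough
    measurements quoted in this post."""
    if v_bat < v_taper:
        return i_cc
    if v_bat >= v_full:
        return 0.0
    # linear taper between 40V and 42V
    return i_cc * (v_full - v_bat) / (v_full - v_taper)
```

E.g. halfway through the taper (41V) this model predicts about 1A, which matches the observations reasonably well.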
I think we are in business.
The only missing pieces of the puzzle are the thermal analysis and the hope that max 20A cont., 30A peak at 36V will be enough for this sick horse:
(https://image.made-in-china.com/43f34j00iyptEMofhKkC/Red-36V-Lead-Acid-Battery-City-Style-E-Bike-JSL005A-3-.jpg)
which is very similar to mine... oh, according to the www those are 250W bikes, so 250/36 ≈ 7A :-+
:popcorn:
-
Back home now; the self-discharge test looks promising
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=543479;image)
Moreover, here are the cheap-ass li-ion ebike charger test results:
| Vout [V] | Iout [A] |
| 20.5 | 2.248 |
| 41.75 | 2.248 |
| 41.80 | 1.988 |
| 41.85 | 1.707 |
| 41.88 | 1.528 |
| 41.94 | 1.2 |
| 41.97 | 1.021 |
| 42 | 0.831 |
| 42.060 | 0.5 |
| 42.1 | 0.261 |
| 42.1249 | 0.1 |
| 42.140 | 0 |
meh... I can live with it for a 36V 14Ah battery.
-
Unlike other chemistries you can not use the voltage for the state of charge in Li-ion batteries.
Completely the opposite.
On li-ion (except maybe LFP), the voltage is a way better estimate of SoC than on almost any other chemistry. On NiCd, NiMH, and lead acid, a SoC estimate based on voltage is almost useless, because the discharge curve is almost a flat line: the voltage difference between 80% and 20% may be just 5%, or even less.
This is simply because li-ion has an open-circuit voltage range running from about 4.2V (100%) to about 3.4V (0%), and on most li-ion chemistry variants there is no "flat" part of the curve. While the curve isn't linear, it isn't a "straight line with only a sudden drop" either.
On li-ion, the voltage difference between 80% and 20% is typically around 15%, making voltage measurement a feasible way to approximate the SoC. It's often accurate enough, especially with a non-linear lookup table. Older consumer stuff especially simply used voltage. Typically "good enough": expect ±20% accuracy when somewhat calibrated for the particular cell type. This is way better than a poorly designed coulomb counter, which can fail totally and easily produce 100% error.
But depends on the exact li-ion chemistry. You can look at the actual curves collected, for example, here:
https://lygte-info.dk/review/batteries2012/Common18650CurvesAll%20UK.html
LFP is an exception, with its almost flat discharge curve.
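The non-linear lookup mentioned above is only a few lines of code. This is a minimal sketch: the OCV/SoC breakpoints below are hypothetical values for a generic NMC-style cell, chosen for illustration; real numbers should come from a measured discharge curve such as the ones linked above.

```python
# Hypothetical OCV-to-SoC breakpoints (assumed, not measured):
OCV = [3.40, 3.55, 3.65, 3.75, 3.87, 4.00, 4.10, 4.20]  # volts
SOC = [0,    10,   25,   45,   65,   82,   93,   100]    # percent

def soc_from_voltage(v):
    """Piecewise-linear interpolation of SoC (percent) from a
    single cell voltage, clamped to the table ends."""
    if v <= OCV[0]:
        return 0.0
    if v >= OCV[-1]:
        return 100.0
    for i in range(1, len(OCV)):
        if v <= OCV[i]:
            t = (v - OCV[i - 1]) / (OCV[i] - OCV[i - 1])
            return SOC[i - 1] + t * (SOC[i] - SOC[i - 1])
```

The same table-driven approach works with any chemistry; only the breakpoints change.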
-
Unlike other chemistries you can not use the voltage for the state of charge in Li-ion batteries.
Completely the opposite.
On li-ion (except maybe LFP), the voltage is a way better estimate of SoC than on almost any other chemistry. On NiCd, NiMH, and lead acid, a SoC estimate based on voltage is almost useless, because the discharge curve is almost a flat line: the voltage difference between 80% and 20% may be just 5%, or even less.
I think you got this completely the other way around. The charge/discharge curve on most Li-ion cells is flat for a big part of the cycle, and the temperature has a much bigger influence on the cell voltage than the charge level. IOW: the cell voltage is completely useless for determining the amount of charge (SoC). This is the reason why quality BMSes use a charge gauge which is calibrated for the specific chemistry of the cells, reaching accuracies of 1%. One of my customers makes Li-ion batteries, so I know a thing or two about Li-ion cells and BMSes.
-
Please check the link I gave you with the actual curves. They have a lot! You can easily see they are not flat. My measurements give similar curves. This is again one of those funny discussions; it's always a strange feeling when someone comes around the corner and tells you that something you have been doing just fine on a daily basis for years is "impossible" or "does not work" |O. But let's be more specific here, I hope this info helps someone:
I have designed and built battery measurement & cycle testing equipment and performed probably millions of test cycles on at least 30 different cells of LFP, NMC, NCA, LCO and LMO chemistries. I have designed and small-scale commercialized a few BMS systems as well, and built two concept EV conversion battery systems (20 kWh and 40kWh) to test said systems on the road. One of the cars was in daily driving for years I think, giving a lot of data to look at at varying real-life conditions. I was doing some of this research for some time at a university (including curve testing, internal resistance testing, self discharge testing, capacity fade testing, and DCR rise testing); I had some very good chances to communicate with the chief chemist of a commercial li-ion manufacturer there. These kinds of contacts are really valuable when you need to deal with the massive flow of true and false information on the 'net - even in academic research.
Then I went freelance to design another piece of cycle-testing equipment. We did quite a few measurements for companies designing large li-ion equipment. The problem with li-ion cells is this: even if you are a customer buying in the tens of millions, you are not big enough to get real data on the products. So you need to test. And to do that, you outsource it to a test lab. A test lab, then, has a $1,000,000 setup of MACCORs. Or you can use some HobbyKing-style battery testers. We tried a PowerLab; they are utter jokes and toys (as expected). So I designed some sanely working, redistributive (energy-saving), "accurate enough" (to 0.5% typically) equipment which can automatically cycle, perform different pulsing for DCR measurements, inject specific AC current waveforms, measure AC resistance, and log capacity, voltage and temperature... With configurable channels, we could run 20 different cells at ±20A each, or run fewer tests with paralleled channels. We had some 100A discharge tests going on some 6Ah cells for over a thousand cycles. We used fridges and heatbeds to prove things the customers didn't want tested, but we knew how the chemistry works and needed to prove the point. And so on...
Anyways, to the results, and back to the question of do you need coulomb counting or is voltage measurement enough:
With LFP (lithium iron phosphate), I find coulomb counting almost absolutely necessary, since the discharge curve is indeed flat. You are probably thinking about this specific chemistry, which has become quite a niche IMHO. To be fair, some other chemistries are somewhat flat as well: I have seen some fairly flat NMC (nickel-manganese-cobalt oxide) cells. Not as flat as typical LFP, but I'd expect those to be difficult too. They are in the minority, though.
In expensive, large-pack EVs, I use coulomb counting as well, because 1) the relative cost isn't prohibitive; 2) users often find they need an exact, steadily dropping "km/miles of range left" indication, and panic if the range jumps near the end; and, last but not least, 3) these systems tend to be fully charged after almost every cycle, or almost every day, giving a reliable reset point for the integrator. This is really important! No one builds an everyday BMS with 0.1% or 0.01% precision current sensing; that exists in said $1M MACCORs. ±2% is reality, and this means the counter drifts below the accuracy of a simple voltage-based system in just a few cycles if not reset. And resetting is always based on voltage; the trick is that it happens under well-known conditions, such as 4.20V at a C/20 charge rate.
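The integrator-plus-reset idea above can be sketched in a few lines. This is only an illustration: the capacity, sign convention and the 4.20V-at-C/20 reset condition are assumptions following the description in this post, not any particular product's algorithm.

```python
class CoulombCounter:
    """Minimal coulomb-counter sketch: integrates current into a
    SoC percentage and re-anchors to 100% when a known full-charge
    condition is observed (here: >= 4.20V per cell while charging
    at <= C/20). Positive current = charging."""

    def __init__(self, capacity_ah, soc=50.0):
        self.capacity_ah = capacity_ah
        self.soc = soc  # percent

    def update(self, current_a, dt_s, cell_v=None):
        self.soc += 100.0 * current_a * dt_s / (self.capacity_ah * 3600.0)
        self.soc = max(0.0, min(100.0, self.soc))
        # Voltage-based reset point: anchors the drifting integrator.
        if (cell_v is not None and cell_v >= 4.20
                and 0.0 < current_a <= self.capacity_ah / 20.0):
            self.soc = 100.0
        return self.soc
```

Without the reset branch, a realistic ±2% current-sense error makes the integrated value drift away within a handful of cycles, exactly as described above.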
But in simple, small gadgets, I always tend to lean towards voltage-based estimation; sometimes I'm so lazy I just do it as a linear approximation, and not a single customer has complained about the nonlinearity of even these solutions! It runs down faster when the battery is starting to get empty, and wiggles around with the load a bit. How usable it is depends on the exact specifics, but your blanket statement that it's not usable is clearly untrue. I find it usable more often than not.
Agreed, using a chemistry-specific lookup table provides much better results, and it's utterly trivial to implement: download a curve of your cell of choice from lygte-info.dk, who probably already tested it, and you are done! Yes, the readout wiggles with changing load, but that's intuitive for the customer. It can even be a good feature: what good does it do to show a steady 10% SoC on screen, and then have the product suddenly die when you press the pedal or push a button, with the load peak hitting the LVC limit behind the scenes? With the voltage-based estimate, you see it momentarily wiggle alarmingly towards 0%, and you know it'll die any time soon with the next peak; it doesn't take a li-ion chemist to understand this UI behavior. Very intuitive, especially on a graphical bar scale. It's "good enough" out of the box, without complex algorithms or UI design.
Voltage-based approximation has one very good trait: it's stateless, and it automatically tracks the actual remaining capacity with absolutely no algorithm! The accuracy is what it is (typically around ±20% when people want a simple single number), but it works very robustly in cases where natural reset points for the integrator do not occur. It's an absolute classic to see a completely invalid battery-level integration on a coulomb-counting based system after only about ten cycles; this has always happened with laptop battery management, for example, and people have complained about it for ages. What's the value of "±1% best-case accuracy" if the worst-case accuracy is ±50%, and the algorithm cannot tell you whether to trust it or not?
That being said, really well-designed coulomb counters can, of course, fall back to voltage mode, apply Kalman-filter-like corrections to the integrator value all the time, and apply temperature and current compensation to get, say, ±10% accuracy in voltage mode and ±1% accuracy in integration mode. While I have seen discussions of such algorithms, and given this idea quite some thought while walking in the forest hunting (edible) mushrooms, I have never seen such a system implemented in practice. Maybe some magical IC is already getting there? I don't know; I don't trust them after seeing so many total failures.
It's worth mentioning that coulomb counting the exact SoC, even when accurate, serves only one purpose: the user experience. This info is typically not used for battery management purposes.
I did quite some work as a BMS failure analyst. Totally unplanned; it just happened that failing systems were everywhere. Typically very complex and overengineered. Although the most typical failure mode is that they imbalance the pack and overdischarge a cell (or several cells), it's also very typical that they implement broken-by-design coulomb-counting algorithms.
Use whatever tools you need to use, but if you are unsure, and if you are not really up to the task, choose the simplest and most naturally robust way to do it, if at all possible! A four-or-five-LED indication of the approximate voltage is really useful in real life. Something like that has always been used in approximate battery gauges; it only works for alkaline cells, and li-ion, surprisingly!
-
In my experience, estimating the charge level of a Li-ion/LiPo battery from the battery voltage alone is often pretty inaccurate and has many pitfalls. For one, it depends so much on the actual current-draw profile that it's hard to use in the many applications where the current draw is pulsed and not at all constant, with periods of low load (maybe in the µA range) and peaks of tens or hundreds of mA. A constant-current load is much more predictable, but a lot of applications are not a constant-current load.
I now tend to use ICs that do a good job at this task ("battery fuel gauge") while using nicely developed algorithms based on voltage, current and accumulated energy, such as the MAX1704x series.
-
I now tend to use ICs that do a good job at this task ("battery fuel gauge") while using nicely developed algorithms based on voltage, current and accumulated energy, such as the MAX1704x series.
Thanks for pointing this out - if you read the datasheet, you'll see they point out the problems with coulomb counting.
Yes. This IS a voltage-based device! They don't measure the current. This is exactly what I have been talking about.
I wonder how you didn't notice you didn't connect any current sense leads anywhere while using this device? How do you think it measures current?
They show an example error analysis in one particular test case, with -7%/+3% error over the SoC. This is fairly typical for a curve-compensated, temperature-compensated algorithm. It's probably way more accurate in worst-case conditions (probably around ±15%) than the classical ones that use current sensing and coulomb counting (with underdeveloped algorithms) and can show any random value.
The best thing? I'm 99% sure their touted magical "sophisticated battery model" which "simulates the internal dynamics of a Li+ battery" is something simple and trivial ;). The 1k/1µF RC filter they suggest rules out most "AC trickery" as well. Since they don't measure the current, they can't compensate for it (directly).
Just by filtering the shit out of the signal (for example, with a cumulative moving average on the MCU, with a time constant in the range of half a minute), you get a nice smooth number like in coulomb-counted systems. So you'd never know it's "just" voltage based!
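For example, a simple exponential moving average does this kind of smoothing in one line per sample. It stands in here for the cumulative moving average mentioned above; the alpha value is an assumed tuning parameter (smaller alpha = longer time constant), not anything from the MAX1704x datasheet.

```python
def smooth_soc(samples, alpha=0.02):
    """Exponential moving average over raw voltage-based SoC
    readings. With a per-second update, alpha around 0.02-0.03
    gives a time constant in the half-minute ballpark."""
    avg = samples[0]
    out = []
    for s in samples:
        avg += alpha * (s - avg)
        out.append(avg)
    return out
```

Feed it the raw lookup-table output and the displayed number stops jumping with every load peak while still tracking the trend.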
OTOH, I would assume that to get such a 7% error figure, you would need this:
"To achieve optimum performance, the MAX17043/MAX17044 must be programmed with configuration data custom to the application. Contact the factory for details."
This would include the cell curve lookup table, I guess.
It might be easier to just upload the graph from lygte-info.dk to your MCU and do it yourself; depends on how keen you are on developing your own, I guess. The obvious plus side would be the BOM and complexity savings: since this IC doesn't provide any "analog" feature you actually need, if your MCU has an ADC this is just some secret code running on a separate device, and a way to license the algorithm would make so much more sense.
-
I mentioned the MAX series as an example, and those are the ICs I currently use for this task (I like the elegance of MAXIM's approach and find them reasonably accurate). But I've also used other similar ICs that were based on coulomb counting only, or on a mix of current measurement and coulomb counting, and those were not that bad actually. Certainly much better than just predicting the SoC from the loaded cell voltage on a proportional scale, as I've seen done a few times. Using a cell voltage measurement in addition to the estimated consumed charge helped get more accurate predictions AFAIR, but that was pretty much calibrated for a given battery model.
If you think about it, even though MAXIM doesn't disclose their algorithm, current is still used indirectly IMO. For a given battery model, short-term variations of the cell voltage will give you a good indication of the drawn current if you have enough resolution. They may not *directly* rely on current estimation, but I think it still has its influence indirectly in their estimation.
As they state, and as I remember from someone working at SAFT, the open-circuit voltage of a Li-ion battery is actually a good indicator of its SoC. But in a real, always-on application the OCV can't (usually) be directly measured, so this is an indirect estimation based on the loaded cell voltage as far as I've understood it; I guess they most likely use "short"-term variations as well as the average value over a longer term.
-
To clarify: the battery gauge chips (like the ones from TI) obviously do more than just measuring charge (current going in/out); they also measure end-of-charge/discharge voltage and keep track of aging, but the secret sauce is never revealed.
If you measure only the voltage, then from what I've seen and read you'll have problems with varying loads and temperatures, so if you are going to make a voltage-based charge monitor you'll need tables of voltage versus temperature and current to determine the amount of charge going in/out. That seems more complex (or less accurate) than measuring the charge going in and out and adjusting that by reading the voltage at points in the charge/discharge curve where it varies a lot.
@Siwastaja: it is difficult to assess where someone's knowledge comes from. You have obviously done your homework.
-
I have recently been looking at the Texas Instruments Impedance Track goodies.
They basically do a voltage lookup, where the LUT is chemistry-specific.
They estimate the open-circuit voltage using the measured voltage, current and temperature (several more lookups). To account for variation and ageing, some parameters are learnt in situ; most chips include a coulomb counter to help with that.
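The core of such an OCV estimate is just IR compensation of the loaded terminal voltage. The sketch below shows only that core idea; in a real gauge the internal resistance itself comes from temperature/SoC lookup tables and is adjusted over the pack's life, which is omitted here.

```python
def estimate_ocv(v_loaded, i_load, r_internal):
    """Back out the open-circuit voltage from the loaded terminal
    voltage: V_oc ~ V_loaded + I_load * R_internal, with discharge
    current taken as positive. r_internal would normally be a
    lookup over temperature and SoC rather than a constant."""
    return v_loaded + i_load * r_internal
```

The recovered OCV then goes into the chemistry-specific voltage-to-SoC lookup, which is exactly the "LUT" part described above.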
Currently having a go with a BQ40Z50-R2, which does 1 to 4 series cells, and includes protection, gauging and (low current) balancing. Optionally generates commands for a smart charger.
I hear a lot along the lines of "don't use a BMS, they kill your batteries." But you at least need per-cell voltage protection and overall pack current protection, so I think it really should be "don't use a crap BMS."
-
“don’t use a crap BMS.”
Well, I just bought a cheap-ass 10S Li-ion chinesium BMS, as stated before. How can I test whether it is crap or not?
I imagine using:
- a ladder of 10 resistors to simulate the 10 Li-ion cell voltages
- a DC load to simulate the battery when charging, or the load on the battery in normal use
- one or two PSUs to simulate the charger or the battery
By adding a resistor in parallel with one rung of the ladder, I can simulate a low cell and see how the BMS behaves... and so on.
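As a sanity check before wiring anything up, the ladder tap voltages are easy to compute. This is a sketch with assumed values (1k rungs, 42V supply); it ignores any loading from the BMS sense inputs.

```python
def ladder_tap_voltages(v_supply, resistors):
    """Tap voltages (from the bottom) of a series resistor ladder
    used to fake cell voltages for BMS testing."""
    total = sum(resistors)
    taps, v = [], 0.0
    for r in resistors:
        v += v_supply * r / total
        taps.append(v)
    return taps

def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)
```

With ten equal 1k rungs at 42V, every "cell" sits at 4.2V; paralleling a second 1k across one rung halves that rung to 500R, dropping its "cell" voltage and raising all the others, which should trip a working BMS's low-cell detection.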
Any other suggestions?
BTW: Thank you all, I am learning a LOT.
-
so I think it really should be “don’t use a crap BMS.”
You don't have to look far back to see where this is coming from. Not even a decade ago, the idea of a "good BMS" was mostly theoretical for most people: the "crap BMS" was the standard. It was impossible for a product design engineer to know the difference, and it still is quite difficult, even for li-ion experts.
Of the commercial li-ion BMS chips (mostly flooded onto the market by TI), most were broken by design in one way or another. There are still many traps, especially from TI. I have evaluated several totally broken-by-design li-ion ICs from TI. Probably some are very good, but how can I trust them? Looking at TI's li-ion product datasheets, it's utterly clear these parts are designed by inexperienced engineers in a hurry, not using any kind of high-reliability design practices, to keep up with management's desire to have the greatest lithium-ion management IC portfolio (50 new chips out every year, even though the problem field is static); not in co-operation with battery chemistry experts, reliability experts or safety experts.
Of the commercial li-ion BMS modules, most were outright dangerous, or would at least kill the battery. Of course, they use said ICs, or are home-brew solutions.
Then you have the more expensive hi-tech li-ion management ICs which come with a ton of certifications and paperwork. I looked at one ADI part, for example, with a separate redundant analog-comparator-based backup system for the cell voltage measurements, protecting against ADC/MCU failures. Sounds great! Then, looking at the typical application in the appnote, both shutdown signal paths were brought out to one microcontroller which "decides" about the system shutdown. Every idiot on the planet can laugh at this ridiculousness. Yet it's a really well-certified, high-reliability product for automotive.
Now, what comes to li-ion safety, people tend to make these completely wrong assumptions:
* Li-ion is super unsafe; it instantly blows up or catches fire if you overcharge it the slightest
* BMS is the magic sauce which prevents all those fires which would happen otherwise.
In reality:
* A poor-quality no-name Alibaba li-ion cell may be fairly dangerous; otherwise, modern, typical li-ion cells implement cell- and chemistry-level protections such as shutdown separators (which shut down the ionic transfer before the onset of thermal runaway), PTC resettable fuses, and CIDs (interrupters based on cell pressure). I have tried my best to blow up a modern Samsung/Sony/Panasonic/LG cell. I haven't succeeded. I have applied 30V, 10A for 8 hours to a 4.2V cell. The cell becomes a self-regulating hysteretic heater that switches on and off at about 120 degC. The plastic wrapping changes its tint. Worst case, I got a tiny leak of electrolyte. These are, BTW, all tests specified by the manufacturers; they guarantee their cells pass them. External heat above the thermal-runaway onset temperature, around 150 degC, would be the best bet, since the modern shutdown separators seem to work so well that even a nail penetration doesn't set these things on fire anymore.
Warning: this is not to say you should abuse the cells in any way. They can catch fire because the inherent chemistry is still very volatile (it's just got safety layers built around it), and abusing them will of course increase the risk, as it puts more burden on safety features that are not "normally" needed. It's just that fires don't usually happen, because the safety features are well designed.
* A cell-level balancing BMS, when working properly, may extend the usable pack lifetime slightly. A non-working, destructive BMS often does not cause safety problems, because (see the previous point) the cells are usually OK with the abuse a faulty BMS gives them. So a faulty BMS just destroys the pack, but very seldom in a dramatic or dangerous way; this is 100% thanks to the li-ion chemists and engineers!
So, all of this is why I rolled my own BMS, not using any BMS IC, for EVs and energy storage (scalable for large packs, up to 250S, up to around 100-200kWh; something similar to the modules available back then, such as Elithion, just a simplified, minimalistic design). I did semi-commercialize it (producing a few full units, selling at a very low price to selected customers). Even though I tried to address all the issues I had seen in failing BMSes, I still don't have complete trust in my own, either.
Why don't I trust my design? Because I have seen very experienced professionals fail to provide the reliability. I have balancing! It can get stuck on, and although that doesn't cause fire (I did the thermal analysis for stuck-on balancing), it could still overdischarge a cell. I have a timeout feature (some TI products don't: they kill your battery automatically if your I2C communication gets stuck once in your product's lifetime!). But still.
BMS design is non-trivial. The first issue you face is defining what you need to do, the basic functionality and specifications. This is hard due to information overflow. The focus is easily lost to difficult-to-implement but unimportant features, such as:
* high balancing currents
* redistributive balancing
* complex algorithms based not on actual battery science but on "gut feeling" instead. For example, a lot of effort has gone into AC measurements and "state-of-health analysis" bullshit instead of just implementing the reliable LVC and HVC basics.
Then, when it comes to implementation:
* A BMS needs permanent connections to dozens of cell taps, possibly over a decade. Powering any electronics continuously, so that it's guaranteed to work within tight specs for a decade is non-trivial.
* Leakage currents need to be kept to minimal levels, even in corner cases. Any kind of latch-up or increased leakage - such as an MCU or an ASIC FSM exiting sleep and staying awake, or getting stuck in a measurement loop - is a catastrophe which automatically kills the battery.
* If high balancing currents are involved in a dissipative way, the power dissipation analysis cannot be done on a "typical" basis assuming a short duty cycle. A balancing resistor can get stuck on for several reasons. I remember at least one reported conversion-EV fire that was likely caused by an overheating balancing resistor.
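To put rough numbers on the stuck-on case, here's a quick sketch - the cell voltage, resistor value and duty cycle are all made-up illustration values, not from any specific design:

```python
# Worst-case dissipation of a dissipative balancing resistor that is
# stuck on permanently. All numbers are assumed for illustration only.

V_CELL_MAX = 4.2    # V, fully charged li-ion cell
R_BALANCE = 8.2     # ohm, hypothetical resistor (~0.5 A balancing current)

# "Typical" analysis assumes balancing runs at a short duty cycle:
DUTY_CYCLE = 0.05
p_typical = DUTY_CYCLE * V_CELL_MAX**2 / R_BALANCE   # ~0.11 W average

# Stuck-on failure: continuous dissipation at full cell voltage.
p_stuck = V_CELL_MAX**2 / R_BALANCE                  # ~2.15 W, 20x the above

print(f"average power at 5% duty: {p_typical:.2f} W")
print(f"stuck-on power          : {p_stuck:.2f} W")
```

The thermal design (resistor rating, PCB copper, enclosure) has to survive the stuck-on figure indefinitely; designing only for the duty-cycled average is exactly the trap described above.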
To make the point: compare the MTBF for an "MCU or FSM gets stuck in a wrong state" event between general consumer (or even industrial or automotive!) electronics and a BMS.
The general device:
* Runs maybe a few hours a day
* Has a typical lifetime of probably five years; after that, no one's interested.
* Resets every now and then, when power cycled
* May get stuck without causing much trouble: the user just resets it and we are good to go again!
Think of any gadget, even well designed. Imagine that every time you need to boot it for any reason, it would die instead. That's the level of reliability we need to think about when designing a cell-level BMS, especially with balancing.
On a li-ion BMS, a full reset cannot be done. It's permanently powered for a decade, often in a difficult (read: automotive) environment. What's worse, such a failure event almost guarantees the self-destruction of the pack! If any part of the IC / MCU gets stuck, a reset cannot be done, power cycling cannot be done - it's all hardwired inside the enclosure. It looks dead, is nonresponsive, and you just wait for it to kill the cells with it.
The MTBF for a similar event should be at least 4-5 orders of magnitude longer than for general consumer electronics. And because no typical BMS designer - not even at TI, they are making low-cost product series - has access to some super-high-reliability NASA space technology, what do you think? It's all based on the lowest-cost commercial off-the-shelf processes. Especially at TI.
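For what it's worth, here's the back-of-the-envelope arithmetic behind that orders-of-magnitude claim. Every figure (hours of use, tolerable event counts) is an assumption, just to show the shape of the comparison:

```python
# Rough comparison of tolerable "stuck state" failure rates.
# All figures are assumed for illustration.
import math

gadget_on_hours = 3 * 365 * 5        # consumer gadget: ~3 h/day for 5 years
bms_on_hours = 24 * 365 * 10         # BMS: powered 24/7 for 10 years

# A gadget can tolerate, say, ~10 stuck events per lifetime (the user just
# resets it); a BMS maybe 0.001 (one destroyed pack per thousand units),
# since a single event kills the pack.
gadget_rate = 10 / gadget_on_hours   # tolerable events per powered hour
bms_rate = 0.001 / bms_on_hours

orders = math.log10(gadget_rate / bms_rate)
print(f"required MTBF improvement: ~10^{orders:.1f}")   # ~10^5.2
```

With these assumed numbers it lands right around five orders of magnitude - tweak them and you stay in the same 4-5 decade ballpark.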
Which is why the only way I could imagine increasing the reliability was to simplify: reduce part count and complexity. But this isn't going to make the 4-5 orders of magnitude difference required. So I'm not too happy with my design. One cell module has actually failed in the field (but, luckily, didn't kill the cell). I suspect ESD during manufacturing or calibration.
When I was hired at the university I mentioned above, they actually had this super-expensive conversion EV with a super-over-engineered BMS (with redistributive balancing and everything). The total BOM count for the 80-cell system was over 5000 components, tens of meters of wire, around 300 connectors... And the problem was, the BMS was in some peculiar "state" - it had been for half a year at that point - didn't let the car boot, didn't enable the charger. When they finally let us start dismantling the car, about 30% of the cells were already completely dead, discharged to 0V. Now, the primary task a BMS exists for is to isolate the battery pack when any cell hits LVC or HVC, completely preventing overdischarge. This BMS failed at exactly its primary purpose. To the designers, it clearly wasn't primary.
We got to see another similar EV case a year later, with a very different kind of BMS, and it had the exact same story: the BMS consisted of about ten 8-cell modules (so again around 80 cells total), and out of these ten modules, two were latched up, in a way that they had killed all 8 connected cells through the balancing taps. So, 16 cells were completely dead, 64 cells completely OK.
--
Really, the essence of a cell-level BMS is 90% cargo cult. You just design one in, now you have a BMS! And you don't need to think about it. Convenient, huh!
It's likely to be some random product from TI's massive lineup, most likely broken-by-design. I have chosen a TI li-ion management part twice in my life in my own designs (I don't understand why - I usually learn from the mistakes of others), regretted it twice, and redesigned it twice. It has wrong or non-optimal setpoints, it somehow does let the cells overdischarge, and then does "preconditioning" at an order of magnitude higher current (I have seen C/20 in a TI product) than what's considered safe and instructed by battery manufacturers (typically C/100).
It claims to have "overvoltage protection", but when you look at the block diagram, you'll notice it was connected in the wrong place in a hurry by the designers, so it has no chance of protecting anything. It's next to impossible to find all these traps beforehand.
Oh well, I had a pack charge to 4.63V/cell on a very simplistic prototype (with no secondary, redundant protection) by a TI part which was fully functional and could have just shut down the MOSFET it was actively driving "on" - from charger input to the battery - despite the internally unconnected "overvoltage" signal screaming its lungs out. But it worked perfectly! My lazily done linear voltage-based battery gauge showed 127% and everybody was happy, because the extra charge was there and really extended the runtime ;D. No fire. Thanks Samsung for a great product. No thanks go to TI.
The typical TI BMS mostly works by luck; it might kill a small percentage of products after some years, but not too many - and no one even thinks about the cause. They think: "oh, the batteries are just unreliable, thank God we have a BMS, without it we would be seeing higher failure rates, I'm sure!" Then, in some cases, the BMS causes some theoretically dangerous error, such as letting the cells overcharge - but, thanks to the robustness of the modern cells, nothing dramatic happens. So everybody's happy, products are reliable enough for most people, and the BMS checkbox is ticked!
But you at least need voltage protection on a per-cell basis and
This is a very interesting myth I see recurring - if I had a dollar for every time....
It has some basis in reality, but it certainly isn't the general, "hard" rule people think it is.
Skipping cell-level voltage measurements for cutoffs is not something limited to cheap Aliexpress specials.
Since you seem to know this, could you explain to me why Robert Bosch does not need "per-cell basis" voltage protection? I mean, they are fairly reputable I think?
Could you explain the huge number of li-ion charge management ICs, that are supposed to be used with two cells connected in series, without center tap monitoring, as a single 7.2V nominal cell?
There is no debate about connecting 2 cells in series without cell-level cutoffs. That's the absolute industry standard practice, has been since day 1. The only debate about this is by hobbyists, on forums.
Above 2 cells, or with large packs, things start to get more complicated, as always, getting us into "it depends" territory. But Robert Bosch isn't the only reputable manufacturer who has no issues going up to 6s without cell-level anything. There are some industry design traditions: laptops always have cell-level monitoring and balancing (and "killed by BMS" packs were fairly typical at one point about a decade ago - I have dissected several) - power tools often don't (and I haven't seen a single incident of an imbalanced pack, or a cell at 0V).
current protection for the pack overall,
Safety-wise, the most reliable current protection is a passive fuse, properly sized (not massively oversized). Don't ever forget this backup in case your MOSFET switches fail short. Remember to look at fuse DC voltage ratings and DC breaking currents.
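A quick sanity check of that fuse sizing, with made-up numbers for a hypothetical 10s e-bike pack (none of these values come from the actual battery discussed in this thread):

```python
# Fuse selection sanity check - all values are assumed for illustration.

I_MAX_CONTINUOUS = 15.0   # A, max continuous load current
V_PACK_MAX = 42.0         # V, pack voltage at full charge
R_SHORT_LOOP = 0.05       # ohm, pack + wiring resistance in a hard short

# Rate the fuse with some margin above continuous current, but not
# massively oversized - e.g. around 1.4x, then pick a standard value:
fuse_rating = 1.4 * I_MAX_CONTINUOUS        # 21 A -> pick e.g. a 20 A fuse

# Worst-case prospective short-circuit current the fuse must BREAK:
i_prospective = V_PACK_MAX / R_SHORT_LOOP   # ~840 A

print(f"fuse rating       : ~{fuse_rating:.0f} A")
print(f"prospective short : {i_prospective:.0f} A")
```

The point of the last figure: the fuse's DC voltage rating and DC breaking capacity must cover that prospective current at the full pack voltage, or the fuse can sustain an arc instead of clearing the fault.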
Sorry for getting so verbose. Hope this all helps someone.
-
Well I just bought a cheap ass 10S Li Ion chinesium BMS as stated before. How can I test whether it is crap or not?
There is no way to test that reliably, easily. Really useful testing takes a lot of time and resources.
OTOH, quick basic testing gives you an initial go/no-go verdict. For this,
Any other suggestions?
I suggest using it in the actual environment, in the bike, and carrying a multimeter with you to do some verification. Especially look at the signals when charging. You can always set the charger CV setpoint slightly higher than normal during controlled testing, to see if the BMS can turn the charger off on an HVC event. Then bring the CV setting back, so now you have a layer of safety - if the BMS fails, the charger still limits the voltage like it would in a non-BMS system. The only way the BMS can now fuck this up is by a stuck-on balancer causing a massive imbalance.
The effect of LVC you can verify while driving the pack empty. At some point, it just suddenly stops. If it's getting sluggish instead, take your multimeter and verify the cells are still in balance, even at the bottom. With modern high-quality cells, capacity matching is good enough that cells tend to stay fairly well in balance at both top and bottom. And if the cells are in balance at the bottom, the risk of low voltage damaging them while driving is small, even if the cell-level LVC fails to work - the bike will get so sluggish that you just stop. So watch the balance when near empty; that will give you the warning sign of which cells to watch more closely.
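That multimeter check can be reduced to a tiny helper - a sketch only, with an assumed spread threshold (the 0.10 V figure is my guess, not from any standard):

```python
# Flag cells that sag well below the rest when the pack is nearly empty.
# The 0.10 V max_spread threshold is an assumption for illustration.

def check_bottom_balance(cell_voltages, max_spread=0.10):
    """Return indices of cells sitting well below the pack median."""
    ordered = sorted(cell_voltages)
    median = ordered[len(ordered) // 2]
    return [i for i, v in enumerate(cell_voltages) if median - v > max_spread]

# 10s pack measured near LVC: cell index 3 is sagging and deserves watching.
pack = [3.31, 3.29, 3.30, 3.05, 3.28, 3.32, 3.30, 3.29, 3.31, 3.30]
print(check_bottom_balance(pack))   # -> [3]
```

A well-matched pack returns an empty list; any index that shows up is a cell to probe more often as the pack approaches empty.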
But at some point, it's just OK to follow the cargo cult and not worry too much. If the cells are of proper quality, the risks are fairly small.
The Chinese BMS design is probably not any worse, and is likely to be much better than a typical over-engineered Western amazingly novel super BMS.
-
Siwastaja
Once I started I could not stop reading all of it, I was addicted to it. I thank you so much.
You not only inspired me to go deeper into my chinese dead horse bike project, but now I also feel like one day I will/can/should develop a Li Ion energy storage pack for my future home.
There are so many questions and comments I want to write now, but I prefer to digest it first, do some experiments and report back.
I will ask Dave to create a Battery section in the forum. Probably better not to have it; most of the know-how should be in the "Renewable Energy" section:
Renewable Energy: Solar, wind, thermal, nuclear, energy storage, Electric Vehicles etc.
Oops, I should have posted this there....
-
@Siwastaja: I'm wondering: are any of your BMS designs used in large production runs of battery packs? From what I've seen the TI chips are pretty commonly used as an analog front-end for a microcontroller. If the chips from TI were as bad as you describe, nobody would use them.
-
@Siwastaja: I'm wondering: are any of your BMS designs used in large production runs of battery packs? From what I've seen the TI chips are pretty commonly used as an analog front-end for a microcontroller. If the chips from TI were as bad as you describe, nobody would use them.
Oh, the classic "million flies" argument :)! OK, thanks for your interest really, let me explain my point of view a bit more.
I never dared to commercialize the "full home brew" BMS in large scale, as I explained above, so I kept it small scale. In total, I had 500 and later another 500 cell modules made. I still have some. I want to be careful and avoid making the same mistakes others did. And while my design reduced the complexity and BOM cost by about 3-10x compared to the competitors, this doesn't prove ultimate reliability.
So, if my manifesto says that the existing li-ion products are too unreliable, I would need to prove A) that they really are; and B) that mine are considerably better - and proving or disproving reliability takes a lot of time and resources! I would also be fighting against industry giants, with hard-to-really-prove claims. Not proving every claim may be OK in forum discussions (no one expects it from themselves, only from the others who disagree), but for really professional work on safety and reliability, marketing speech like this wouldn't cut it.
And I'm not actually a reliability nut at all. I'm mostly just fine with the reliability of the TI chip. I'm also fine with an occasional lithium-ion fire happening, if the thing is sorted out properly, the root cause investigated and fixed.
What I'm not fine with is all the hypocrisy and lack of information around these solutions - let alone the absolute classic: the false sense of security for those who are more concerned about safety than I am. What I'm also not fine with is the BMS cargo cult, which manifests itself very well in the way BMS manufacturers need to document, in big red letters, that yes, you indeed need to connect your BMS shutdown signals to actually shut something down. (For example, see the very first words of the Elithion manual: http://lithiumate.elithion.com/php/index.php (http://lithiumate.elithion.com/php/index.php) ) This usage pattern, which really exists, is not too far from using a wooden BMS unit with bamboo wires that looks real. Even on these forums, I have heard this argument multiple times:
"BMS is absolutely critical for safety".
Then, when questioned, they go on:
"Geez, just put in some BMS chip and you are done!"
They put quite a lot of trust in that random, unnamed chip, and quite a lot of trust in the application engineer deploying it properly, for such a "safety critical" thing!
BTW, AFAIK - and correct me if I'm wrong - most of the "TI chips" and the like do not have any kind of safety qualifications done (they probably wouldn't pass), and if your product is in a relevant group so that you need the safety paperwork done, you as the system integrator take the full risk, and you need to understand the big picture completely, all the small details included.
Only the most expensive chips very few people end up using have the paperwork done (which doesn't prove much, as explained earlier).
. . .
So I went on to make case-by-case custom solutions where I need li-ion batteries as part of something larger, because I like large problems, often done in a "good enough" manner. I always try to prefer a single-cell solution, or at most 2s-3s (without cell-level management), step up the voltage if possible, and "manage" the cell with the MCU that sits there anyway. If necessary, I employ completely redundant analog backups with comparators and voltage references. Completely redundant also means redundant power switch transistors. Makes me sleep better. Yeah, I don't trust TI - too amateurish, too many products in too-quick cycles, with too little attention to the details that matter. They also tend to need some babysitting.
Yes, I have needed to make a small-scale recall and bodge fix some devices because of a compounded failure of:
1) Me failing the SOA calculations in a hurry, so that one level of security is wiped out by a shorted MOSFET,
2) TI failing their li-ion chip design, probably in a hurry as well, so that the protection which should exist doesn't, and it happily overcharges the cell by actively driving current into it while its internal, unconnected signal says to stop.
This didn't have cell-level measurement - which wouldn't have helped, since the controller was screaming "overvoltage" anyway and the cells were in perfect balance; the signal was just ignored. It's the #1 n00b mistake as explained in the Elithion manual, but when it's inside a TI chip, you can't do anything but add your own external protection. Did I say something about false sense of security? Or babysitting these chips?
--
As for your "nobody would use them" argument - it's as I explained before: the chips I'm complaining about are probably "good enough" so that they don't fail all the time, or in much bigger numbers than the devices they are designed into would fail anyway, and when they fail, it's not catastrophic, because the cells handle the abuse - the li-ion R&D business (think Sony, Panasonic, Samsung, etc.) has been really responsible. It's high tech I'd be proud of, but which gets little credit - only the extremely rare li-ion "explosions" are reported in the media!
Now, the funny thing is, when a BMS fails, the whole product fails. When it's (hopefully) examined for forensics by the designer - who is not a li-ion BMS expert, but "just designed in a nice & easy chip" - the first thing they do is measure the cell voltages. A cell is at 0V - so it must have been a bad cell! Probably a wrong conclusion. In fact, it was a failed BMS, which killed the cell. The cause gets classified wrong. I'm 99% positive that TI BMS chips fail at 10x or 100x the rate of failures originating from the cell. Even 1000x wouldn't surprise me - cell failures are such rare incidents. If cells randomly shorted out or started leaking, massively paralleled packs of 18650s, widely used not only by Tesla, would show massive numbers of problems.
The safety is built into the cells. If the cells were really as dangerous as people expect, the game would be totally different, and we would have very reliable BMS solutions available out of necessity.
Please understand that TI is just a typical representative example. Others are similar. TI is the most widely used and seems to have the largest portfolio. And I think they can handle my critique >:D. And maybe I'm still a bit angry about my 4.63V cells and their partial responsibility for it, and want to vent?
My point? It isn't to claim these products are totally unreliable crap which automatically blows up your battery pack instantly.
It is that people tend to represent these BMS products as completely irreplaceable and extremely robust, extremely well-designed safety devices. This is really not the case. Many proper battery systems, typically less than 6s, do not use any cell-level BMS at all.
Yes, I have designed in a TI li-ion I2C MCU AFE once - maybe I was eating the wrong mushrooms while making that decision? I didn't even go on testing it after getting the prototypes, but desoldered it from the first one. It has an I2C interface through which you turn balancing resistors on, then use another message to turn them off. There is no timeout. If that second message never comes through - for example, due to an I2C bus lockup, which is surprisingly common - you probably have a bricked product. Yeah, implement all the watchdogs. Just to be sure, inject the "typical" I2C reset pattern of clock transitions. How do you test that it is effective in an actual lockup condition? Any such event during the whole product lifetime could brick it for good. The MTBF must be extremely high to have no field returns. Really doing this properly on the MCU is a lot of work. So it's not a properly integrated solution - you need considerable work to babysit it. If they had just implemented an utterly trivial timeout counter... But no, every integrated solution seems to have at least one such showstopper-class deficiency.
This missing feature alone prevents me from using said product. These products are designed for low-quality cheap crap. The performance is similar to what I'd expect from a $0.02 Shenzhen special available at LCSC, meant to be used in a toy. What I don't like is the mental image we are given.
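That "utterly trivial timeout counter" could look like this - a host-side Python model of the idea, purely illustrative (no real chip exposes this API):

```python
# Fail-safe balancing channel: it turns itself OFF after a fixed number of
# internal timer ticks unless the host keeps refreshing the command, so an
# I2C bus lockup costs a little balancing time instead of draining a cell.
# Purely illustrative model - not any real chip's register interface.

class BalancerChannel:
    TIMEOUT_TICKS = 10              # e.g. 10 s at a 1 s tick (assumed)

    def __init__(self):
        self.on = False
        self.ticks_left = 0

    def command_on(self):           # host command; repeat it to stay on
        self.on = True
        self.ticks_left = self.TIMEOUT_TICKS

    def tick(self):                 # internal timer, independent of the bus
        if self.on:
            self.ticks_left -= 1
            if self.ticks_left <= 0:
                self.on = False     # fail-safe: default state is OFF

ch = BalancerChannel()
ch.command_on()
for _ in range(10):                 # host goes silent (bus lockup)...
    ch.tick()
print(ch.on)                        # -> False: the channel shut itself off
```

The key design choice is that the default state is off and staying on requires active, repeated effort from the host - the exact inverse of the latch-on behavior complained about above.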
But dead products happen! Some percentage of field returns is usually accepted in consumer electronics. In industrial, having a maintenance agreement and having things break down every now and then (but not too often) can be a good milking cow; it depends on how you play the game.
I'm also not proposing any simple answer on "what you should use". Sadly, I'd like to, but I don't know. I don't have a really reliable, good BMS system in mind. Mine is probably not such; nor is TI's. I revisited the COTS BMS chip offerings this year, spending a full working week (60 hours) going through all the available BMS products for a 6s battery. None of them made me feel confident.
I expect and hope the safety of battery technology itself to continue increasing, so that the BMS becomes even less critical. It's already non-critical enough that the market doesn't need to produce really safe and reliable BMS products.
People want easy answers and quick solutions. Application notes and application field engineers exist for this purpose. This time, I can't give any easy answer - I haven't found it myself. Sorry for that.
TLDR: People just use a TI chip since that's what they are recommended to do. People don't know the failure modes of the batteries. People are happy if a typical product lasts a typical average lifetime, and returns are within typical levels.
-
Sorry for being a bit obnoxious but I'm getting a bit wary when people say 'the whole world has it the wrong way around'. You do make good points. One of the reasons my customer produces battery packs in the NL is because there are almost no decent Li-ion packs available on the market. Most of what comes out of China is too crappy (unreliable) for high-end commercial / industrial use. They do quite a bit of research themselves as well.
When it comes to safety, the BMSes I have been dealing with (for high-volume production battery packs) ultimately have a fuse which interrupts the circuit when the battery gets shorted and the BMS doesn't cut the power. Other than that, there is a big reliance on software and cleverly designed hardware avoiding single points of failure (can't explain further due to confidentiality) where it comes to protection. Battery packs are also required to pass CE and UN38.3 testing. The UN38.3 safety testing is particularly interesting because it involves testing multiple packs for vibration, temperature, charging and discharging. If a pack survives the UN38.3 torture testing, you have some degree of assurance the battery pack won't fall apart or catch fire by itself.
-
Sorry for being a bit obnoxious but I'm getting a bit wary when people say 'the whole world has it the wrong way around'.
I agree; OTOH I'm not saying "the whole world has it the wrong way around". It's not that black and white. I'd say the whole world has it in a non-optimal way, and people design things without thinking - which I think you'd agree with. It's not limited to li-ion battery management. People do the separate analog and digital ground planes as taught by application notes, without giving it a thought. Now, there is one difference: no one is saying that your thing catches fire and explodes from using the wrong kind of ground plane. That kind of simplified argumentation does get used when discussing li-ion management.
there are almost no decent Li-ion packs available on the market. Most of what comes out of China is too crappy (unreliable) for high-end commercial / industrial use.
We noticed exactly the same. So we spun up our own cell testing system to verify/prove/disprove assumptions and, foremost, to evaluate real-life cycle life in different conditions. Trying to find the "optimum way", I built this in 2014-2015: https://www.youtube.com/watch?v=tpNfA9SBEi4 (https://www.youtube.com/watch?v=tpNfA9SBEi4) . It's still in use; I now use it for building battery packs for mobile robots in a related startup I design for... Using direct copper interfaces instead of nickel strip is both a cost and a performance optimization.
When it comes to safety the BMSes I have been dealing with (for high volume production battery packs) ultimately have a fuse which interrupts the circuit when the battery gets shorted and the BMS doesn't cut the power.
Yes. The bog standard fuse. Rated correctly, it's the most important protection. (Note that some BMS's have some peculiar limitations when adding fuses or contactors mid-pack. Always be sure to understand these limitations.)
-
I built this in 2014-2015
That's impressive, how many cells in parallel? Wow...
-
That's impressive, how many cells in parallel? Wow...
That one in the video, IIRC it was a 7s46p pack, a 3.2kWh module. Twelve such modules total 39 kWh.
-
Let's see if I can do some homework.
7s46p= 322 cells
Nominal voltage per module = 3,6V*46 = 165,6V
Charged voltage per module = 4,2V*46 = 193,3V
Low voltage per module = 3,3V*46 = 151,8V
Nominal Capacity per module = 2,78 * 7 = 19,46 Ah
3,2Kwh / 322 = about 10Wh/cell
10Wh / 3,6V = about 2,78 Ah
EDIT wrong of course
Massive, I would be scared to touch that beast... I am managing that 540Wh bike battery with careful, triple-checked, slow baby steps; I can't imagine the safety precautions you are dealing with on those monsters!
Respect.
Good bathroom reading:
https://batteryuniversity.com/learn/article/safety_concerns_with_li_ion (https://batteryuniversity.com/learn/article/safety_concerns_with_li_ion)
PS: Starting to design an energy storage system for my home in my spare time... let's see. If I don't burn that bike, I won't burn my home either. :popcorn:
-
7s46p= 322 cells
Nominal voltage per module = 3,6V*46 = 165,6V
Nominal Capacity per module = 2,78 * 7 = 19,46 Ah
Other way around. 46 in parallel, 7 in series. So only 25.2V nominal. But put 12 of such modules in series, and it's 300V.
3,2Kwh / 322 = about 10W/cell
10W / 3,6V = about 2,78 Ah
Wh, not W :).
Yeah, Samsung INR18650-29E, nominal capacity 2.85Ah, nominal energy 10.3Wh. This was and still is quite a good cell. At 220Wh/kg, it's not the newest or most energy-dense hi-tech anymore, but it's still widely available for a good price. The best commercial cells are around 280Wh/kg now. The trick is how to construct the cells into modules with as little weight, size and especially price overhead as possible. The BMS is a part of this equation. Over a decade ago, it was OK for it to be expensive, when the cells were even more expensive. This has changed. Cells for a 30kWh pack, enough for a small passenger EV, would only cost around $4000-5000 for a manufacturer, I guesstimate. For Tesla, probably even less! A traditionally over-designed BMS would easily cost some $300-500 on top of that. With EV battery system design really being a cost-limited process now, this money is taken directly away from putting more cells in (say, about 2kWh extra for the $400 saved by using a minimalistic BMS). That would translate to 10km of extra range for an EV - something that isn't meaningless!
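Putting those cell figures back into the homework above - a quick check using the nominal per-cell numbers quoted here, so treat the results as approximate:

```python
# 7s46p module arithmetic with Samsung INR18650-29E nominal figures
# (3.6 V, ~2.85 Ah, ~10.3 Wh per cell, as quoted above).

S, P = 7, 46
cells = S * P                      # 322 cells per module
v_nominal = S * 3.6                # series cells add voltage   -> 25.2 V
ah_nominal = P * 2.85              # parallel cells add capacity -> ~131 Ah
e_module = cells * 10.3 / 1000     # ~3.3 kWh per module
e_pack = 12 * e_module             # ~39.8 kWh for twelve modules

print(cells, round(v_nominal, 1), round(ah_nominal, 1))
print(round(e_module, 2), "kWh/module,", round(e_pack, 1), "kWh total")
```

Which lands close to the 3.2 kWh / 39 kWh mentioned earlier; the small difference is just rounding of the per-cell figures.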
The same is true for grid/home/solar/etc. energy storage, except it's even more cost-driven, because weight and size matter less than in a car. And every customer wants as much capacity as possible, limited only or mostly by their budget. The question always asked is: how much storage do I get with $100, or with $10k, or with $1M?
Cell prices being what they are, that means you optimize on labor (and material) to assemble them into packs, and this includes the BMS cost (BOM, installation, and service). While, of course, not compromising safety or other pack-level features the customer needs.
Now, with the raw cells costing a bit over $200/kWh for us small players, Chinese packs that use these cells cost around $400/kWh, and Western packs with the exact same cells start from about $1000/kWh, if not more - and even then are quite the special snowflakes. Or as nctnico said: there are almost no decent li-ion packs available on the market!
Massive, I would be scared to touch that beast... I am managing that 540Wh bike battery with careful, triple-checked, slow baby steps; I can't imagine the safety precautions you are dealing with on those monsters!
Welding the first side is easy-peasy since you won't short anything... But after turning the module around, you need to be very careful when aligning and installing the copper sheets, and have some measures against the welder robot hitting between two adjacent copper plates by mistake, shorting them out. I use plastic covers so that one segment is accessible at once. Adds a bit of manual work to the welding, however, moving the plastic cover 6 times during the process. After this, the modules use fixed plastic covers so that only the ends are exposed. A good physical design doesn't let the copper sheet ends touch each other. In these modules, they were at the opposite ends, and opposite sides as well. Cardboard covers tightly taped over the copper contacts until the modules are ready to be installed are still paramount; someone could still put the module on a metallic table!
7s is enough voltage to cause some serious arcing. Accidentally shorting 1 cell is not such a big deal (though chances are you are doing some hidden damage to the cell with this act; if nothing else, if the short lasts long enough to trip the PTC in the cell, it will have a slightly higher DC resistance afterwards).
I do have a large tub full of water* to push the full battery into in case of a fire (and a big garage door right next to it, so it can be pushed outside). Such an incident is very unlikely, but being prepared is still a must. We have tried to induce something like that at small scale, with no success. In addition to basic overcharging (30V) and short-circuit tests, we have tested (both on purpose and by accident) what happens when you accidentally apply approx. 100 times too much TIG welding energy to the cell, so it zaps through the cell case, blasting a massive hole directly into the insides and charring the electrode roll in the process. No fire, no explosion, no smoke. Li-ion cells have quite advanced separator materials (and possibly other tricks) nowadays. (But don't count on these features; if you do, you lose one important safety layer! Remember that the underlying chemistry is very volatile and dangerous, and the advanced safety mechanisms are not supposed to be put "on test" in normal operation / unintentionally. They still need to produce these safety features as cheaply as possible!)
*) Note that despite some "Battery University"-style myths, water - a lot of it, quickly, everywhere, submerged - is the preferred way to deal with li-ion fires. There is no metallic lithium present, nor anything else that would react violently with water. Water has the greatest cooling effect, and has the best chance of removing energy quickly enough to prevent or stop thermal runaway.
Working with large packs really requires discipline, a lack of disturbances (put your phone away, don't have chatty coworkers, don't "show off" your lab), and short working stretches (preferably no more than half an hour of mechanical, repetitive work at once). Add insulating tape or heat-shrink tubing to all metallic tools. Use insulating temporary covers everywhere - a standard bath towel is great if you have random shapes with a lot of exposed contacts everywhere. When done, be a bit too OCD and perfectionist about adding different types of tape, glue, plastic covers etc.
I often add pieces of both Kapton and fiberglass tape on 18650 positive ends so that any sharp edge won't cut through the thin insulation the 18650 cell comes with - especially at the edges and corners where the copper edge resides. Such "tape donuts" are used by some laptop battery pack manufacturers (but not all!) as well, since incidents of shorted cells have been reported. IMHO this is an issue which should be solved by the cell manufacturers, but it isn't. Oh well...
-
I built this in 2014-2015: https://www.youtube.com/watch?v=tpNfA9SBEi4 (https://www.youtube.com/watch?v=tpNfA9SBEi4) . It's still in use, I now use it for building battery packs for mobile robots in a related startup I now design for... Not using nickel strip but direct copper interfaces is both cost and performance optimization.
It is not clear from the video, but I don't see a slot in the copper plate to force the weld current through the top of the battery. Nickel strips always have these. When welding batteries you are doing a series spot weld, which makes two welds in one go. Without the slot, the current can go directly from one electrode of the welding machine to the other, making the welds vary in quality. I'm also not sure whether welding copper to nickel is a very good idea, the metals being dissimilar.
-
It is not clear from the video, but I don't see a slot in the copper plate to force the weld current through the top of the battery. Nickel strips always have these. When welding batteries you are doing a series spot weld, which makes two welds in one go. Without the slot, the current can go directly from one electrode of the welding machine to the other, making the welds vary in quality. I'm also not sure whether welding copper to nickel is a very good idea, the metals being dissimilar.
This is correct. I recently worked on a 10S li-ion battery pack for an e-mobility application, and these are some of the standard design practices. I worked with the two big German firms in this field.
Regarding the BMS, the TI BQ763x is widely used and easy to implement. A main fuse is always included for the worst-case scenario.
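To make the BMS role concrete, here is a minimal sketch of the per-cell protection logic a monitor IC in this class implements in hardware: block charging on any overvoltage cell, block discharging on any undervoltage cell or overcurrent. The thresholds below are illustrative round numbers I'm assuming for a generic 10S pack, not datasheet values.

```python
# Sketch of per-cell protection logic as implemented by battery monitor ICs.
# Thresholds are illustrative assumptions, NOT values from any datasheet.

OV_LIMIT_V = 4.25   # per-cell overvoltage cutoff (assumed)
UV_LIMIT_V = 2.80   # per-cell undervoltage cutoff (assumed)
OC_LIMIT_A = 40.0   # discharge overcurrent cutoff (assumed)

def pack_ok(cell_voltages, pack_current_a):
    """Return (charge_fet_on, discharge_fet_on) for one monitoring cycle."""
    charge_ok = all(v < OV_LIMIT_V for v in cell_voltages)
    discharge_ok = (all(v > UV_LIMIT_V for v in cell_voltages)
                    and pack_current_a < OC_LIMIT_A)
    return charge_ok, discharge_ok

# Example: one cell sagging below the UV limit blocks discharge, not charge.
cells = [3.7] * 9 + [2.5]
print(pack_ok(cells, 10.0))  # (True, False)
```

The main fuse then covers the worst case the FETs can't, e.g. a shorted FET or a dead BMS.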
-
Thanks Siwastaja for correcting me. I must have smoked something more dangerous than a lithium cell before doing the math.
BTW, what do you think about LiFePO4 batteries for energy storage? I was doing some googling on that topic and that chemistry kept popping up...
-
So I proceeded to take the unknown BMS out. First I removed the 30A fuse screws...
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=546635;image)
then I removed the two thermocouples from the potting
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=546641;image)
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=546647;image)
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=546653;image)
and voilà, done, after disconnecting the cell-voltage monitor connector...
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=546659;image)
I tried to resurrect the BMS with a power cycle...
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=546665;image)
no no no, that guy did not respond at all. Probably a meh design... :horse:
This is what I am talking about, sizes old vs new... you can just imagine the weight difference.
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=546671;image)
Oh wow, I can choose between the stock 30A glass fuse (F30A) or that automotive (?) flat 30A 58V one. I will probably go with the glass one... everything is already there...
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=546677;image)
Now the hard part; it's a mechanical problem. The battery is almost 8mm too high... I will think about a nice solution.
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=546683;image)
This is the battery bike attachment..
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=546698;image)
-
It is not clear from the video, but I don't see a slot in the copper plate to force the weld current through the top of the battery. Nickel strips always have these. When welding batteries you are doing a series spot weld, which makes two welds in one go. Without the slot, the current can go directly from one electrode of the welding machine to the other, making the welds vary in quality. I'm also not sure whether welding copper to nickel is a very good idea, the metals being dissimilar.
Again, you are making the wrong initial assumptions :). You clearly see a system which, according to your knowledge about spot welding, is impossible, but instead of questioning your world view, you question the system, even though you clearly can see it doing the impossible.
What you know as the only "spot welder" is actually just a resistive spot welder, which is based on heating up the material-to-be-welded by running current through it by using two contacts.
Now back to your initial assumption:
This is not a resistive spot welder at all!
If your assumption was right, you would be spot on - it wouldn't work like that! Resistive spot welding would be nearly impossible to achieve with copper strip anyway - nickel is used exactly because it has more electrical resistance and lower thermal conduction, allowing enough local resistive heating relative to the heat flowing away, making the weld possible. Still, slots are very important to shape the current.
I wanted to question all that complexity - needing a more expensive, inferior material (nickel), and requiring it to be die-cut to exact shapes. So, looking at the market, I saw you can weld copper directly to the cells with some very specific high-end tools. This is called a "micro TIG" or "micro arc" welder in the industry. Sunstone makes some battery CNC solutions capable of this:
https://sunstonewelders.com/product/250i2-ev-cnc-battery-welding-system/
Yes, actual battery manufacturers are using these. If you are only seeing nickel being "traditionally" spot welded to the battery, you are not looking around properly. Modern ways to do the same without nickel and without cutouts are normal business practice as of now, but it's still considered novel high-tech. Manufacturers show off their direct-copper welding robots at every possible e-mobility expo. My system is just a crappy DIY attempt at the same, but it works nevertheless.
The cheapest quotes I was able to get started from $30k (for non-CNC tools), which is why I made my own.
It's basically a well tuned el-cheapo TIG with a custom pulsing add-on. The electrode has no physical contact to the workpiece; it's about a millimeter above. HF strike starts the arc, and it's the arc making a local melted pool of the copper, welding it to the cell directly. This was nontrivial to get right, the electrode geometry is important; if it goes wrong, it just burns through the copper, leaving a hole and not touching the cell much. I have a custom ceramic piece I made on a lathe from machineable ceramic. It needs to hold the tungsten electrode correctly centered and offset, and implement small channels for the Argon gas. Spring-loaded copper ring around the head is for "grounding" (actually +) the workpiece.
This makes one spot weld (not two) at once, the size is about 1.5mm. They are really strong, I quality control it by making dummy welds to scrap cells and tearing them apart. While strong as is, to prevent rotational forces from tearing that weld spot (and to add redundancy), my software does multiple spots. Higher welding energy is used on the plus side of the cell, since it has thicker tab material. Nowadays I do three small-energy dots per minus side, two higher-energy dots per plus side. In the beginning, I did more spots for added redundancy.
I haven't actually measured the welding time, except by looking at video still frames, it's below 30ms. If you touch the resulting weld of three dots a second after welding, you'll feel that the cell and copper stay cold.
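The per-cell schedule described above (three lower-energy dots on the minus side, two higher-energy dots on the thicker plus tab) can be sketched as a tiny control table. The pulse energies and duration below are my illustrative assumptions; the post doesn't give the actual setpoints.

```python
# Sketch of the per-cell micro-TIG weld schedule described in the post:
# several short pulses per terminal, higher energy on the thicker plus tab.
# Energy values and pulse length are illustrative assumptions, not measurements.

WELD_SCHEDULE = {
    "minus": {"spots": 3, "energy_j": 5.0},  # thin can bottom: lower energy (assumed)
    "plus":  {"spots": 2, "energy_j": 8.0},  # thicker tab: higher energy (assumed)
}
PULSE_MS = 25  # each pulse observed to be below ~30 ms

def weld_cell(fire_pulse):
    """Fire the scheduled pulses for one cell; fire_pulse(side, energy_j) does the HW I/O."""
    for side, params in WELD_SCHEDULE.items():
        for _ in range(params["spots"]):
            fire_pulse(side, params["energy_j"])

# Dry run: log pulses instead of driving real hardware.
log = []
weld_cell(lambda side, e: log.append((side, e)))
print(len(log))  # 5 pulses total: 3 on minus + 2 on plus
```

Separating the schedule from the hardware I/O also makes it easy to add redundancy (more dots) in software, as described.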
Ultrasonic welding is another approach to the same end result - spot welding copper directly to the cell.
Internally, the cell has aluminium and copper electrode sheets. These are ultrasonically welded to the cell case, internally. I'd expect the same metal choices to work outside the cells as well. And I don't see anyone reporting metallurgical issues with direct copper usage using the commercial micro-arc or ultrasonic technologies.
What I do think about is that I'd still like to have some cutouts - not for making the weld possible, but for strain relief due to possibly dissimilar thermal expansion within the system when finally installed in varying conditions.
-
BTW, what do you think about LiFePO4 batteries for energy storage? I was doing some googling on that topic and that chemistry kept popping up...
Since you asked - IMHO, LFP serves quite a niche purpose. It's best for replacing a 12V lead-acid system, as it's the only li-ion chemistry that happens to have a compatible voltage range, so that a 4s pack can almost directly replace a 12V lead pack. For other li-ion chemistries, 3½ cells would be required :). The voltage curve is flatter, too - really close to lead acid. It's case-by-case whether it needs some tweak in the product's voltage setpoints, or some management, but chances are it is a complete drop-in, even without a BMS, and people do that anyway.
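The "4s drop-in" claim is easy to check numerically. A minimal sketch (the per-cell voltage limits below are typical round numbers I'm assuming, not figures from the post - check your datasheet):

```python
# Why a 4s LFP pack nearly matches a 12 V lead-acid window, while other
# li-ion chemistries would need a fractional "3.5s" pack.
# Per-cell limits are typical round-number assumptions.

CHEMISTRIES = {
    "LFP":     (2.5, 3.65),   # (min, max) volts per cell
    "NCA/NMC": (3.0, 4.20),
}
LEAD_ACID_12V = (10.5, 14.4)  # typical discharge floor / charge voltage

for name, (vmin, vmax) in CHEMISTRIES.items():
    # How many series cells best fit the lead-acid charge voltage?
    n = round(LEAD_ACID_12V[1] / vmax)
    print(f"{name}: {n}s pack spans {n * vmin:.1f}-{n * vmax:.1f} V "
          f"vs lead-acid {LEAD_ACID_12V[0]}-{LEAD_ACID_12V[1]} V")
```

The 4s LFP window (10.0-14.6 V) brackets the lead-acid window almost exactly, whereas a 3s NCA/NMC pack tops out well below the 14.4 V charge voltage.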
LFP was a really big thing 15 years ago, academically, and for small battery startups, which picked it up for manufacturability reasons AFAIK. Possibly for some patent reasons as well, I'm not sure.
LFP was touted as the next big thing. Back then, the only commercial li-ion chemistry was LCO (originally commercialized by Sony), with an energy density of around 160Wh/kg. LFP was supposedly 130Wh/kg, a fair compromise, with supposedly radically lower manufacturing costs (due to the abundance of iron and phosphorus, compared to the price of cobalt!), and supposedly radically better safety.
Safety-wise, the thermal runaway onset temperature of LFP is somewhere well over 300 degC (IIRC) compared to the frightening ~150 degC of LCO, and even then, the thermal energy release in the runaway is more benign. But this ends up as a fallacy; batteries are complete products, and safety is the sum of all the chemical and physical design within the cell. Even with the imminent danger of the LCO cathode material, the bare cathode chemistry is just one thing. The LFP cathode is still not completely safe, and can run away thermally, producing a nasty amount of energy release. The electrolyte is still the same flammable liquid, which shoots out of the cell burning due to the internal pressure, because no one has come up with a better non-flammable electrolyte.
And then, it comes to R&D and engineering:
Because the LCO, and upcoming LMO, NCA, NMC, were more marketable with their higher energy density, they received the actual R&D budget, going through safety improvements, such as advanced shutdown separators (meaning the plastic or ceramic layer in the cell melts "shut" and works as an insulator, stopping the ion transfer, in overheating parts of the cell), or physical cell design things, such as embedded fuses, current-interrupting rupture valves...
As a result, what do we have now, available on the real market, for putting into real products?
Very few LFP products. I have seen numerous safety tests where A123 and K2 cells failed more dramatically than our contemporary high-energy-density cells. Why? They are made by small players, with limited resources for safety engineering. They believe they have chosen a "safe" chemistry, giving the classic "false sense of security". Or they are some Chinese players (like the absolute classic Winston Chung) not too interested in actual safety. That happens in China - not saying there isn't good engineering there as well. And maybe they are right; maybe we are too fixated on safety here?
And, in the end, what do we have? We have:
* LFP cells are still at around 130Wh/kg (many Chinese LFP plastic boxes are actually below 100Wh/kg), and cost around $300-$400/kWh,
while the rest of the world has moved forward, and so,
* modern NCA/NMC cells are at around 250Wh/kg, and cost around $200-$300/kWh!
Especially for anything mobile, this energy density difference is baffling. It makes a real difference whether an EV can drive 150km or 300km on a single charge! Or whether you can play your "sponsored by NSA" Candy Crush whatever app for a baffling 2 hours straight instead of just one!
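Applying the rough figures above to a concrete pack makes the gap tangible - for example a 540 Wh e-bike pack like the one in this thread. I'm using midpoints of the quoted ranges, cell-level only (no housing, BMS, or wiring):

```python
# The Wh/kg and $/kWh figures quoted above, applied to a 540 Wh e-bike pack.
# Midpoints of the quoted ranges; cell mass/cost only (assumption).

PACK_WH = 540
chemistries = {
    #           Wh/kg, $/kWh
    "LFP":     (130, 350),
    "NCA/NMC": (250, 250),
}

for name, (wh_per_kg, usd_per_kwh) in chemistries.items():
    mass_kg = PACK_WH / wh_per_kg
    cost_usd = PACK_WH / 1000 * usd_per_kwh
    print(f"{name}: {mass_kg:.1f} kg of cells, ~${cost_usd:.0f}")
```

Roughly 4.2 kg of LFP cells versus 2.2 kg of NCA/NMC for the same energy - nearly a factor of two in weight, at a lower cell cost.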
What's left after this are fairly empty promises that an LFP cell lasts for 2000-3000 full cycles while an NCA cell lasts for only 500. The point is moot if the NCA cell can be derated to, say, 70% capacity for the same price (yet much lighter weight), increasing the cycle rating manyfold - and who cares about a 2000-3000 cycle promise if the manufacturer is either on the brink of bankruptcy, or a Shenzhen special? Who knows all the failure modes and aging modes of those cells without extensive testing? I did quite a lot of such testing and found out that:
1) The reason the Samsung NCA cell is "only" specified for 500 cycles is that they actually test them, guarantee them, and add a generous safety margin. They actually tend to last approx. 1000 cycles under their own test conditions,
2) The number one way to increase cycle life in NCA cells is to reduce the charging current near full state of charge - that's where the cycling damage occurs. Want to fast charge at 1C? Do it, but taper it off after 4.0V.
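One way to implement the "1C, but taper after 4.0 V" rule from point 2 is a charge-current setpoint that ramps down linearly from 1C at 4.0 V/cell to a small finishing current at 4.2 V/cell. The linear ramp and the 0.1C floor are my assumptions for illustration, not a quoted algorithm:

```python
# Sketch of a tapered fast-charge current setpoint: full 1C below 4.0 V/cell,
# then a linear ramp down to a finishing-current floor at 4.2 V/cell.
# The linear shape and the 0.1C floor are assumptions.

CAPACITY_AH = 3.0     # example cell, so 1C = 3.0 A
TAPER_START_V = 4.00
V_MAX = 4.20
FINISH_C = 0.1        # finishing current floor in C (assumed)

def charge_current_a(cell_v):
    if cell_v < TAPER_START_V:
        return CAPACITY_AH * 1.0                # full 1C below the knee
    frac = (V_MAX - cell_v) / (V_MAX - TAPER_START_V)
    return CAPACITY_AH * max(FINISH_C, frac)    # linear taper, floored

for v in (3.7, 4.0, 4.1, 4.2):
    print(f"{v:.2f} V -> {charge_current_a(v):.2f} A")
```

A real charger would also clamp the cell voltage at 4.20 V (CV phase) and terminate on the cutoff current; this only shows the current-setpoint shape.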
We saw the same discussion, only on steroids, with lithium titanate cells, which are even worse in energy density and price, but supposedly even better in safety, storage, and cycle life. Charge from zero to full in just 10 minutes! OK, true, but... what do you do with this rating, when for the same money and the same weight you can get an NCA pack which charges an equal amount of energy in the same 10 minutes, but then still has 5 times more capacity left to keep charging into! And who cares about claims of a 10000-cycle lifetime, if you need to do 5 times more cycles because of the minuscule capacity of the pack, and after just 5000 cycles it's already swelling and leaking electrolyte despite the manufacturer's promises (disclaimer: this last part is industry hearsay, but that's better than internet forum hearsay).
So yeah, I don't see a use for LFP anywhere except 12V replacement, but YMMV, and maybe I'm not 100% correct in all this. I'll change my opinion on energy storage as soon as someone starts making LFP cells for a considerably lower price (per kWh) than NCA is right now. That would require an over 50% price drop, however; I'm not holding my breath. OTOH, the market right now is price-fixed by the Chinese manufacturers. Price fixing is not dictated by the laws of physics, so it can suddenly stop, given the right conditions.
-
External heat over the thermal runaway onset temperature - around 150 degC - would be the best bet, since the modern shutdown separators seem to work so well that even a nail penetration is not setting these things on fire anymore.
Warning: this is not to say you should abuse the cells in any way. They can catch fire because the inherent chemistry is still very volatile - it's just got safety layers built around it -, and abusing them will of course increase the risk as it puts more burden on the safety features that are not "normally" needed. It's just that it doesn't tend to usually happen, because the safety features are well designed.
Thanks for your efforts to describe the issues with li-ion BMSes etc., most interesting, but I wonder: when did Li cells get this protection against nail penetration, overheating (shutdown separators), etc.? Just recently? Is there a definitive date, so one can distinguish old from new tech?
-
Thanks for your efforts to describe the issues with li-ion BMSes etc., most interesting, but I wonder: when did Li cells get this protection against nail penetration, overheating (shutdown separators), etc.? Just recently? Is there a definitive date, so one can distinguish old from new tech?
There is no definite date; it has gradually gotten better over the whole three decades of commercial li-ion history. Cells from the Big Guys (Sony, Panasonic, Sanyo; later Samsung SDI, LG Chem...) have always been fairly safe, except for small issues every now and then, which then drive the safety culture forward. I think everybody's had their share of issues. Most of us still remember the Samsung smartphone fires from around a year ago? (It was two separate incidents with two different cells, with two completely different failure modes; only the latter was a Samsung SDI cell, IIRC.) Sony had laptop battery fires in the early 2000s as well.
Using any currently available cells from the big, trusted brands should be OK, even if you find some recent-ish new old stock. AFAIK, there are no big safety breakthroughs in production cells during the last decade, just some gradual improvement. I'm sure shutdown separators, PTC endcaps and CID rupture valves have been standard parts of proper 18650 cells for over a decade.
While the new (current) NCA and NMC cathodes are arguably safer than the "old" LCO (still available commercially, but getting niche), the difference is fairly small. I think it's something like a 10 degC difference in the thermal runaway onset temperature, and some tens of percent less energy in said incident... The cathode safety order would be something like (from worst to best): 4.35V LCO, 4.20V LCO, NCA, NMC, LMO, LFP, LTO.
I tend to trust 18650 cells more than pouch cells.
Nail penetration and crushing can never be guaranteed 100% safe given the current technology, even though they are tested and typically don't result in a complete thermal runaway.
We all would like to see some real safety breakthroughs, for example a cathode that doesn't run away thermally at all, or nonflammable electrolytes, but AFAIK they are not on the horizon. Remember that 99.99% of the battery science "breakthroughs" you read about in the media (traditional or even specialized tech media) are either complete scams, or massive exaggerations trying to lure in investor money. But the real 0.01% of breakthroughs do happen, given enough time and resources.
-
Nail penetration and crushing can never be guaranteed 100% safe given the current technology, even though they are tested and typically don't result in a complete thermal runaway.
We all would like to see some real safety breakthroughs, for example a cathode that doesn't run away thermally at all, or nonflammable electrolytes, but AFAIK they are not on the horizon. Remember that 99.99% of the battery science "breakthroughs" you read about in the media (traditional or even specialized tech media) are either complete scams, or massive exaggerations trying to lure in investor money. But the real 0.01% of breakthroughs do happen, given enough time and resources.
Speaking of material tech: in this fairly recent video, Andreas Hintennach (chemist, MD, PhD; Daimler AG, Mercedes-Benz Group Research, Germany) talks about post-lithium materials such as sulfur and solid-state ceramics etc.
https://youtu.be/pxC2pciLl04?t=19 (https://youtu.be/pxC2pciLl04?t=19)
Mr Goodenough debunks Tesla's battery management system, and talks further about solid-state battery techniques.
https://www.youtube.com/watch?v=kR8CESrigEg (https://www.youtube.com/watch?v=kR8CESrigEg)
-
Mr Goodenough debunks Tesla's battery management system
No he doesn't - debunking means giving some factual representation with solid arguments. His arguments can be easily fact-checked. Does Tesla's pack last only two years, after which it needs a replacement, and is it true that Tesla owners are all happily buying these replacements after just two years? Is Tesla's battery management system as expensive as the cells themselves? I think we all know the answer to both questions. Especially the first one is easy. The second one might fool someone.
I have looked closely at the Model S hi-res battery teardown photos. The BMS is very typical, simple, and doesn't look too expensive. It has fairly low-current dissipative balancing, for example.
Of course, if you count the Tesla's thermal and shock management as "management" as well, which could be fair, then this claim isn't necessarily too far from the truth, so maybe it's politician-class "stretched truth" - something which IMO doesn't belong in science nor engineering. Anyway, Tesla's liquid cooling system has a lot of bent pipe in it and it does definitely cost something. I still doubt it's as much as the cells are. But I'm sure getting rid of it by having more robust battery tech would allow quite some savings!
As we all (should) know, "managing" 100 cells in parallel is basically no different from managing one cell. Managing 7000 cells is not 7000 times more complex or expensive than managing one cell.
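The reason is simply that parallel cells share one voltage node, so the monitoring channel count scales with the series count, not the total cell count. A quick illustration (the 96s74p split is a ballpark figure for a Model S-class pack, an assumption for illustration):

```python
# Parallel cells share one voltage node, so a BMS needs one voltage tap
# per *series* group, not per cell. 96s74p is an illustrative ballpark
# for an EV pack, not an official figure.

def monitor_channels(series_groups, parallel_per_group):
    total_cells = series_groups * parallel_per_group
    return total_cells, series_groups  # channels needed == series groups

cells, channels = monitor_channels(96, 74)
print(f"{cells} cells need only {channels} voltage taps")
```

So the management electronics for ~7000 cells look a lot like the electronics for 96 cells, plus beefier current paths.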
, talks further on solid state battery techniques.
While a big name in industry history, Dr. Goodenough has lately appeared in public several times basically marketing certain technologies or certain companies. While this lends the discussed companies and technologies more credibility, it doesn't guarantee much about how viable they are.
We all want to get rid of that flammable liquid electrolyte.
But I do believe that some novel way will eventually succeed and make an actual breakthrough. Since people close to me (relatives and friends) know about my work around battery tech, I get a lot of "look at this battery breakthrough!" media links, and I'm a bit tired of it all :). So I'm sceptical by default, and even Dr. Goodenough isn't much of an argument in my view; I'm not a believer in authority anyway.
BTW, there is an important distinction between battery engineers and battery system engineers. The former - the chemists who design the cell technology - often don't like the idea of needing a BMS, and eagerly develop cells which won't need one. The latter type, most of the time, tend to be huge believers in BMS systems, and are actually very happy with all their complexity, unreliability, etc., since that's what gives them their jobs. I share the battery scientists' sentiment there, and I would very much like to see a robust and simple cell.
-
No he doesn't - debunking means giving some factual representation with solid arguments. His arguments can be easily fact-checked.
Well, perhaps not debunks but laughs! :)
Does Tesla's pack last only two years, after which it needs a replacement, and is it true that Tesla owners are all happily buying these replacements after just two years? Is Tesla's battery management system as expensive as the cells themselves? I think we all know the answer to both questions. Especially the first one is easy. The second one might fool someone.
Then Mr Goodenough is not goodenough for Tesla.
I have closely looked at the Model S hi res battery teardown photos. The BMS system is very typical, simple, and doesn't look too expensive. It has fairly low-current dissipative balancing, for example.
Is it a safer one, or one of those TI-etc. chip-based less safe ones?
Of course, if you count the Tesla's thermal and shock management as "management" as well, which could be fair, then this claim isn't necessarily too far from the truth. Tesla's liquid cooling system has a lot of pipe in it and it does definitely cost something. I still doubt it's as much as the cells are. But I'm sure getting rid of it by having more robust battery tech would allow quite some savings!
I don't think he specifically talked about crash management, just management in general.
As we all (should) know, "managing" 100 cells in parallel is basically no different to managing one cell. Managing 7000 cells is not 7000 times more complex nor expensive to managing 1 cell.
Do Tesla refurbish, i.e. replace individual defunct cells, or just offer entire new banks/plates when a customer gets battery problems?
While a big name in industry history, Mr. Goodenough has lately appeared in public several times basically marketing certain technologies or certain companies. While this lends the discussed companies and technologies more credibility, it doesn't guarantee much about how viable they are.
Perhaps it's more for him to push the paper and tests they published with the Portuguese scientist who invented that ion-carrying glass or something.
But, I do believe that some novel way will eventually succeed and make an actual breakthrough. People close to me (relatives and friends) knowing my work around battery tech, I get a lot of links of "look at this battery breakthrough!" on media, and I'm a bit tired about all of it :).
Don't despair, R&D is inevitable! Your friends don't know that much about batteries; they just want to rid themselves of oily Arabs, Russians, Americans and Norwegians!
-
IIRC the IC markings were not visible in the teardown photos. You could try to google for them; the photos were widely available a few years back. It's possible it's some COTS management chip, or it might be custom.
Single cells wouldn't be replaced in a paralleled bank of cells, for numerous reasons. It would be very expensive at least, and a slow process, since the swapped cell must be charged/discharged to the same SoC. Packs also tend to be glued, dipped or sprayed with some kind of goo. The takeaway here is that if the cells are not reliable enough to build reliable packs, then just don't use them at all; you won't have a business with failing cells. One serious cell failure within a Tesla pack is probably a showstopper for that pack; they can't remove it from the equation. If it shorts out completely, sure, it'll blow the fuse wire. If it starts to leak a tiny little bit, it's not going to matter. But anything else, and the pack is unusable as a whole. So, if you want returns below 1%, you need cell failures below 1 in 700000. This is very well possible given Panasonic's quality control.
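That 1-in-700000 rule of thumb falls straight out of the math: with roughly 7000 cells per pack and independent cell failures (an idealizing assumption), the pack return rate is about 1-(1-p)^7000 ≈ 7000·p for small p.

```python
# Pack-level failure probability from per-cell failure probability,
# assuming ~7000 cells per pack and independent failures (idealization).

CELLS_PER_PACK = 7000

def pack_failure_rate(p_cell):
    """Probability that at least one cell in the pack fails."""
    return 1 - (1 - p_cell) ** CELLS_PER_PACK

p = 1 / 700_000
print(f"p_cell = {p:.2e} -> pack return rate ~ {pack_failure_rate(p):.3%}")
```

With p_cell = 1/700000 the pack return rate lands just under 1%, matching the rule of thumb.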
The failure rate must be low enough that you can replace the complete pack - maybe one series module in some cases, but probably not with Tesla.
I'm sure they'll closely analyze any failures and keep the process in control with Panasonic (and their new plant).
-
I installed the chinesium parts... well one was good, the other one failed on me. :horse:
First this one:
(https://i.ebayimg.com/images/g/Um0AAOSwUPZbMQAf/s-l1600.jpg)
very pleased with the device; here is the look-up table:
31.76V - 0%
34.04V - 10%
35.29V - 20%
36.27V - 30%
36.88V - 40%
37.50V - 50%
38.10V - 60%
38.82V - 70%
39.49V - 80%
40.30V - 90%
41.40V - 100%
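A gauge like this almost certainly just interpolates between points like these. A minimal sketch of that lookup, using the table above (linear interpolation is my assumption about how the gauge works internally):

```python
# Linear interpolation over the gauge's voltage-to-SoC lookup table above.
# That the gauge interpolates linearly is an assumption.

TABLE = [  # (pack volts, percent) as listed by the gauge
    (31.76, 0), (34.04, 10), (35.29, 20), (36.27, 30), (36.88, 40),
    (37.50, 50), (38.10, 60), (38.82, 70), (39.49, 80), (40.30, 90),
    (41.40, 100),
]

def soc_percent(volts):
    """Estimate state of charge (%) from pack voltage, clamped to 0-100."""
    if volts <= TABLE[0][0]:
        return 0.0
    if volts >= TABLE[-1][0]:
        return 100.0
    for (v0, p0), (v1, p1) in zip(TABLE, TABLE[1:]):
        if v0 <= volts <= v1:
            return p0 + (p1 - p0) * (volts - v0) / (v1 - v0)

print(f"{soc_percent(37.0):.0f}%")  # lands between the 40% and 50% points
```

Note the resting-voltage caveat: under load or right after charging, the pack voltage isn't a good SoC proxy, which is one reason such gauges can mislead.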
there is also a "secret menu" which lets you set the backlight and standby time and cycle between % and V display, plus two other settings which I did not understand (where on earth can I find a PDF on this???).
Anyway, it work(ed)(s) well. Fingers double crossed.
This one:
(https://i.ebayimg.com/images/g/WC0AAOSw6FZbpinv/s-l1600.jpg)
was in the end a complete fail.
First, the heat sink was 0.25mm away from the MOSFET... I had to rework that.
Then I tested it with about a 10A load and it was working OK; charging was OK too. It even cut the output at about 30V in the discharge test.
The self-discharge test was also OK with that BMS:
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=557782;image)
Of course, after the final soldering and closing the battery case, I did the final test. Boom, failed. I got only 17V open-circuit (0V with a 1 to 10A load) on the output with a 41V battery. Tried to disconnect, reconnect. Nothing. Toasted. I asked eBay for a refund; we'll see.
I had just applied a protective spray for electronics in wet environments before the final test:
(https://www.rcbay.de/media/image/product/2964/md/wet-protect-21-400ml-feuchtigkeitsschutz-korrosionsschutz.jpg)
but I don't think it could kill that stupid board. Well, it had failed once before too (same shit), but that time connecting the charger fixed it.
Anyway, there were signs of hand-soldering rework on that board... it did not inspire any confidence from the beginning.
Finally, the B- cable on the cell-sense connector was a straight short to the big B- pad... so why have a cable there? To externally connect what is already connected on the board? :palm:
(PS: I used that cable to connect the battery gauge, so I was happy about that nonsense... >:D)
Troubleshooting an 8€ chinesium board? #FORGETABOUTIT
I got this one:
(https://i.ebayimg.com/images/g/kOwAAOSwODFaVlRl/s-l1600.jpg)
eBay auction: #273019581068
please God send me something which works fine this time.
Anyway, I solved the mechanical problem with some badass hot-air melting; here are some pictures of the Frankenstein battery.
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=557758;image)
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=557764;image)
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=557776;image)
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=557788;image)
-
So I did it!
Not a pretty job, but holy cows, that bike now flies. The battery is so much lighter and just as powerful, if not more so. Li-ion for president!
My mom is scared to ride it >:D ...
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=693750;image)
(https://www.eevblog.com/forum/projects/panasonic-nky467b2-36v-15ah-540wh-reverse-eng-burning-my-fathers-ass/?action=dlattach;attach=693756;image)