When dealing with potentially explosive products it is important to know the characteristics of what you are dealing with. Power management innovator Chris Hale looks at the various methods that manufacturers need to adopt to ensure safe products hit the market.
Fig 1 (above): Pack thermal gradient with forced-air cooling (ref: semanticscholar.org)
Battery technology expos are great places to get insight into up-and-coming technology. Over the years, I have had many great discussions about safety and reliability, with both new and established battery companies.
Unfortunately, some users have naively placed their faith entirely in a cell manufacturer's datasheets. In my view this risks designs that push the limits in more than one area, leading to unpredictable, premature failure. Over the years you get to see a fair share of battery failures: some are catastrophic, some significantly dent customer confidence, and others result in high rates of warranty returns and repair or replacement costs.
A battery management system (BMS), at a basic level, will provide protection against extreme limits being reached— over-temperature, over-voltage, over-discharge, or over-current, for example. These limits are often given on a cell datasheet and generally taken as a good-enough basis to provide adequate protection.
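At its simplest, that protection layer amounts to comparing measurements against fixed thresholds, as in the minimal sketch below. The names and limit values are hypothetical placeholders; real limits must come from the cell datasheet and validation testing.

```python
from dataclasses import dataclass

@dataclass
class CellLimits:
    # Hypothetical limits for illustration only; always take the
    # real values from the cell manufacturer's datasheet.
    v_max: float = 4.35    # over-voltage threshold (V)
    v_min: float = 2.50    # over-discharge threshold (V)
    t_max: float = 60.0    # over-temperature threshold (degC)
    t_min: float = -20.0   # under-temperature threshold (degC)
    i_max: float = 10.0    # over-current threshold (A)

def protection_fault(v: float, t: float, i: float, lim: CellLimits) -> bool:
    """Return True if any hard limit is breached and the pack
    should be disconnected from the load or charger."""
    return (v > lim.v_max or v < lim.v_min or
            t > lim.t_max or t < lim.t_min or
            abs(i) > lim.i_max)
```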
The sceptic in me always takes a view on any performance data and claims from datasheets (cell or battery), and with good reason. I've seen the aftermath of testing within the extremes of a cell's specification, with the cells eventually going into thermal runaway and setting fire to the pack. So herein lie the questions: what can we trust? Can I be sure that the next battery I test, even if it passes, won't still result in some form of failure down the line?
The datasheet says I can charge to 4.35V, operate from -20°C to 60°C, charge in 15 minutes and expect 1,000 cycles. OK so far. But at what temperature can I start my 15-minute charge without going over-temperature before the charge completes? How long can I hold the battery at 4.35V without significant degradation? And what happens if I do this 100 times on the trot? Or 1,000? What is the impact of a 15-minute charge at the lowest permitted temperature, followed by a discharge at the fastest rate in a freezer at -20°C, repeated 100 times? I can be sure I won't be getting the 1,000 cycles. But is it just capacity loss that I need to worry about?
At high current, there may be losses associated with the choice of cell interconnects, such as nickel strips, producing voltage gradients across parallel-connected cells. Thermal gradients may also be generated across the pack due to self-heating and non-uniform heat extraction, all compounded at low temperatures when the cells have a higher internal resistance. With these gradients, both the number and location of temperature sensors and the cell voltage sense points need to be considered.
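To put rough numbers on the interconnect losses, the sketch below estimates the resistance, voltage drop and self-heating of a nickel strip. The strip dimensions and pack current are illustrative assumptions, not values from any particular design.

```python
# Rough estimate of the ohmic loss in a nickel-strip interconnect.
RHO_NICKEL = 6.99e-8   # resistivity of nickel (ohm*m)

def strip_resistance(length_m: float, width_m: float, thick_m: float) -> float:
    """Resistance of a rectangular strip: R = rho * L / A."""
    return RHO_NICKEL * length_m / (width_m * thick_m)

r = strip_resistance(0.05, 0.008, 0.00015)   # assumed 50 x 8 x 0.15 mm strip
i = 20.0                                     # assumed pack current (A)
print(f"R = {r*1e3:.2f} mOhm, drop = {i*r*1e3:.1f} mV, heat = {i**2*r:.2f} W")
```

Even a drop of a few tens of millivolts per strip matters once it differs between parallel cells, since it skews both current sharing and the voltage seen at the sense points.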
The above may well be obvious to most good battery manufacturers, but what if you throw into the mix the impact of a poor weld or an intermittent connection?
Consider that there were more than 130 e-bike and e-scooter battery fires in just over a year in London, UK alone, according to the capital's newspaper, the Evening Standard. Is it a lack of proper testing, poor BMS management, or user abuse? Or is it simply a function of relying on ambitious datasheets?
The need for characterisation
Aside from the issues just described, if you throw into the mix cell balancing, active thermal management and accurate, reliable capacity gauging, the need for more advanced testing becomes apparent. So herein lies the challenge of deciding what testing should be carried out, and the inevitable compromise between time, cost and the value of understanding the battery's performance in the given application. A programme of tests at cell and pack level should include ageing validation, thermal testing, drive-cycle testing, abuse and vibration testing, performance monitoring and data collection. But what are the best methods to achieve this?
Physical testing is ultimately necessary to validate battery performance; however, there are other useful tools to aid in understanding battery performance and reduce the overall testing time. One is hardware-in-the-loop (HIL) simulation, which is great for testing the BMS design and showing how the controller responds in real time to realistic virtual stimuli. Another is simulation and modelling: computer models that predict battery behaviour, from thermal response to ageing and system losses, and provide a better understanding of the mechanical and electrochemical changes through the life of the battery.
Self-learning battery models
The need for characterising the performance of a battery has been discussed from a functional point of view as part of development or product testing. However, there are also methods that require both a set of initial characterisation inputs and real-time inputs, such as Kalman filters. A Kalman filter is an algorithm that uses a series of measurements observed over time, including noise and other inaccuracies, to produce estimates of unknown variables that tend to be more accurate than any method using single measurements alone.
Battery modelling aids this approach, whereby the impulse response under a given load can be represented as a function of RC networks, the basic principle of which is shown in Fig 2. As the order of the model increases, so does its precision: increasing the number of RC networks allows the diffusion of lithium ions to be simulated more accurately.
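As a minimal sketch of this principle, the following simulates the terminal-voltage response of a first-order Thevenin model (one series resistance plus one RC pair) to a current pulse. All parameter values are illustrative assumptions, not figures for any real cell.

```python
import numpy as np

# First-order Thevenin model: OCV source, series resistance R0,
# and one parallel RC pair (R1, C1) representing diffusion behaviour.
R0, R1, C1 = 0.025, 0.015, 2000.0   # ohm, ohm, farad (assumed values)
OCV = 3.7                           # volts, assumed constant over the pulse
dt = 0.1                            # time step (s)

def simulate_pulse(i_load: float, n_steps: int) -> np.ndarray:
    """Terminal voltage for a constant-current pulse then a rest period."""
    v_rc = 0.0                      # voltage across the RC pair
    v_out = []
    for k in range(n_steps):
        i = i_load if k < n_steps // 2 else 0.0   # pulse, then relaxation
        # Discretised RC dynamics: dv/dt = -v/(R1*C1) + i/C1
        v_rc += dt * (-v_rc / (R1 * C1) + i / C1)
        v_out.append(OCV - i * R0 - v_rc)
    return np.array(v_out)

v = simulate_pulse(i_load=5.0, n_steps=1200)   # 60 s pulse, 60 s rest
```

When the load is removed, the instantaneous voltage recovery corresponds to the ohmic (R0) component, while the slow tail is the RC diffusion component, which is exactly the separation visible in a measured pulse response.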

Battery modelling helps to relate the features and principles discussed earlier, and needs to consider additional factors such as ageing and temperature, which is where the Kalman filter approach comes into its own. In broad terms, though, the principles are fairly straightforward, as demonstrated by the pulse response in Figs 3a and 3b, which clearly shows both the cell's ohmic resistance and diffusion resistance components for first and second-order models.
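As a hedged illustration of the filtering idea, the one-state sketch below fuses coulomb counting (the prediction step) with an OCV-derived SoC (the correction step). The OCV curve, capacity and noise values are all assumptions for demonstration; a production implementation would more typically run an extended Kalman filter over the full RC-model states.

```python
import numpy as np

Q_AH = 3.0        # assumed cell capacity (Ah)
PROC_VAR = 1e-6   # process noise: coulomb-counting drift per step (assumed)
MEAS_VAR = 4e-4   # measurement noise: OCV-to-SoC error (assumed)

def soc_from_ocv(v: float) -> float:
    # Crude linear stand-in for a measured OCV-SoC curve (3.0-4.2 V).
    return float(np.clip((v - 3.0) / 1.2, 0.0, 1.0))

def kalman_step(soc, p, i_amps, dt_s, v_rest=None):
    """One predict/correct cycle. p is the estimate variance."""
    # Predict: coulomb counting (discharge current positive).
    soc -= i_amps * dt_s / (Q_AH * 3600.0)
    p += PROC_VAR
    # Correct, but only when a rested OCV reading is available.
    if v_rest is not None:
        k = p / (p + MEAS_VAR)                 # Kalman gain
        soc += k * (soc_from_ocv(v_rest) - soc)
        p *= (1.0 - k)
    return soc, p
```

The appeal is that the gain weights each correction by how trustworthy the voltage reading is relative to the accumulated counting error, so the estimate improves over time rather than drifting.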


Additionally, as described earlier, the relaxation behaviour shown in Fig 4 demonstrates the open-circuit voltage (OCV) before a load pulse is applied and the OCV after the load is removed, for different states of charge. The time taken for the OCV to settle to a steady value is taken as the relaxation period; note the difference between 95% SoC and 5%.
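In practice the relaxation period can be extracted from a logged rest-voltage trace. The sketch below flags the point where the rate of change first drops below a threshold; the threshold value is an illustrative assumption and would need tuning per cell.

```python
import numpy as np

def relaxation_time(t: np.ndarray, v: np.ndarray,
                    dv_dt_limit: float = 1e-5):
    """Estimate the relaxation period from a rest-voltage trace:
    the time until |dV/dt| first falls below dv_dt_limit (V/s).
    Low-SoC rests settle far more slowly than high-SoC ones."""
    dv_dt = np.abs(np.gradient(v, t))
    settled = np.where(dv_dt < dv_dt_limit)[0]
    return t[settled[0]] - t[0] if settled.size else None
```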
Although the method above describes one approach to modelling, there are many being developed, each with varying advantages and disadvantages, as summarised in Table 1.
It is an art, and it shouldn't be taken lightly: lithium-ion is a chemistry with great potential, but one with an inherent dark side and a fair number of challenges to overcome in ensuring safe, reliable and robust battery packs.

Time and cost
Probably the key reason why you hear of reliability or safety failures is simply down to the cost and time it takes to carry out adequate testing. Whether testing time can be reduced through simulation or not, in a nutshell, the key challenges can be summarised as follows:
- Battery testing takes a long time. Generally speaking, chemical reactions can't be rushed, and accelerating testing will likely produce different chemical reactions to those of a battery cycled slowly. Understanding battery behaviour can involve a vast array of test scenarios, i.e. charge/discharge/temperature combinations at various rates and drive cycles. To give an example, accelerated life-cycle tests run by EV manufacturers can still take six months or more to complete.
- Premature battery failures and test errors. With the complexity of battery testing, errors are likely to happen, and tests may need to be re-run, perhaps several times. The number of battery samples and the potential for early failures, whether from damage during testing or defects in 'prototype' builds or test equipment, also need to be considered. Is a sample of three optimal?
- Safety, including validating cell claims. If the batteries are tested to the extremes of their performance, there is naturally going to be a higher risk of critical failure from over-voltage, short circuit or thermal runaway. Putting this into context, I have seen the result of testing high voltage cells within a battery pack, cycled at their maximum operating temperature range, and ultimately going into thermal runaway— destroying the battery, and causing a considerable amount of damage to the test centre. The battery was cycled in accordance with limits defined by the cell manufacturer and had the battery not been tested, critical failures could have occurred in conditions that would have been specified to a customer as outer operating limits.
- Cost of test equipment or outsourcing. Batteries can and do catch fire (especially when testing to extremes), so fire-containment test rooms will need to be built; unless outsourcing, there is also the cost of battery cyclers and environmental test chambers.
- Data analysis. Logging the data is one thing; analysing and processing it demands a huge amount of time and resources. Several months can be used up implementing the outputs from test data.
- Change the cell, and repeat the tests. Even similarly specified cells from different manufacturers (or the next-generation, slightly higher-capacity cell from the same manufacturer) will require a repeat of many of the safety and characterisation tests. Even subtle differences in cell chemistry will invalidate many of the results from previous tests.
Advanced battery management
Of course, characterising a battery’s performance and understanding its failure mechanisms also lends itself to improving advanced features, such as more accurate capacity gauging or state-of-health (SoH) and state-of-failure (SoF) determination.
When it comes to an application that relies heavily on accurate capacity gauging, for example, the level of sophistication required within the BMS naturally goes up. Simple capacity gauging and coulomb counting will only go so far, and will often require recalibration at points throughout the battery's life. Given the number of factors influencing accurate capacity gauging, this is more a process of estimation, and it remains the subject of active research into better methods of refining SoC estimates.
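To make the limitation concrete, here is a minimal coulomb-counting gauge with a recalibration hook at a known full-charge event. The class and method names are hypothetical, and the model is deliberately simplified: sensor offset, temperature and self-discharge are ignored, which is precisely why such an estimate drifts and needs recalibrating.

```python
class CoulombCounter:
    """Minimal coulomb-counting gauge (illustrative sketch only)."""

    def __init__(self, capacity_ah: float, soc0: float = 1.0):
        self.q_full = capacity_ah * 3600.0   # full charge in coulombs
        self.q = soc0 * self.q_full          # current charge estimate

    def update(self, i_amps: float, dt_s: float) -> float:
        # Discharge current positive; integrate charge removed.
        self.q = min(max(self.q - i_amps * dt_s, 0.0), self.q_full)
        return self.q / self.q_full          # SoC estimate, 0..1

    def recalibrate_full(self, measured_capacity_ah: float = None):
        # Called at a known full-charge event (e.g. charge termination).
        if measured_capacity_ah is not None:
            self.q_full = measured_capacity_ah * 3600.0   # track fade
        self.q = self.q_full
```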
The extent of battery characterisation testing you are willing to undertake may dictate the ideal route. Approaches range from sophisticated methods based on fuzzy logic and adaptive filters to dedicated integrated circuits (ICs) such as TI's BQ2700, which uses a combination of open-circuit voltage measurement and coulomb counting; battery management system measurements of voltage, current and temperature; or dedicated electrochemical impedance spectroscopy (EIS) ICs such as Analog Devices' AD5940, not to mention model-based estimation methods.
State of charge basics
Whether using datasheets or characterisation curves derived from empirical testing, voltage-based SoC techniques measure battery voltage and relate that value to a charge level based on a set of profiling curves. Battery voltage measurements can be used both on load (typically through a process of normalisation to an 'off load' profile) and off load as an OCV. Again, the level of accuracy across the range of operating temperatures and loads will depend on how well the fitted curves can be adjusted through the life of the battery.
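A minimal sketch of that normalisation, under the simplifying assumptions of a fixed internal resistance and a single hypothetical OCV-SoC curve:

```python
import numpy as np

# Hypothetical rested OCV-SoC characterisation points at one temperature.
OCV_PTS = np.array([3.20, 3.45, 3.60, 3.70, 3.80, 3.95, 4.10, 4.20])
SOC_PTS = np.array([0.00, 0.05, 0.20, 0.40, 0.60, 0.80, 0.95, 1.00])

def soc_from_loaded_voltage(v_terminal: float, i_amps: float,
                            r_internal: float = 0.030) -> float:
    """Normalise an on-load reading to the off-load profile by adding
    back the estimated IR drop, then interpolate the SoC. A fixed
    internal resistance is a simplification: it really varies with
    temperature, SoC and age."""
    v_ocv_est = v_terminal + i_amps * r_internal   # discharge current positive
    return float(np.interp(v_ocv_est, OCV_PTS, SOC_PTS))
```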
The typical variations in voltage with respect to capacity whilst on load are demonstrated in Figs 5 and 6.


What needs to be borne in mind is that both profiles will change over the life of the battery as the cells age, not to mention the additional complications arising from pack construction and rates of cell self-heating, especially if there are thermal gradients across the pack, typically associated with high-rate discharges or the choice of thermal management. Forced-air cooling of cells through the pack is a good example of such gradients (and of why the number and placement of temperature sensors matters).

On top of the impact of discharge temperature on available capacity, the OCV is also affected by temperature, introducing further potential error, as shown in Fig 7. As the OCV is often used as the starting point for coulomb counting (the process of updating the remaining capacity with respect to the current being drawn), voltage-reading accuracy and proper temperature compensation also need to be taken into consideration.
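Temperature compensation typically means holding one OCV curve per characterisation temperature and blending between them. The sketch below shows the idea; the table values are entirely illustrative assumptions, not data for any real cell.

```python
import numpy as np

# One SoC grid and one hypothetical OCV curve per characterisation
# temperature (degC); rows of OCV_TABLE correspond to TEMPS.
SOC_GRID = np.linspace(0.0, 1.0, 6)
TEMPS = np.array([-20.0, 0.0, 25.0, 45.0])
OCV_TABLE = np.array([
    [3.15, 3.48, 3.62, 3.73, 3.90, 4.16],   # -20 degC
    [3.18, 3.50, 3.64, 3.75, 3.92, 4.18],   #   0 degC
    [3.20, 3.52, 3.66, 3.77, 3.94, 4.20],   #  25 degC
    [3.21, 3.53, 3.67, 3.78, 3.95, 4.20],   #  45 degC
])

def soc_from_ocv_temp(v_ocv: float, temp_c: float) -> float:
    """Blend the two nearest temperature curves, then invert the
    blended OCV curve to obtain the SoC estimate."""
    ocv_curve = np.array([np.interp(temp_c, TEMPS, OCV_TABLE[:, j])
                          for j in range(len(SOC_GRID))])
    return float(np.interp(v_ocv, ocv_curve, SOC_GRID))
```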
But this is still not all. As a battery is charged or discharged, there is a process of charge diffusion: a non-uniform distribution of ions in the electrolyte produced during current flow, and a related time taken for the battery chemistry to equilibrate. As highlighted previously in Fig 4, the battery therefore requires a period of settling, off load, in order to equilibrate.
Off-the-shelf – IC methods
So, DIY or not DIY? As demonstrated above, voltage- and current-based SoC methods programmed into a sophisticated BMS will require a fair amount of characterisation. There are other options, such as EIS ICs, although these have limitations in a real-world implementation. More appropriately, some IC manufacturers provide SoC measurements based on their own proprietary methods, often built around models of lithium-ion cell performance. These devices, such as Maxim's DS2780, base SoC estimates on a variety of cell and circuit characteristics: current, voltage, temperature, charge termination point, age estimators and learn functions that adapt to changing parameters as the battery is cycled.

The DS2780, for example, stores cell characteristics using a piece-wise linear model comprising three curves (Fig 8). The curves represent characterisation points across a range of temperatures, as the temperature of the cells plays a significant part in available capacity.
Full: The DS2780 reconstructs the full line from a characteristics table to estimate the full capacity of the battery at a given temperature.
Active Empty: Constitutes the end-of-discharge (EoD) point based on temperature for given loads. The loads are typically the higher-level operating currents.
Standby Empty: Constitutes the EoD point for standby current loads at given temperatures, i.e. the point at which the battery can no longer support minimal activity functions. In this application, the cell model is constructed with all points normalised to a fully charged state at +40°C (typically the temperature at which the maximum capacity can be achieved from the cell).
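The sketch below illustrates the general idea of reconstructing such curves from stored breakpoints. It is emphatically not Maxim's actual algorithm, register map or data: the structure, names and values are hypothetical, shown only to make the piece-wise linear concept concrete.

```python
import numpy as np

# Hypothetical capacity-vs-temperature breakpoints for three curves,
# normalised to full charge at +40 degC (values are illustrative).
TEMPS = np.array([-20.0, 0.0, 20.0, 40.0])       # breakpoint temps (degC)
CURVES = {
    "full":          np.array([0.80, 0.90, 0.97, 1.00]),
    "active_empty":  np.array([0.30, 0.15, 0.08, 0.05]),
    "standby_empty": np.array([0.10, 0.05, 0.02, 0.01]),
}

def model_point(curve: str, temp_c: float) -> float:
    """Linearly interpolate a stored curve at the given temperature."""
    return float(np.interp(temp_c, TEMPS, CURVES[curve]))

# Usable capacity fraction between Full and Active Empty at -10 degC:
usable = model_point("full", -10.0) - model_point("active_empty", -10.0)
```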
Whichever method is used, knowledge of the battery's performance, and of how it changes over its life, is crucial to maintaining reliability. Only by putting a pack through its paces will its nuances be understood.