In this article on battery maintenance, Rick Tressler, president and CEO of Rick Tressler LLC, introduces single-load-step, constant-current capacity testing and explores what it is, how it works and why you should be doing it.
Stationary battery systems require periodic maintenance. Measurement of operating parameters such as cell voltage, temperature, internal ohmic values and connection resistance is needed on a regular basis. There is no avoiding it.
But can these readings provide insight into a quantitative value as it relates to battery capacity? The answer is no. If a user wants to know the capacity of a battery, there is only one thing to do: perform a capacity test. There is no substitute.
What is capacity testing? Also known as load testing, or discharge testing, capacity testing is a dynamic test whereby a simulated load (in amperes or watts) is imposed on the battery system for a specified time. The discharge continues to a defined end-of-discharge (EOD) voltage, referencing a measured battery temperature taken at the start of the test. This type of test allows for the determination of actual capacity. Further, it permits the comparison of the rated capacity to the test result.
When conducted at recommended intervals, trends can be established that indicate capacity loss as the battery ages. When performed such that all cells/units are monitored, problem cells can be identified in real time and replaced by service personnel as needed post-test. Testing is normally performed with the subject battery isolated from the DC system and a temporary battery connected to support the load during both the test and recharge cycles.
The Institute of Electrical and Electronics Engineers (IEEE) recommended practices relating to capacity testing of lead-acid and nickel-cadmium batteries are the same documents that provide information relating to maintenance. Capacity testing is part of a successful and complete maintenance program, so it stands to reason that testing practices are covered in these documents. Of course, the battery’s manufacturer can be consulted about testing processes and regimes; most support the IEEE recommended practices, guides and standards.
Testing intervals
A new battery, except for nickel-cadmium types, should be tested as soon as practical after installation and commissioning. This type of test is known as an acceptance test. The battery technologies and their applicable IEEE testing and maintenance standards are:
- Vented lead-acid (VLA): IEEE 450-2020
- Valve-regulated lead-acid (VRLA): IEEE 1188-2005 (in revision at the time of publication)
- Vented nickel-cadmium: IEEE 1106-2015
The test establishes the initial capacity to which future test results are compared. Acceptance tests for nickel-cadmium batteries will be discussed later in the article.
Acceptance tests can be conducted at the battery manufacturing facility prior to shipment, after installation, or both. Note that factory acceptance testing usually carries a fee that is based on the number of cells to be tested and test duration.
With vented lead-acid (VLA) batteries, a follow-up test should be undertaken about two years after the acceptance test. This and all future tests are known as performance tests. These tests should be performed at intervals not to exceed 25% of the expected service life of the battery for the application. In other words, if a battery is expected to provide 20 years of service, the battery should be tested every five years starting after the follow-up test until degradation is observed.
Battery degradation is defined in three ways:
- when the battery has reached 85% of design life
- when capacity is below 90% of rated
- when capacity has decreased more than 10% since the last test.
When any of these are observed, it is recommended to then test at one-year intervals until capacity reaches 80% of rated. Once the battery has reached this capacity it is time to replace it.
As for valve-regulated lead-acid (VRLA) batteries, the testing schedule is similar; however, the follow-up performance test at two years is not required. After an acceptance test, all future test intervals should not exceed 25% of the expected service life of the battery for the application, or two years, whichever is less, until degradation is observed. The definition of degradation explained for VLA batteries also applies to VRLA batteries. Once degradation is observed, annual testing should be conducted until capacity reaches 80% of rated, at which point the battery should be replaced.
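For readers who prefer to see the interval logic laid out explicitly, the following is a minimal sketch in Python. The function names and parameters are illustrative only, not part of any IEEE standard or vendor tool; it simply encodes the degradation criteria and test intervals described above.

```python
def degradation_observed(pct_design_life, pct_capacity, pct_capacity_prev_test):
    """Degradation as described for VLA/VRLA batteries: 85% of design life
    reached, capacity below 90% of rated, or a drop of more than 10 percentage
    points since the last test."""
    return (pct_design_life >= 85
            or pct_capacity < 90
            or (pct_capacity_prev_test - pct_capacity) > 10)

def next_test_interval_years(technology, expected_service_life_years, degraded):
    """Recommended maximum interval to the next performance test, in years."""
    if degraded:
        return 1  # annual testing once degradation is observed
    interval = 0.25 * expected_service_life_years  # not to exceed 25% of service life
    if technology == "VRLA":
        interval = min(interval, 2)  # VRLA: 25% of service life or two years, whichever is less
    return interval

# A 20-year VLA battery with no degradation observed -> test every five years
print(next_test_interval_years("VLA", 20, degraded=False))   # 5.0
# A VRLA battery with the same design life -> test at least every two years
print(next_test_interval_years("VRLA", 20, degraded=False))  # 2
```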
Lastly, there are vented nickel-cadmium batteries. The acceptance test schedule is a bit different. Rather than testing immediately after installation, the test should be performed after the battery has been on float for at least twelve weeks without discharging. This is done to eliminate the “float effect”, a phenomenon unique to nickel-cadmium batteries that affects capacity test results.
The first performance test should be performed within two years of being placed into service. Additional performance tests should be conducted at five-year intervals until excessive capacity loss is observed. In nickel-cadmium batteries, excessive capacity loss is defined as more than an average of 1.5% per year of rated capacity from that measured during the previous performance test. Annual performance tests should be made on any battery that shows signs of excessive capacity loss.
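As a hedged illustration of the 1.5%-per-year criterion, the short Python sketch below (hypothetical function name, not from any standard or tool) annualises the capacity loss between two performance tests and flags it when it exceeds the threshold.

```python
def nicad_excessive_loss(prev_capacity_pct, current_capacity_pct, years_between_tests):
    """Excessive capacity loss for vented nickel-cadmium batteries: more than an
    average of 1.5% of rated capacity per year since the previous performance test."""
    avg_loss_per_year = (prev_capacity_pct - current_capacity_pct) / years_between_tests
    return avg_loss_per_year > 1.5

# Example: capacity fell from 98% to 89% of rated over five years
# -> average loss of 1.8% per year, which is excessive
print(nicad_excessive_loss(98, 89, 5))  # True
```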
Performing capacity testing
Fundamentally, there are two ways to accomplish capacity testing: use a testing service company or do it in-house.
Be forewarned: if users do not have knowledgeable staff and/or the equipment to perform such tests, the work should be referred to a service company with trained personnel. Equipment acquisition is costly and requires considerable training. Test personnel must possess in-depth knowledge of batteries. That said, an end-user should understand the fundamentals, which this article sets out.
Today, most testing is done using microprocessor-based systems that include all necessary hardware and software to perform a variety of tests consistent with industry standards. Some systems use a standalone Windows PC to run the test, while others integrate the control system directly into the test-set hardware.
Depending on the size of the battery, the choice in systems can range from several models to just one or two. It is truly a niche market. Manually operated load banks and manual data collection (read pen and paper) are not practical. Automated data collection is the standard. The newest testing equipment even offers wireless sensors that report to the data acquisition and logging hardware. So, what is typically included in the hardware?
A major component of a testing system is the load bank, consisting of a series of precision high-power resistors that draw current from the battery as the simulated load. These are rated for continuous duty and are air-cooled. Fig 1 illustrates such a load bank. For large battery systems, paralleled load banks may be required. It is noteworthy that manufacturers of smaller systems can offer their product in a single hardware configuration for easy portability, at the expense of reduced power-handling capacity.
The data collection system can be considered home base for all battery connections. It takes in all the analogue signals, such as cell voltage, overall battery voltage, optional inter-tier and other cable voltage drops, and discharge current, and sends this data to an embedded microprocessor or the system computer, depending on the product, for conversion and logging during a test.
When the data collection unit is employed, the system PC usually connects via a serial interface, more commonly a USB connection (see Fig 2 for an example of a stand-alone data collection unit). The rear of the unit is where all connections are made except for the load cables, which are directly made between the battery and the load bank. The front has but a single AC power indicator LED (Fig 2a is an example of a self-contained data collection unit with controls, display, load bank and wireless remote sensor feature).
For most systems in use today, cell voltage sensing leads are required to monitor each cell or multi-cell unit, such as 12-volt VRLA units. Inter-tier/rack/aisle cable voltage drop sense leads may also be available. Sense leads may require extension kits for large systems on long racks. Each cell is monitored throughout the test to detect a potentially failing battery down to the cell level. There is a lot of wire in these systems. Alligator clips are usually used for cell attachment (see Fig 3). Overall battery voltage and discharge current must also be sensed, reported and logged.
If a Windows-based PC is used to control the test and log data, the user may elect to purchase one with the system or use their own. It is important that the test-system manufacturer’s specifications for the computer are met to ensure proper operation. In these systems, proprietary software must be installed to communicate with the data collection unit. Test data files must be backed up, and firmware and software must be kept up to date.
The test specification
It is essential to perform a valid battery test. Guidance from the battery manufacturer is a good place to start.
As an example, I’ll use an electric utility substation battery to establish a test specification. The battery is a 200 ampere-hour (Ah), 60-cell VLA system with a nominal specific gravity of 1.215, designed to support the station for eight hours. The end-of-discharge voltage for the DC system is 105 volts, which corresponds to an average of 1.75 volts per cell (VPC).
Battery performance is referenced to 25°C (77°F). All that is needed now is the discharge current. It is noteworthy that even though the battery is sized for eight hours, it is not necessary to test it for eight hours. Such a test removes considerable energy from the battery. This results in a longer than desired recharge time. Common and accepted industry practice is to test a battery in this application for a shorter time using the corresponding discharge current (rate) for that time.
A test run at the manufacturer’s published three-hour rate to 1.75 VPC (105 V) is typical. Depending on the user or service company, the test may range from one to five hours, with three hours being the norm. There are two main benefits to this approach. First, the test time is reduced, so less time is required on-site for a typical crew of two, a considerable cost saving. Second, fewer ampere-hours are removed from the battery.
While testing in accordance with IEEE recommended standards does not damage a battery or reduce its service life, a battery tested at the three-hour rate will provide good insight into how it will perform for a longer time at a lower rate. A battery in this application should not be tested at less than the one-hour rate. Such testing is reserved for batteries used in high rate, short duration applications such as UPS in data centres, where batteries are sized for as little as 5 to 15 minutes.
How is the discharge rate determined? The battery manufacturer data sheet is required (See Table 1). This shows the available performance for the subject battery. Testing at the eight-hour rate removes 200Ah while testing at the three-hour rate removes just 154.2Ah with a test time saving of five hours. Ampere hours are simply amperes multiplied by hours, referencing a specific time, discharge rate, final voltage, and temperature.
To translate the test specification, the battery will be tested at the published three-hour rate of 51.4A to an EOD voltage of 105V measured at the battery’s main positive and negative terminals.
In a perfect world, the battery would be at 25°C, but that is a rare occurrence. When the test time reaches three hours, the battery voltage would be 105V for a capacity of 100%. Test results frequently show a capacity of more than 100%, which is why it is important that the test be run to the EOD voltage rather than terminated at 100% capacity. Actual capacity needs to be demonstrated to allow for trending of capacity loss throughout service life.
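To make the arithmetic behind this test specification concrete, here is a minimal Python sketch using the figures quoted above (60 cells, 1.75 VPC, the published 51.4A three-hour rate and the 200Ah eight-hour rating). The variable names are illustrative only.

```python
# Test specification for the example substation battery (values from the article)
cells = 60
eod_volts_per_cell = 1.75                          # average end-of-discharge voltage per cell
system_eod_voltage = cells * eod_volts_per_cell    # 105 V at the battery terminals

three_hour_rate_amps = 51.4                        # manufacturer's published 3-hour rate to 1.75 VPC
test_hours = 3
ah_removed_3h = three_hour_rate_amps * test_hours  # ampere-hours = amperes x hours
ah_removed_8h = 200                                # rated 8-hour capacity

print(system_eod_voltage)            # 105.0
print(round(ah_removed_3h, 1))       # 154.2, versus 200 Ah for a full 8-hour test
```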
Avoiding testing mistakes
A battery should not be tested unless its general condition is known. Don’t leave a battery in worse condition than it was found. For batteries that have not been under a routine maintenance program, a testing company will usually require a detailed inspection to be performed and the battery brought into compliance before a test can be conducted. Much of the time, a battery is tested shortly after being isolated from the system it supports. This means a temporary battery that meets the requirements of the system must be connected before the battery to be tested can be disconnected. It is unwise to leave the DC system unprotected without a battery.
After all necessary test equipment connections have been made to the now-isolated battery and the test specification has been verified and loaded into the test system, the test begins. The system runs in automatic mode unless manual control or other intervention is required. It monitors the test and logs data to the system computer or integral microprocessor. The data usually includes overall battery voltage, individual cell voltages, inter-tier/rack/aisle cable voltage drops, discharge current and elapsed time.
Initially, cell voltages will drop quickly, then settle into a slower, uniform decline as the discharge progresses. The test technician should monitor all dynamics of the test in case one or more cell voltages drop precipitously. If this occurs early in a test, usually prior to reaching 90-95% of the programmed test time, one, and only one, cell may be bypassed.
The test is paused, the load is removed, and the cell is disconnected and replaced with suitable bypass cables. The test is resumed, and a new EOD voltage is calculated; for our example battery, now 59 cells, that becomes 103.25 volts. If a second cell fails before reaching 90% of the programmed test time, the test is over. If the test has reached 90% or more of the programmed test time, it is recommended to continue the test to the original end-of-discharge voltage. The bypassing of a cell does not apply to nickel-cadmium batteries; IEEE 1106 should be consulted for details.
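The recalculation simply keeps the same average volts per cell across the remaining cells. A short Python sketch (hypothetical helper name, for illustration only) shows the arithmetic:

```python
def recalculated_eod_voltage(original_eod_voltage, cells_remaining, total_cells):
    """New end-of-discharge voltage after bypassing cells, keeping the same
    average volts per cell across the remaining cells."""
    avg_vpc = original_eod_voltage / total_cells
    return cells_remaining * avg_vpc

# Example battery: 60 cells, 105 V EOD; one cell bypassed leaves 59 cells
print(recalculated_eod_voltage(105, 59, 60))  # 103.25
```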
Using the example battery, a test is normally terminated when the overall battery voltage reaches the programmed specification EOD voltage. Do not terminate a test just because the battery reaches 100% capacity and is still above the EOD voltage. This is done too often and invalidates the test.
Another mistake is to terminate the test when the first cell reaches the average EOD of 1.75 volts. The EOD voltage is the ‘battery’ voltage, not the individual cell voltage. This is another mistake that invalidates the test. The actual cell voltages vary slightly from the average. On a per-cell level, the value is an average, not an absolute.
Capacity calculation methods
With a completed test now in the proverbial rear-view mirror, demonstrated capacity must be calculated. There are two ways to calculate it: the rate-adjusted method and the time-adjusted method.
When testing at durations of less than one hour, the rate-adjusted method should always be used. For durations of one hour and longer, either the rate-adjusted or time-adjusted method may be used. The time-adjusted method is preferred because it is easier to calculate yet still accurate. A detailed explanation with examples of the rate-adjusted method can be found in IEEE 450 and 1188. The time-adjusted method follows.
To calculate percent capacity using the time adjusted method, divide the actual discharge time by the rated time and multiply by 100. This assumes the battery to be at 25°C. The formula is expressed below.
% Capacity = Ta / Ts x 100
Where:
Ta = actual test time
Ts = rated test time
The example battery discussed was tested at the three-hour rate of 51.4A. The battery reached the EOD voltage of 105V at 02:42:00, or 2.7 hours.
% capacity = 2.7/3 x 100
Therefore, capacity is 90% and the battery passed the test.
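For completeness, the time-adjusted calculation can be expressed as a one-line Python function (the function name is illustrative only) and checked against the worked example above:

```python
def percent_capacity(actual_hours, rated_hours):
    """Time-adjusted capacity at 25 deg C: actual discharge time divided by
    rated time, multiplied by 100."""
    return actual_hours / rated_hours * 100

# Example: 3-hour rate test reaching EOD voltage at 2.7 hours (02:42:00)
print(round(percent_capacity(2.7, 3), 1))  # 90.0
```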
A slightly modified formula using a temperature correction factor (Kt) from the applicable IEEE standard must be used when the battery temperature is not 25°C. Refer to IEEE 450 or 1188, as applicable, for detailed information and a table of Kt factors.
% Capacity @ 25°C = [Ta / (Ts x Kt)] x 100
Where:
Ta = actual test time
Ts = rated test time
Kt = temperature correction factor from the applicable IEEE standard
The example battery discussed was tested at the three-hour rate of 51.4A. In the example below, the battery reached the EOD voltage of 105V at 02:54:00, or 2.9 hours. The battery temperature was 35°C.
% Capacity @ 25°C = [2.9 / (3 x 1.090)] x 100
The capacity corrected to 25°C is 88.7% and the battery passed the test. Failure to use Kt results in a calculated capacity of 96.7%, an error of about 8 percentage points.
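The temperature-corrected version is an equally small extension of the sketch above. The Kt value of 1.090 for 35°C is taken from this article's example; the full table should be drawn from IEEE 450 or 1188 as applicable.

```python
def percent_capacity_25c(actual_hours, rated_hours, kt):
    """Time-adjusted capacity corrected to 25 deg C using the temperature
    correction factor Kt from the applicable IEEE standard."""
    return actual_hours / (rated_hours * kt) * 100

# Example: EOD reached at 2.9 hours with the battery at 35 deg C (Kt = 1.090)
print(round(percent_capacity_25c(2.9, 3, 1.090), 1))  # 88.7
print(round(percent_capacity_25c(2.9, 3, 1.0), 1))    # 96.7 if Kt is ignored
```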
Why it should be done
The only way to know the capacity of a battery is to perform a capacity test under specific test conditions. Tests should be conducted periodically based on the applicable IEEE recommended practice. Routine inspection readings such as temperature, cell float voltage, internal ohmic values (resistance, impedance, conductance) and float current, while good indicators of overall condition, cannot be quantified as percent capacity or capacity degradation. An established maintenance plan coupled with periodic capacity testing is the recommended approach to achieving maximum battery system reliability.