81st GM Speaker Joe Ball


The following presentation was delivered at the 81st General Meeting on May 14, 2012, by Joseph F. Ball. It has been edited for content and phrasing. Mr. Ball’s accompanying slide presentation can be viewed here.
Joseph Ball is director of the National Board Pressure Relief Department. As such he is responsible for pressure relief device new construction and repair certification programs and test lab operations. Mr. Ball also conducts shop reviews of valve and rupture disk manufacturers and assemblers, repair organizations, and test laboratories. A professional engineer in the state of Ohio, Mr. Ball is a National Board instructor for A and VR courses. Additionally, he is secretary of the NBIC Subcommittee on Pressure Relief Devices. He is also a member of the ASME Subcommittee on Safety Valve General Requirements, and the PTC-25 Pressure Relief Devices Committee.
Many people probably know that the National Board operates a capacity certification program and a valve repair program where we test many pressure relief devices. I want to talk about the background and requirements of that program and the applicability of our testing data to reliability for industry. As a result of years of testing, we have accumulated a good deal of data that helps us analyze the quality and reliability of the equipment. We want to use the data to determine what industry can expect when they receive a certified pressure relief device, pressure relief valve, or rupture disk device.  
Slides 2-3: The new product certification program starts with a manufacturer. They will test a design and establish an initial rating for it. Next is a production test – a test of two sample pressure relief devices. This test previously occurred on a five-year cycle, but now it's on a six-year cycle to line up better with the ASME code stamps’ three-year renewal cycle.
Two samples are selected every six years and tested at an ASME-certified lab. All test requirements come from the ASME Boiler and Pressure Vessel Code. Through the code, the National Board has been designated the responsibility to manage and run that certification program using the ASME boiler code rules. National Board inspectors travel to the manufacturers’ sites. We also deal with valve assemblers and rupture disk manufacturers. A big part of their responsibility when visiting the site is to look at the manufacturing, assembly, and test procedures, and make sure we get a good representative sample of what that manufacturer is capable of doing.
In some cases we will actually take valves right from shelf stock, particularly from manufacturers that mass produce their product and have large quantities of valves in stock. One selection technique is to go into the warehouse and say, “Give me one of those and one of those.” Sometimes they will dust off the box, but we are trying to get an accurate sample. Sometimes they are testing a valve they are building for the inspector while he's there, but in that case they are looking at the assembly and test procedures and trying to see if it’s a good representative sample.
The program for individual design is not meant to be statistical in nature, so we are not testing a certain percentage of devices: just those two products every six years. It may be more than two if a test failure occurs. If there is a failure, the manufacturer has to test two additional samples. If they get past that test and still have a problem, a formal corrective action program is implemented. They have to analyze their failure, report on what happened (the cause), and explain what corrective actions they will take. And potentially, a manufacturer could actually lose the ability to put the code stamp on their product, so it's an important test. The manufacturers have a lot riding on it because if the product passes, they can produce that valve for a six-year period of time. The tests are conducted at ASME/National Board-certified test labs, which include the National Board Testing Lab in Columbus, Ohio, but there are also about 10 other laboratories that are operated by valve manufacturers and rupture disk manufacturers.
We are involved in an ASME certification process. The labs all compare to one another to show that they can essentially attain the same results; they get the same measured capacity. And when any certification test is done at those labs, our inspector goes to witness the test and ensure it meets our requirements and procedures; so all tests are considered on the same basis. We have collected a lot of test data over the years and I looked for trends and patterns to analyze what the data was telling us.
Slide 4: The total number of tests included in my review was almost 22,000 (21,825) tests. Based on the data, we can make some conclusions on how good these products are overall.
Slide 5: We analyzed information starting with the year 2000. I chose that year because it gave me a lot of tests, but also because the code rules for rupture disk certification went into effect in the '98 code. By the year 2000, rupture disks were a well-certified device under ASME Code Section 8, and manufacturers had started to certify those devices. That gave us a wider variety of equipment than we had seen before; until then we were just testing pressure relief valves, and the non-reclosing devices were not well represented in the formal testing we did, although they were being used out in industry. The data also includes valve repair verification tests.
So although we talked about testing done on new product, as part of the valve repair certification process, the valve repair applicant has to repair several sample valves. Those are sent to a certified test lab and tested to exactly the same procedure that a new device is tested under. And while we always say a goal of the valve repair program is to return the valve to a like-new condition, as far as a user is concerned, if we get a repaired pressure relief valve, you should be able to expect that valve to do exactly what a new valve would do. It's a certified device to begin with. It's put through a program to inspect it, repair it as necessary, reset it, and get it back to that like-new condition. So I included all these valve repair verification tests. The typical test program for a repair outfit is doing a steam valve, an air valve, and a water valve, depending on the scope of work. And I threw those into the hopper; I treated those just like any other new valve that would be coming from the manufacturer.
We do tests for research and development projects and informational tests (what we call provisional testing). Provisional testing is the test a manufacturer does when they are first getting their design certified. Those tests are essentially prototypes. They are not valves that have truly gone into production, which doesn't happen until a two-valve test is performed. So none of that was included, because they are still tweaking their design and getting it to the point at which they think it's capable of being put through the final tests, and then to the production tests, which are the proof of the pudding.
It doesn't include what we call investigation tests. I will talk a little bit about some of that test data. We don't have a lot of it, but we do have enough to draw a couple of conclusions; it's not indicative of the new product going out the door. Some limitations of this information come down to economics: the data represents the lower pressures and smaller sizes of valves. What we and other test labs can do is go up to three- or four-inch inlet sizes. Our pressures and capacities are necessarily limited because to put a pressure relief valve through a full-flow test, you need a lot of support equipment. We run boilers (we previously ran large air compressors but have switched to a nitrogen system), and we have a lot of capital tied up in that. And as you double the pressure, the costs go up exponentially. So our tests are limited; we don't do eight- and twelve-inch valves. The theory is that the valves are scaled up appropriately, but most of the testing we do is at smaller sizes and lower pressures. Hopefully that reflects more typical industrial equipment. We don't get supercritical boilers, but there are large numbers of boilers with a 150-pound safety valve, and we have covered those pretty well.
Slide 6: The other thing is what I call the cleanliness of the data. As I started going through it in detail, I ran across some glitches, and some of them show up in the graphs; they will look a little odd, so I will explain them. It comes from the fact that information has been entered constantly over years and years, and people code things differently. We have a large database where we store all this test data, but the way somebody designated a particular test sometimes showed up a little bit odd. I took out some of that, but data is coded by people and will never be quite as perfect as we want it to be.
Breakdown of numbers by ASME Code Section (Slides 7-8):
Section I – 13.6%
Section III – 12 tests (These Section III valves were likely nuclear valves that got repaired as part of a repair demonstration. We don't normally test many nuclear valves. They are the same physical equipment as you see in either Section I or Section VIII.)
Section IV – 3.4% (Hot water heater temperature and pressure relief valves are included.)
Section VIII – 83%
This is the bulk of the work we do: a wide variety of all the different types of Section 8 pressure relief devices. In regard to the test medium used: steam is about 25 percent and air is almost half; air represents all the industrial gases. Water tests are at about 25 percent, representing valves for liquid service.
Section 8 covers a wide variety of applications, so we did end up with about 10 percent of Section 8 steam applications. Almost 60 percent was air, another 31 percent was water.
This gives you an idea of the breakdown of the work we are doing. This was the first run at it.
The test outcomes are based on the criteria we put in our database. After we run a test, we give it a designation as to the outcome of the test. Eighty-five percent of the valves passed. The biggest failure category is set pressure, with capacity failures the next biggest element, and I will talk a little bit about each category.
Here are the raw results (Slides 9-10). We looked at the number of set pressure failures and asked where these failures were and how they could get to the point of causing a problem in the pressurized equipment. So after analyzing the different failure modes, I tried to look at what that number was actually telling us.
What I have is a plot of all the pressure relief valves, looking first at set pressure. Anything we called a failure is a failure to meet the ASME code set pressure tolerance. It's cut and dried. If you fail it, you have to retest. But what we tried to do is see how wide those failures were, their distribution, and where test failures might potentially affect the pressurized equipment where that valve might be installed.
Slides 11-12: This distribution is the measured set pressure over the nameplate set pressure. The numbers below the 100% line are valves that opened underneath the nameplate set pressure requirement. The little tilted spot in the middle is all the valves that passed. And then as we go up on the right-hand side, those are the valves that failed but where the set pressure was actually high. And that to me is the real area of concern. A valve that opens low indicates an operational issue. But what we don't want are valves that open high.
One glitch we discovered was a few valves that showed up at 400% above the set pressure. Normally we stop a test at essentially one and a half times the set pressure of the valve.
Occasionally we had some valves where the set pressure was recorded in bars and the test pressure in psi, and if you tried to compare those numbers you didn't necessarily get the right answer. Then, what is the unsafe level? Where should we be concerned? A lot of times when people do this analysis, they will look at the hydrostatic test pressure. I do not believe that is conservative enough. That criterion was good when a pressure vessel or boiler was manufactured: we did an overload test on it and made sure it was good. But as that equipment goes into service over time, we know it degrades and there are other things happening to it. The criterion I used for what I call 'the real bad actors' was all of the devices that opened at more than 116% of the nameplate set pressure. That is the Section 8 overpressure limit for a system with multiple pressure relief devices, and if we get above that, we also reference it in the NBIC as the point where we stop when taking a valve up for an inservice test. I'm concerned with anything above that. So that was my first set of data where I'm thinking these are really not the way we would want them to be.
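The screening just described can be sketched as a small classifier. This is an illustrative sketch only, not the National Board database's actual logic: the ±3% tolerance is an assumption for illustration (actual ASME set pressure tolerances vary by code section and pressure range), and the function names are hypothetical. A simple unit conversion guards against the bar-versus-psi mismatch mentioned above.

```python
# Illustrative sketch only: classify one set pressure test result.
# The +/-3% tolerance is an assumption; actual ASME tolerances vary
# by code section and pressure range.

PSI_PER_BAR = 14.5038  # guard against the bar-vs-psi data glitches


def to_psi(value, unit="psi"):
    """Normalize a pressure reading so ratios compare like with like."""
    return value * PSI_PER_BAR if unit == "bar" else value


def classify_set_pressure(measured, nameplate, unit="psi", tolerance=0.03):
    """Return a category for one set pressure test."""
    ratio = to_psi(measured, unit) / to_psi(nameplate, unit)
    if ratio > 1.16:              # above the Section 8 116% overpressure limit
        return "bad actor"        # the real area of concern
    if ratio > 1.0 + tolerance:
        return "failed high"
    if ratio < 1.0 - tolerance:
        return "failed low"       # more an operational issue than a safety one
    return "pass"


# A valve opening 4% high fails the tolerance but is still well
# below the 116% concern threshold.
print(classify_set_pressure(104.0, 100.0))   # failed high
print(classify_set_pressure(120.0, 100.0))   # bad actor
```

The point of the two example calls is the distinction the talk draws later: a test failure at 4% over is not a good thing, but it is far from the region where in-service equipment is at risk.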
Then we get to valve capacity. The capacity failures include valves that didn't flow what they were rated to flow. A common cause is when valves are overpressured: they will hit a point where they get a secondary lift, and if someone doesn't fine-tune the valve properly, the secondary lift isn't quite achieved. It's a test-and-tune issue, not necessarily a design issue; it's about people really understanding how the equipment works. If a valve doesn't meet its rated capacity by the code-specified overpressure, it's a test failure, and we had a number of liquid valves show up that would open just above whatever the specified overpressure is, typically 10 percent for a Section 8 valve.
So we did have a number of comments for those. And this also includes rupture disks where the flow resistance (the Kr value) or the minimum net flow area did not meet specifications.
Slides 13-14: This is my first graph of the distribution of our valve capacity. These are valves that were designated as failures, and we plot the measured capacity divided by what the valve is actually rated for. You can see it starts at zero and works its way up. It should end at one, but I had two or three tests that we called capacity failures even though the valve actually flowed more than the nameplate value. Every so often we do run across valves that are misidentified, and sometimes that can be an issue. What I used as a measuring stick was anything that flowed less than half of what it was rated at. That tells me it was probably not just a secondary lift issue; there was something really wrong with the valve. That ended up being about a half percent of all the tests that were done.
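The capacity measuring stick can be expressed the same way as the set pressure one. Again, this is a hypothetical sketch of the criterion described above, not the actual database code; the function name and category labels are my own.

```python
# Illustrative sketch: classify one capacity test by the ratio of
# measured flow to the nameplate rated capacity.

def classify_capacity(measured_flow, rated_flow):
    """Return a category for one capacity test."""
    ratio = measured_flow / rated_flow
    if ratio >= 1.0:
        return "pass"            # flowed at least its rated capacity
    if ratio < 0.5:              # less than half of rated capacity:
        return "bad actor"       # something really wrong, not just tuning
    return "failed"              # e.g. a secondary-lift, test-and-tune issue


print(classify_capacity(30.0, 100.0))   # bad actor
print(classify_capacity(90.0, 100.0))   # failed
```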
We had about 1% of tests where we just didn't have a measured capacity; many were liquid valves. For those, the set pressure is where the valve first starts to have a trickling flow, and we keep increasing the pressure until the valve pops open. If that occurred more than 10% above the set pressure, the valve was a capacity failure. We had a number of those that opened about 12-15% above the set point. That information goes to the manufacturer and can help them figure out their problem. Those valves were not counted among the bad actors where we knew where they opened; we know that once they did open, they would probably work fine. But if you don't hit it by that 10%, you have to go through and do another test and improve your product to make it better. The rupture disk Kr number is used a little bit differently.
Slides 15-16: What I plotted here is the test Kr value over the certified resistance. For a rupture disk, a low number is better: the Kr flow resistance is a number related to pressure drop, and the lower it is, the better. We will have a certified value, and anything below one should have been a pass. Somehow I got a few at the beginning of the graph that aren't right either. Kr is not linear with pressure, so in looking at these, if we had a Kr value that was more than five times the rated value, it meant there was something really insufficient about how that rupture disk opened. Those are counted in my ultimate total.
We get some that might be 10 or 20% higher, but that means we got some flow out of them, and for a rupture disk, once it's open, it's open, and it's probably going to do its job. We are concerned about those where the opening was not sufficient. The ones that were more than five times the certified Kr value were about a tenth of a percent. We also had a small number where the disk didn't open at all; you hate to see those. We took the pressure up as high as we thought was suitable, and that was about .08 percent of the total tests that we did. So those are the bad actors. Many of those were reverse-acting disks (a disk that is concave to the pressure) that reversed but did not actually open up. Usually that occurs right at the bottom end of the pressure range, where there is not a lot of energy available and somebody probably pushed the limits of the design to make it get down to a low pressure. There were also half a dozen of what we call minimum net flow area failures, where the disk opened but just didn't quite have enough net area to achieve the capacity it should have had.
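The Kr screening works in the opposite direction from capacity, since lower flow resistance is better. This is again a hypothetical sketch of the criteria just described, with my own function name and labels, not the actual database logic.

```python
# Illustrative sketch: classify one rupture disk flow-resistance test.
# Kr is a flow-resistance number related to pressure drop: lower is better.

def classify_kr(test_kr, certified_kr, opened=True):
    """Return a category for one rupture disk Kr test."""
    if not opened:
        return "failed to open"   # the worst outcome (~0.08% of tests)
    ratio = test_kr / certified_kr
    if ratio <= 1.0:
        return "pass"             # at or below the certified resistance
    if ratio > 5.0:               # opening really insufficient
        return "bad actor"
    return "failed"               # opened and flowed, but above certified Kr


print(classify_kr(1.2, 1.0))               # failed
print(classify_kr(6.0, 1.0))               # bad actor
print(classify_kr(0.0, 1.0, opened=False))  # failed to open
```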
Slide 17: We had about two percent of tests that we reported as blow-down failures. These are not included in the final analysis because we ultimately look at blow-down as more of an operational issue; it's a concern to the user and to the boiler operator. Take two examples. First, we did have some Section 1 valves that were occasionally flagged: there is a minimum blow-down under Section 1, and if a valve is less than that, again, it's a test failure and you have to address it. In most cases the capacity of those valves is probably fine.
Under Section 8 there is a requirement for manufacturers to demonstrate the capability to make certain valves meet a 7% blow-down requirement. The valves that fell in this category were ones where the capacity was fine, because we actually do test that in this case, but the manufacturer could not make the blow-down less than 7%, which is the Section 8 specification. And that is only for certain designs that are deemed adjustable.
Slide 18: For whatever reason, they couldn't adjust it. The service consequence you see in that case is that the valve stays open a little bit longer. We had about .2 percent that we called failed operation. Mostly this is the adjustment of the lifting lever: a lack of attention to detail when the valve was being put together.
I had about a tenth of a percent of valves that we deemed incorrect lift. These come from valves, certified primarily under Section 8, that have a restricted-lift design. The manufacturer could make a valve that would pass all the criteria, but if the lift is set incorrectly, there is too much lift and the valve does not match its certified design. We don't want somebody to pass because they put the valve together wrong.
Slide 19: So to summarize, I took what I classified as my bad actors: the set pressures that were more than 16 percent above the nameplate set pressure; the capacity failures that flowed less than half of rated capacity; the rupture disk Kr bad actors and the disks that failed to open. It all adds up to about one percent of our test total. Thus my initial estimate of the reliability of this equipment to ultimately do its overpressure protection job comes to about 99 percent, which is good.
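The roughly 99 percent figure is simple arithmetic over those categories. The sketch below uses the approximate fractions quoted in the talk; the set pressure share is not stated explicitly, so it is treated here as the remainder implied by the roughly one percent total, an assumption rather than a reported number.

```python
# Hedged arithmetic behind the ~99% reliability estimate, using the
# approximate fractions quoted in the talk.
capacity_under_half = 0.005    # capacity less than half of rated (~0.5%)
kr_over_5x = 0.001             # Kr more than 5x certified (~0.1%)
failed_to_open = 0.0008        # rupture disks that never opened (~0.08%)
total_bad_actors = 0.01        # "about one percent of our test total"

# The set pressure share (>116% of nameplate) is not quoted directly;
# treat it as the remainder implied by the ~1% total (an assumption).
set_pressure_over_116 = total_bad_actors - (
    capacity_under_half + kr_over_5x + failed_to_open)

reliability = 1.0 - total_bad_actors
print(f"implied set-pressure share: {set_pressure_over_116:.2%}")
print(f"estimated reliability: {reliability:.0%}")   # about 99%
```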
We also compare that to the actual test failure rate. It was higher, and it obviously shows there is still room for improvement in the industry. We deemed a number of tests 'investigation tests.' These were valves received either from chief inspectors, in a few cases, or from private organizations, to test whether the valve possibly contributed to an accident.
Slide 20: We had 130 tests. About half of them were not applicable, but about 37 valves actually passed. Some failed set pressure, some failed capacity, a few failed blow-down. In all of the tests I have personally witnessed, the majority of the problems were ultimately due to how the valve was applied or maintained. I can pick up a valve, look inside the inlet, and tell you if it will pass or not. We will put it on the test stand and test it, but if it's all clogged up with rust or corrosion, or if the outlet is clogged with product, that valve is not capable of doing its job. And that has nothing to do with how it was built; it was ultimately how it was maintained while inservice.
Slides 21-22: Looking at all of this information, what can we take away from it? One, we do want to recognize the value of the ASME code/National Board capacity certification program. It ultimately is a program that makes the manufacturers and organizations toe the line. They have got to work hard to meet the standard, and the standard has some very tight tolerances that are associated with it. They are there for a reason: this is safety equipment. We want it to be available 100 percent of the time. But that tight margin does give us a little bit of leeway. For example, if we get a valve that opens at four percent above the set pressure, that's not a good thing and we will want the manufacturer to do better than that, but it still is well below the area where potentially we are going to have a problem when that valve goes into service.
Many times those test issues cause the organization to tighten up their procedures, and that's typically what we find when people have a problem. They report back on their corrective action. A lot of times it's training. People will look at the service manual and say, “Oh, I adjusted it this way.” They don't understand what those adjustments mean and don't make them properly. Perhaps they have to improve their calibration or setting techniques.
To increase our test capabilities, the National Board Testing Lab has gone through an expansion project. We have uprated our air testing capabilities specifically, and we have also gone through some refitting of our steam system to improve our test capabilities. You may be hearing more about that over the next year or so. We are quite proud of the work that's been done, and we hope to improve what we do.
Since we see companies on a very limited basis, users and inspecting agencies can help by providing feedback. Users in particular can give feedback to the suppliers, because they ultimately want their product to work, and if there is a problem, they need to know about it. A relatively new item has a warranty to cover it, so make sure you get a quality replacement product. Go to your regulatory authorities, and if you have questions about a certification issue, come to us at the National Board (that’s our responsibility) and hopefully we can try to resolve the problem and see how we can help the company.
And then, finally, the statistics we are looking at cover new equipment going into service. The one thing we don't account for in this information, other than what we get from the investigation tests, is what happens once the equipment goes into service. It's not like wine; it doesn't get better with age. Ultimately we need to inspect this equipment periodically and make sure those inspections are not just visual: for pressure relief devices, there needs to be testing associated with the inspection to assure the device is working properly.
So I want to make sure that those inservice inspections are done. That's what this whole group is all about, and the related standards, the NBIC, give us the information we need to do those inspections better. We then use the data that comes from that inspection to really set up realistic inspection intervals. The NBIC does have some inspection intervals that are specified, but for a lot of the process work, ultimately we say per inspection history.
Make sure you have good inspection history to know how often we should be looking at these pressure relief devices. Some industry standards have a ten-year inspection period, which to me is way too long, particularly in a lot of the more aggressive services. You really need to look at the pressure relief devices more often because of the important function that they serve. But this preliminary data gives you an idea of how good a valve is once it goes into service, at least from a new product perspective. However, because of the data quirks, I wouldn't necessarily quote any of this yet and put it into a publication.
We are going to keep looking at the information and refine it by taking out those data anomalies. But this gives us some idea of how good the tested equipment is when it actually goes into service.