Reviewing UOA Data
Used oil analyses (UOAs) are tools. And like most tools, they can be used properly or misused, depending upon the application, the user, the surrounding conditions, etc.
There are already many good articles and publications that tell us how to interpret the information we see in a UOA report; they speak to which elements and physical properties are indicative of certain components and conditions. It is not the intent of this article to discuss or contradict that type of information; rather, it is intended to supplement those other articles. Most of them fail to address one very important topic: statistical normalcy. What is “normal” in a data set represents the typical average values and expected variation within that group. In short, it’s a matter of how to view a series of UOAs and see how results can shape our view of a healthy or ailing piece of equipment and the viability of continued lube service.
Without going deep into statistical analysis theory and education, I’ll just present what is important and helpful in understanding the data we get from UOA resources, so that reasonable decisions can be made and erroneous conclusions can be avoided. Many people have heard of the “Six-Sigma” approach using statistics, and other similar concepts. These are applicable to the world of lubricants as much as any other topic. I’ll apply these concepts to the interpretation of several series of UOAs, using real world examples to illustrate.
First, understand that statistical analysis can be applied from both small and large viewpoints. Typically these are referred to as micro-analysis and macro-analysis. I’ll differentiate the two concepts, with specific intent to address how these tools are useful in interpreting UOAs. In either case, and with rare exception, protocol dictates that one needs 30 or more samples of data to establish reasonably reliable results; it can be done with slightly fewer, but the data is not nearly as reliable and mathematical problems arise. Further, you cannot meld one methodology into the other for the sake of accumulating enough data; the quantities must be self-supporting. You certainly might have one or more sub-sets of full micro-data in large macro-data populations, but you should not blend the two to achieve a minimum set. In short, you cannot accumulate enough data, by adding it from differing methodologies or duplicating it, to satisfy the minimum set requirement.
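As a side note, the reason for wanting roughly 30 or more samples can be sketched in a few lines of Python. Everything below is synthetic stand-in data, not real UOA results; the point is simply that the standard error of a sample mean shrinks with the square root of the sample count, so small sample sets give unstable averages.

```python
import math
import random
import statistics

random.seed(1)  # reproducible toy data

# Synthetic "Fe ppm" population (hypothetical; centered at 14 with std dev 4)
population = [random.gauss(14, 4) for _ in range(10_000)]

# Standard error of the mean = std dev / sqrt(n); it shrinks as n grows,
# which is why ~30+ samples are wanted before trusting an average.
for n in (5, 30, 300):
    sample = population[:n]
    se = statistics.stdev(sample) / math.sqrt(n)
    print(f"n={n:>3}  mean={statistics.mean(sample):5.1f}  std-error={se:.2f}")
```

With 5 samples the estimated mean wanders considerably; by 30 it steadies, and beyond that the gains come more slowly.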
Micro analysis looks at one specific entity, and lets data develop as inputs affect it. An example of this would be doing a series of UOAs on one engine, using a consistent brand/grade of lube, with reasonably consistent usage patterns. As much as practical, all inputs (lube, fuel, filtration, UOA sample cycle, etc) are held constant (or with minimal change), so that we can see the natural development of information. We do this to establish ranges and allow for any trends to develop. Over time, this methodology can be used to decide which product or process excels over another for any single specific application. It is very important to note that even when experiencing extremely consistent conditional and resource inputs, there is variation, even when the process is in control. We need a great deal of data from this single source to well define what is average and normal; it takes much time, money and patience to get there.
Macro analysis looks at not one entity, but all those in a desired grouping, and models not the individual effects, but rather details or predicts the behavior (results) of the mass population reaction to changing conditions (multiple inputs). Here, we can look at a large group of UOAs that represent a piece of equipment (engine, gearbox, differential, transmission, etc.) from different points of origin, and seek out what is “normal” across a broad base of applications. This approach is frequently used; it is predominant in the development of many products, from medical trials, to common electronics, to appliances, to automobiles, to consumable items like toothpaste and drinking water. The list is nearly endless as to how macro analysis can be applied. And as long as the precepts and limitations are understood, proper conclusions can be made. Macro analysis comes much quicker because multiple sources are accepted. Caution must be given, however, to make sure that illogical conclusions are not drawn, based upon false presumptions, or in confusing correlation with causation.
Please note that for the sake of consistency, expediency, and readability, I often round values up or down to make them presentable for quick consumption. Data can lose its human value at times when the minutia of numbers overwhelms the message the data is trying to convey.
All that in mind, now we’re on to the fun stuff …
Where the data comes from …
I have been collecting UOAs for many years from various sources on all kinds of equipment. I have also received a great amount of UOA data from Blackstone Laboratories. They were generous enough to cooperate in this endeavor, and Ryan Stark was particularly helpful in getting information needed to make several key examples. Do not worry; not one customer profile was compromised. I was only given raw data, and not any confidential personal information. Blackstone is very good at protecting client privacy, and this endeavor is no different. Additionally, I am able to add in some UOA data from other sources as well.
Let’s look at some examples of popular engines. I’ll use these to show how data is developed, and how care must be taken to not let data run amok. I’ll show how “universal averages” (the mean) should be used, and how “variance” (the standard deviation) affects the unrealized story. I’ll indicate what conclusions are fair, and which are illogical.
I am only going to discuss wear metals, as those are results and not inputs. We could apply these same principles of analysis to elemental inputs (calcium, magnesium, phosphorus, boron, etc) or physical properties (flash point, viscosity, etc) but those are purposely manipulated by the lube makers. In fact, the very nature of macro analysis methodology takes into account the vast variability of these inputs. So, we’ll focus on the wear metals, because they are the “tellers of tales”; they let us know how much wear has occurred, and can allow us to have reasonable understanding of how much more might occur, should an OCI be extended. In short, manipulated physical fluid properties and additive-package criteria are inputs, whereas wear-data results are outputs.
Other things to note: my discussion and analysis here is predicated upon the presumption that lubricants represented in the data are not vastly or grotesquely different from the OEM specified parameters. While it is reasonable to expect that someone will utilize a different lube grade other than what is specified, the data presented does not likely represent wholly inappropriate lube selections such as using hypoid gear oil in the engine crankcase, or very old “CD” rated oil in a modern diesel, etc. Succinctly put, most UOAs represent lubes that are at (or near) proper fluid selection for the applications.
A quick key to show the terms used:
- Avg = average numerical magnitude
- HDEO = heavy duty engine oil (commonly accepted to be diesel rated lubricant)
- MAX = largest magnitude seen in the data stream for that element
- Normal = within acceptable or desirable statistical standard deviations
- OCI = oil change interval
- OLM = oil life monitor
- Per 1k mile = ppm count averaged over a 1,000 mile exposure duration
- PPM = parts per million
- Std Dev = standard deviation; one sigma (Greek letter “σ”)
- UL = upper limit; the mean plus three standard deviations (3σ)
- UOA = used oil analysis
The wear elements are listed as seen in the periodic table of elements:
- Al = aluminum
- Cr = chromium
- Cu = copper
- Fe = iron
- Pb = lead
(note: all wear metal data is reported in ppm)
I’m going to lay out one example of a micro-analysis engine UOA series. OCIs were done religiously (the goal was 5000 miles +/- 100 miles). This UOA series is the epitome of consistent inputs; the owner was very dedicated to the protocol of the testing parameters. This vehicle saw very common and typical use in its lifecycle and environment including weather, driving cycles, etc. This type of series is, frankly, very rare. Very few people drive so far annually, and have the dedication and desire to stay the course, spend the money, and accept the monotony of such limited confines.
Ford 3.0L OHV gasoline V-6
One of Ford’s more prolific engines; it has been in production a very long time with minimal updates other than emissions related components.
(Notes: this is the “Vulcan” engine. UOAs were by a local company and not Blackstone. We must acknowledge there were moves in API service specifications during this series from SJ to SN.)
| Oil Miles | Veh. Miles | Al | Cr | Fe | Cu | Pb |
|---|---|---|---|---|---|---|
This is a good example of micro-analysis. The data created is consistent and can be used to make a solid lube decision for the stated operating conditions; there are no abnormalities revealed. The standard deviations are all well less than the means; this is as expected and desired in a controlled micro-data set.
This vehicle went from a steady diet of one popular brand-name synthetic oil with a premium filter to quality conventional oil using a typical shelf brand name filter. Can you find the data range shift indicating synthetics and high-end filtration were “better” in this application? Are you able to discover the mileage point where the change occurred and resulted in statistically significant wear-trend shifts? What the data shows is that the average wear metals shifted less than a point after that change. I’ll give you a hint: after the change, Al and Cr were both up while Fe, Cu and Pb were all down. However, all shifts were well within one standard deviation for each distinct metal. In short, the normal variability of lifecycle usage greatly overshadows the very small shift in wear. And, when two metals go slightly up and three come down, it could fairly be called a moot change; it was statistically insignificant in all criteria.
What we can surmise is that for this maintenance plan and operational pattern, there was no tangible benefit to using the high-end products; conversely, the typical quality base-line products presented no additional risk of accelerated wear. We cannot conclude that this result would be true of all potential circumstances; only that it is true when applied to a 5k mile OCI with the given operating conditions. Significantly longer OCIs may well have shown a statistical difference between the two lube/filter choices, but that was not part of the test protocol.
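The kind of check described above can be sketched in a few lines of Python. All numbers below are hypothetical stand-ins for two UOA groups (they are not the actual Vulcan data); the point is the test itself: a shift in means is only noteworthy when it is large relative to the standard deviation.

```python
import statistics

def shift_in_sigmas(before, after):
    """Mean shift between two sample groups, in units of their average std dev."""
    mean_shift = abs(statistics.mean(after) - statistics.mean(before))
    pooled_sd = (statistics.stdev(before) + statistics.stdev(after)) / 2
    return mean_shift / pooled_sd

# Hypothetical Fe ppm readings under two lube/filter regimes (made up)
synthetic_group = [14, 12, 16, 13, 15, 14, 17, 12, 15, 13]
conventional_group = [15, 13, 16, 14, 15, 16, 14, 13, 17, 14]

# A shift well under 1 sigma is within normal variation, i.e. noise
print(round(shift_in_sigmas(synthetic_group, conventional_group), 2))
```

Anything well under 1.0 on this scale is exactly the "moot change" described above.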
I’m going to lay out several examples of macro-analysis to illustrate how mass-market data can be used. Here we can see how large groups combine to make bulk data useable. I’ll do a detailed analysis on the first two examples, and then present summarizations for the following examples. The key concept to glean is how macro-analysis, when the data is properly managed, defines “normal” results. These UOA series were all from Blackstone.
Ford 4.6L “modular” gasoline V-8
These samples range over 5 years of UOAs, from August 2007 to August 2012. There are almost 550 UOAs here; plenty of data to find what is “normal” and not. The first data box exhibits all samples, while subsequent data boxes exhibit individual years by process date.
| 4.6L Ford | Oil Miles | Veh. Miles | | Al | Cr | Fe | Cu | Pb | Pb′ |
|---|---|---|---|---|---|---|---|---|---|
| 5 Years & 548 Samples | 5516 | 94078 | Average | 3.3 | 0.9 | 14.6 | 4.8 | 2.8 | 1.2 |
| | | | Per 1k Miles | 0.6 | 0.2 | 2.6 | 0.9 | 0.5 | 0.2 |
| 2007: 38 Samples | 4492 | 79906 | Average | 2.7 | 0.6 | 10.2 | 4.9 | 0.4 | 0.4 |
| | | | Per 1k Miles | 0.6 | 0.1 | 2.3 | 1.1 | 0.1 | 0.1 |
| 2008: 100 Samples | 4687 | 89521 | Average | 2.9 | 0.8 | 14.0 | 4.3 | 9.5 | 1.5 |
| | | | Per 1k Miles | 0.6 | 0.2 | 3.0 | 0.9 | 2.0 | 0.3 |
| 2009: 94 Samples | 4931 | 87685 | Average | 2.8 | 0.7 | 12.7 | 4.1 | 1.3 | 1.3 |
| | | | Per 1k Miles | 0.6 | 0.1 | 2.6 | 0.8 | 0.2 | 0.3 |
| 2010: 123 Samples | 5320 | 96641 | Average | 3.4 | 0.9 | 14.6 | 5.4 | 1.5 | 1.1 |
| | | | Per 1k Miles | 0.6 | 0.2 | 2.7 | 1.0 | 0.3 | 0.2 |
| 2011: 125 Samples | 5720 | 96805 | Average | 3.9 | 0.9 | 15.9 | 5.0 | 1.5 | 1.5 |
| | | | Per 1k Miles | 0.7 | 0.2 | 2.8 | 0.9 | 0.3 | 0.3 |
| 2012: 68 Samples | 8157 | 109594 | Average | 3.7 | 1.0 | 18.1 | 5.1 | 1.6 | 0.6 |
| | | | Per 1k Miles | 0.5 | 0.1 | 2.2 | 0.6 | 0.2 | 0.1 |
Note that there are two columns for Pb; one is the raw data and the other is the same data stream with just three data points taken out. Why take out data? It is because those three points were grossly skewing the data stream development. Most of the Pb counts in all other samples were well below 35 ppm, but three samples had magnitudes of 68 ppm, 204 ppm and 602 ppm. When I reviewed the individual UOA details, those three suspect reports had no indication of reasonable explanation as to why the Pb was so very high; the OCI was not long, the other wear metals were not skewed high, etc.
While I can suspect that perhaps a bearing was damaged, or leaded fuel (or leaded fuel supplement) was used, I cannot know the root cause for sure. Regardless, those three data points were affecting the “normalcy” of data. So I created a “lead prime” (Pb’) column with those three data points taken out. Since there are 548 total sample UOAs, and only three were removed (representing only one-half of one percent total population), there certainly is plenty of data left to use. And look how greatly those three data points were skewing the results:
| | Avg Pb | Std Dev |
|---|---|---|
| Full data set | 2.8 | 27.4 |
| Revised data set (Pb′) | 1.2 | 2.8 |
See how the average Pb dropped more than 57%, and the standard deviation decreased by nearly a factor of ten! Only 3 samples of 548 were responsible for skewing this data so overtly. This is where math and common sense come together to make a reasonable conclusion that some intervention in the data is warranted and desirable. By removing only 0.5% of the Pb data population, we shifted the range very significantly. This indicates that those three samples were not “normal”, and the remaining 99.5% are. In macro data, when the standard deviation is many multiples of the mean, there is cause to believe there are abnormalities embedded in the data stream. When the deviation is smaller (perhaps around 1.5 times the mean or less), it indicates that the mass-market population is representing the variability of inputs as desired, and not being affected by spoilers. There is no hard and fast rule; training, experience and knowledge of the data subject matter help define and delineate when and where to intervene.
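The skewing effect described above is easy to reproduce. Here is a sketch with fabricated Pb values (not the actual 548-sample data set) that mimics the pattern: a few hundred small readings plus the three spike magnitudes mentioned in the text.

```python
import statistics

# Fabricated Pb readings (ppm): many small values plus three large spikes
normal_pb = [1, 0, 2, 1, 3, 0, 1, 2, 1, 0] * 54   # 540 "normal" samples
spikes = [68, 204, 602]
full_set = normal_pb + spikes

# A handful of extreme points inflates both the mean and, far more
# dramatically, the standard deviation.
for label, data in (("full", full_set), ("trimmed", normal_pb)):
    print(f"{label}: mean={statistics.mean(data):.1f}  "
          f"std dev={statistics.stdev(data):.1f}")
```

Three points out of 543 more than double the mean and inflate the standard deviation by well over an order of magnitude, which is the same signature seen in the real Pb column.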
To continue, I broke out the years (defined by UOA processing date) to discover if there were any significant changes over time; clearly there are not. For example, look at Fe. The average Fe wear rate, viewed on a “ppm / 1k mile” basis, is reasonably consistent, and varies by less than 1 ppm over 5 years of data.
But let’s now look at the topic of Fe wear in detail; a great storyline exists here. How is it affected by UOA duration in the mass population? Run the oil longer and the Fe goes up, very predictably. In 2007, the overall population average UOA sample was taken at 4.5k miles, and the Fe average was 10.2 ppm. Five years later, the average population UOA sample was taken at 8.1k miles, and the Fe average was 18.1 ppm. A roughly 80% increase in mileage duration was mirrored by a nearly matching (~78%) increase in Fe. That is a very predictable response curve; the wear is consistent. But the data can be analyzed even further and deeper.
Here is where Fe wear gets really interesting. What happens if we break the mass population down into directed duration sub-groups? I pulled sub-groups of samples out of the UOAs by OCI duration, and found the average Fe wear was thus:
| UOA avg. duration | 3K | 5K | 7K | 10K |
|---|---|---|---|---|
| Fe ppm / 1k miles | 3.2 | 2.5 | 2.5 | 2.3 |
It is in fact true to say that when you change oil frequently the UOA will exhibit a higher Fe wear metal count. There are two reasonable explanations to this phenomenon of elevated wear metals shortly after an OCI; residual oil and tribo-chemical interaction. When you change oil, no matter how much you “drip-drip-drip” the oil into the catch basin, there is always a moderate amount left in the engine. Ryan Stark of Blackstone estimates up to 20% of the old oil remains, more or less, depending upon the unique traits of each piece of equipment. So, when you begin your new OCI, you really are not starting at zero ppm. Additionally, there is indication that wear is elevated after each OCI because of chemical reactions of fresh additive packages. This claim is supported via an SAE study done by Ford and Conoco (ref #1) that surmised this very phenomenon, and additionally refers to a former study of the same conclusion predating it.
So, the reality is that we are seeing a combination of two phenomena: one being the residual oil contribution and the other the chemical reactions. The elevated readings towards the beginning of an OCI are typically (for most engines) less than one point, representing tenths of change. I cannot deduce from this macro-data set what portion of wear is due to residual oil and what portion is due to chemical action, but honestly it does not matter, because it’s impossible to separate the two phenomena in real life, and they act together to produce a single result. Wear metals are factually elevated after an OCI due to chemistry and artificially inflated by residual metals; we cannot escape this truth.
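A toy model helps show why the measured ppm-per-1k rate looks inflated at short OCIs. The 20% residual fraction is the Blackstone estimate quoted above; the "true" wear rate and the drain concentration are illustrative assumptions of mine, not measured values.

```python
RESIDUAL_FRACTION = 0.20  # Blackstone's rough estimate of oil left behind

def apparent_rate(true_rate_per_1k, oci_miles, carryover_ppm):
    """ppm-per-1k-mile rate a UOA reports when residual metal is present."""
    measured_ppm = carryover_ppm + true_rate_per_1k * oci_miles / 1000
    return measured_ppm / (oci_miles / 1000)

# Assume the prior fill drained at 15 ppm Fe, so ~3 ppm carries over
carryover = 15.0 * RESIDUAL_FRACTION
for oci in (3000, 5000, 10000):
    print(oci, "miles ->", round(apparent_rate(2.3, oci, carryover), 2), "ppm/1k")
```

Short OCIs amortize the same carryover over fewer miles, so the apparent rate is highest at 3k miles and falls as the interval lengthens, echoing the sub-group tables (and note this toy model captures only the residual-oil half of the story, not the fresh-additive chemistry).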
While the wear rate is not greatly escalated at the front end of the OCI, it certainly is not relieved (lessened) by the frequent OCI, either. In short, changing your oil early does not reduce the wear rates, presuming the previous fill was not allowed to become compromised. It’s a subtle but very important distinction. When you have reasonably healthy oil, the wear rate slope is generally flat (muted may be a better term, as there is always some variance). Only after the oil becomes compromised (overwhelmed) in some manner would you see a statistical shift in wear rates. Hence, higher wear at the front of an OCI is plausible, but the claim of lesser wear with fresh oil is most certainly false. The wear rate for Fe is reasonably constant, if all other things are in decent operational shape. Those who change oil frequently at 3k miles are not helping their engine. Those who leave it in for longer periods are not hurting the engine. At this point, I will acknowledge the concerns outside of wear metals: oxidation, soot, coolant, fuel, etc can cause a need to OCI. But those things are also reasonably tracked in a UOA. So, if your fluid health is good, and your wear metals are on track, there is no reason to OCI until something changes in a statistically significant manner.
As for the “UL” listing, that is the other part of the story. The “UL” represents what would be deemed the 3σ upper limit of a normal distribution. Looking at the typical variance of a wear metal, we can establish standard multi-sigma limits that define “normalcy” for the broad market response. Any time your results are within the 3rd sigma, you can consider them “normal” (after abnormalities are negated). This allows us to include all manner of variables such as brand and grade of oil, use factors, environmental factors, service factors, etc. If your results are near one sigma or less, you are well within a normal response set.
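The arithmetic behind the UL is simply mean plus three standard deviations, which is consistent with the figures in this article (e.g., the Pb′ mean of 1.2 and std dev of 2.8 give a UL of 9.6). A sketch, with fabricated sample values:

```python
import statistics

def upper_limit(samples, k=3):
    """The article's 'UL': sample mean plus k standard deviations."""
    return statistics.mean(samples) + k * statistics.stdev(samples)

# Fabricated Pb readings (ppm) purely for illustration
pb = [1, 0, 2, 1, 3, 0, 1, 2, 4, 1]
print(round(upper_limit(pb), 1))
```

Any single reading below that limit is, by this definition, inside the "normal" band for the population that produced it.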
These samples represent the group of the Ford 4.6L engine UOAs Blackstone received during that five-year time frame. There are some repeat customers that submit samples from the same vehicle, but those are no more or less valid than singular UOAs from separate sources. The samples represent not just grandma’s grocery-getter, but also many Triton truck engines, and some Police Interceptor engines, taxi service engines, high-performance Mustang engines, high-mileage traveling salesman engines, trailer towing engines, etc. There is a vast world of inputs to this 4.6L engine data: people who run thin 5w-20 and those who run thick 5w-40. People who use conventional lubes and those who use synthetics are included. Those who live in the heat of the desert southwest and those in the cold of Canada are all in here. Those who top off sumps and those who do not are included. Why mention all of this? When the inputs are so greatly varied, the data already includes the diversity of mass-population contribution. Or, more simply put, the wide range of inputs is already accounted for in the “normal” variance of the data results. This is one benefit of macro-analysis. Only if we saw a large variation in wear rates between sub-groups or massive σ magnitudes could we conclude that inputs had a large effect on the results. With the 4.6L engine, this simply isn’t the case; wear is generally unaffected by operational conditions and OCI. As much as practical, I took that mass population data and broke it into directed sub-groups for vehicle mileage, lube exposure, year of service, projected severity factor, etc. I purposely tried to find statistically significant delineation where some factor might distinguish itself as unique; I could not find one.
Hence, the conclusion to come to is that lube brand and grade, filtration selection, as well as various service factors and OCI durations, really don’t matter greatly in this example; the 4.6L engine really does not care what you use or how you drive it.
GM-Isuzu 6.6L Duramax diesel V-8
The Duramax is known as one of the better-wearing light-duty diesel engines in the marketplace, and for very good reason. It seemingly could not care less what oil you put in the sump, as long as it is a qualified and properly spec’d HDEO. GM does not publish metal condemnation limits for this engine that I am aware of. Here is how the data plays out:
| | Al | Cr | Fe | Cu | Cu′ | Pb |
|---|---|---|---|---|---|---|
| ppm / 1k miles | 0.4 | 0.0 | 2.2 | 2.2 | 0.5 | 0.3 |
Of these 527 total samples, all were from analysis in 2012. The samples also represented some fairly high-mileage vehicles. There were 179 samples of the 527 that were over 100k miles in vehicle use; many were vehicles with over 250k miles. Because of its lineage and typical light-duty truck market use, these engines are in service for a long time.
Again, we can see the need to manipulate data to remove abnormalities. There were 41 samples with ultra-high Cu counts, many of them in the hundreds of ppm. There were many Cu readings over 200 ppm and 300 ppm, and one as high as 484 ppm. So, I again created a separate column (Cu′ = copper prime) to root out the high-flyers. While some would decry the removal of data, you can clearly see how these spikes can adversely affect what is deemed “normal”. And while 41 samples seem like a large amount of data to remove, they represent only 7.7% of the total population, and yet their removal resulted in nearly an 80% drop in the “average” Cu magnitude: the Cu average dropped from 16.0 ppm to 3.4 ppm, and the standard deviation for Cu reduced by more than a factor of 10x! It was the right thing to do. It is important to note that this condition of spiked Cu has speculative causation; I’ll not get into that here. It is also important to acknowledge that very often, these Cu spikes self-correct after a few OCI flushes.
You can see why this is reputed to be a very good engine; it wears very well. Interestingly, the standard deviation for UOA duration in these reports is 4k miles and the average is 7k miles. If you run out an OCI to 11k miles, you’re “normal” within one standard deviation. That is where the OLM often takes the owners in their maintenance journey. The OLM in this vehicle is a “smart” OLM that monitors engine operational conditions, rather than being a “dumb” mile counter. It is not uncommon to see the OLM indicate an OCI between 9-11k miles on this engine in many cases. Clearly, the OLM is reasonably accurate and trustworthy. Essentially, folks tend to OCI this engine too frequently, but enough of them push out the OCI to make the first σ land right around where the OLM typically indicates an OCI is due.
And again, I wanted to know how oil lifecycle affected wear rates, so I looked at three sub-groups; 3.5k miles, 7.5k miles, 11.5k miles. And, again, higher Fe wear rates are revealed towards the front of an OCI …
| UOA average duration | 3.5K | 7.5K | 11.5K |
|---|---|---|---|
| Fe ppm / 1k miles | 3.0 | 2.3 | 2.0 |
In no way does that mean that an engine is grossly being harmed, but it directly contradicts the mantra that “more is better” (“more” indicating OCI frequency and “better” being less wear). What we are seeing is the reiteration of that “sweet spot” (similar to the Ford 4.6L example). Somewhere, the Fe wear rate will begin an ascent and probably become parabolic, but that point is far further down the road than most people think. However, because the samples become sparse at much longer UOA durations, there is insufficient data to determine where the Fe wear rate might begin to escalate. The wear rate is still coming down even approaching 12k miles, although at that small magnitude the variance is in play. What is clear is this: you can change your oil early, but it will not reduce your wear rate. You can put off your OCI for a long time (at least to 12k miles) and it still will not really affect your wear rate.
Next, allow me to illustrate how macro-analysis can be used to determine what is “normal” for separate entities. Consider the following …
Two Duramax equipped 2006 trucks, used in very similar circumstances for the same UOA duration. Both trucks were basically stock, both pulled heavy RVs into the mountains for roughly 6.5k miles, both see heat and cold patterns that are similar to each other and represent full seasonal swings. Essentially they are about as similar as one could expect for two vehicles that are not operated by the same person. There is one significant difference: one vehicle was run on premium synthetic 15w-40 HDEO and utilized bypass filtration; the other truck used conventional 10w-30 HDEO with a normal filter. Here are the exact results in regard to wear, along with the Universal Average and Standard Deviation from the data above:
| | Al | Cr | Fe | Cu | Pb | |
|---|---|---|---|---|---|---|
| Truck A | 2 | 1 | 15 | 4 | 1 | Synthetic oil and bypass (ref 2) |
| Truck B | 2 | 0 | 14 | 3 | 5 | Conventional oil and filter (ref 3) |
| UL (3 sigma) | 6.4 | 1.8 | 47.9 | 16.2 | 9.6 | |
Can we say that either truck did “better” than the other? No; not without true micro-analysis could we make such a determination. But we can say that neither truck did better than the other, because both were easily within the 3-sigma deviation of “normal”. Iron is the greatest indicator of cumulative wear, and these samples were right at “average” levels, despite the towing. At face value, one might claim the synthetic did “better” because the Pb was lower in Truck A and higher in Truck B, but both are well within the typical variance. Ironically, the Cr, Fe and Cu were actually higher in Truck A with synthetic and bypass, but again, they were well within normal variation. It is completely expected to see wear metal counts “bounce” up and down from UOA to UOA. It is “normal” for metals to vary in mass populations and it is “normal” for metals to vary in individual units. But when you see a single sample well within mass-population “normalcy”, you can deduce that it’s performing no better or worse than any other unit using any other fluid/filter combination.
What little variation occurred was the expected normal variation due to any engine in this family. Two vastly different inputs (lubes and filters) did not result in any significant difference, under nearly identical operational conditions at the same duration of exposure.
And so, we can fairly say this of these two examples: in these very similar operational circumstances and conditional limitations, there was no tangible benefit whatsoever to using the high-end products. The high-end products did not distinguish themselves by manifesting into statistically significant results.
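The normalcy test applied to those two trucks can be written directly. The population mean and standard deviation below are hypothetical placeholders, not the real Duramax figures; only the logic matters here.

```python
def within_sigma(value, mean, sd, k=3):
    """True if a single reading sits inside the k-sigma 'normal' band."""
    return abs(value - mean) <= k * sd

# Placeholder Fe population stats (hypothetical, purely for illustration)
FE_MEAN, FE_SD = 14.0, 11.0

print(within_sigma(15, FE_MEAN, FE_SD))   # a Truck A-style Fe reading
print(within_sigma(14, FE_MEAN, FE_SD))   # a Truck B-style Fe reading
print(within_sigma(200, FE_MEAN, FE_SD))  # a spike that would be abnormal
```

Two readings a single ppm apart are indistinguishable against a band that wide; only a genuine outlier fails the test.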
Toyota 3.4L gasoline V-6
Here is some good data on the famous engine that’s been around a very long time. This data came from ten years and nearly 400 samples; there were no standout years to mention as they were all reasonably similar.
Here is the data:
| Oil Miles | Vehicle Miles | | Al | Cr | Fe | Cu | Pb |
|---|---|---|---|---|---|---|---|
| 16000 | 310254 | ppm / 1K | 0.4 | 0.0 | 1.1 | 0.8 | 0.5 |
The numbers speak to what a great engine this is.
Again, when broken into sub-groups based upon exposure duration:
| UOA average duration | 2.0K | 3.5K | 7.0K | 10K |
|---|---|---|---|---|
| Fe ppm / 1K miles | 2.6 | 1.3 | 1.1 | 0.9 |
The “sweet spot” occurs a bit earlier on this engine than in the other two engine examples, but it does indeed exist. Yet again, the residual oil and chemical reactions are affecting the wear rate up front. Once it settles, the “sweet spot” does reach out further than most would realize. I cannot state where the wear would begin to escalate; there are too few samples to get good analysis data resolution. At 10k miles, it’s still experiencing extremely low wear rates. To say this is a fantastic-wearing engine would be a gross understatement.
GM 5.7L OHV gasoline V-8
The good ol’ Chevy 350 is reviewed here. All samples were analyzed in 2012; more than 500 of them.
| | Al | Cr | Fe | Fe′ | Cu | Cu′ | Pb | Pb′ |
|---|---|---|---|---|---|---|---|---|
| ppm / 1K miles | 1.2 | 0.3 | 6.8 | 4.8 | 2.6 | 1.4 | 2.7 | 1.5 |
- UOA at 3k miles: Fe wear rate ~6.8 ppm / 1k miles
- UOA at 5k miles: Fe wear rate ~4.9 ppm / 1k miles
- UOA at 7k miles or greater: (insufficient data)
To be blunt, this engine really does not wear as well as some other engines. Of the 513 total samples, there were 216 that had wear numbers high enough to skew the data with high Fe, Cu and/or Pb. Considering the low average UOA of 3.3k miles and std dev UOA of 2.5k miles, these engines do not exhibit impressive wear performance. Of the 216 engines, only 12 had multiple wear-metal issues (defined as two or three cautionary metal counts in the same UOA); the rest were unique UOAs showing a single cautionary reading. When 42% of the samples have high wear, it’s hard to say these are abnormalities; they are, in fact, such a large portion of the population that we cannot discard them. I processed the info to show you how the numbers skew the data, but it is not fair to remove 42% of a population; they belong in there. This engine simply does not wear well. Even after creating a “prime” revision column for Fe, Cu and Pb, see how high the average metals are per 1k miles; Fe, in particular, is nearly 7 ppm / 1k miles! Ironically, however, that does not keep this engine from running strong; it just wears heavily while doing so. Again, we must return to the concept of “normal”; the data is telling us that it is expected of this engine family to shed metals at higher wear rates and with great variation. Lube condemnation points definitely will come sooner with this engine. It is a very well respected engine that has great power potential and a strong following; that cannot be denied. But don’t let mythology belie the facts; these engines wear heavily. One might be able to point to simple age and design factors. The GM-350 is a very old engine design; if someone says “they don’t make ‘em like they used to…” you might want to stop and consider what that really means.
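The judgment call being made here (trim 0.5% of a population as abnormal, but refuse to trim 42%) can be expressed as a simple helper. The 5% threshold is my own illustrative assumption, not a statistical law; as noted earlier, experience with the subject matter sets the real cutoff.

```python
def small_enough_to_trim(n_suspect, n_total, threshold=0.05):
    """Heuristic: treat a suspect group as removable 'abnormal' data only
    when it is a tiny slice of the population (threshold is illustrative)."""
    return n_suspect / n_total < threshold

print(small_enough_to_trim(3, 548))    # the three Pb spikes: ~0.5% of samples
print(small_enough_to_trim(216, 513))  # the high-wear GM 350 samples: ~42%
```

The first case qualifies as an anomaly worth trimming; the second is simply what this engine population looks like.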
We can acknowledge that residual oil may be contributing to the wear rate at 3k miles, but it’s on a much larger scale than in the other examples in this article, and the wear stays higher throughout the data stream compared to other engines. In short, higher wear leaves more residual concentration, and when combined with yet more escalated wear, it just does not go away easily. It’s a vicious circle of self-fulfilling prophecy.
Detroit Diesel 12.7L Series 60 diesel I-6
A stalwart of the on-highway heavy trucking and motor-coach industries, the Series 60 engine has been around since the late 1980s and has served in many applications with various displacements. It is well respected for a very good reason: it lasts a long time with good power production. Here is the data from 511 UOAs collected from 2009 to 2012, a three-year sampling:
| | Oil Miles | Vehicle Miles | Al | Cr | Fe | Cu | Pb |
|---|---|---|---|---|---|---|---|
| ppm / 1k | | | 0.2 | 0.1 | 1.7 | 0.1 | 0.3 |
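The per-1k-mile rates in the table are straightforward to compute from a raw UOA reading and the miles on the oil; here is a minimal sketch (the 27 ppm / 16,000-mile example is invented, chosen only because it lands near the table's Fe rate):

```python
# Sketch: normalizing a raw UOA reading to ppm per 1,000 miles on the oil,
# as in the rate row of the table above. Example numbers are hypothetical.
def ppm_per_1k(raw_ppm: float, oil_miles: float) -> float:
    """Wear-metal concentration normalized to 1,000 miles of oil use."""
    return raw_ppm / (oil_miles / 1000.0)

# e.g. 27 ppm Fe after a 16,000-mile OCI
print(round(ppm_per_1k(27, 16000), 2))  # → 1.69
```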
Instead of speaking to wear rates, I'm going to focus on condemnation limits. Detroit Diesel does publish condemnation levels for the wear-metal content of UOAs (ref #4). They have no limit for Al and Cr, but they limit Fe at 150 ppm, Cu at 30 ppm and Pb at 30 ppm. It is interesting to note that of 511 total samples, none were over the 150 ppm Fe limit. There were two samples with Cu over 30 ppm: one at 33 ppm and one at 128 ppm. There were four samples with Pb over 30 ppm: two at just 31 ppm (technically over the limit), one at 43 ppm and one at 173 ppm. Only 6 of the 511 samples exceeded the established condemnation limits, and yet look how low the averages and rates are. Even at 16k-mile OCIs, people change the oil in this engine far too often.
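Screening a data set against published condemnation limits like these is a simple filtering exercise; here is a minimal sketch (the limits match those quoted above, but the four sample UOAs are invented):

```python
# Sketch: screening UOAs against the condemnation limits quoted above
# (Fe 150 / Cu 30 / Pb 30 ppm). The sample readings are hypothetical.
limits = {"Fe": 150, "Cu": 30, "Pb": 30}

samples = [
    {"Fe": 45, "Cu": 5,  "Pb": 8},
    {"Fe": 60, "Cu": 33, "Pb": 12},   # Cu over limit
    {"Fe": 38, "Cu": 7,  "Pb": 173},  # Pb over limit
    {"Fe": 52, "Cu": 4,  "Pb": 6},
]

flagged = [
    (i, metal, s[metal])
    for i, s in enumerate(samples)
    for metal in limits
    if s[metal] > limits[metal]
]
print(flagged)  # each hit: (sample index, metal, reading)
```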
Additionally, one cannot exclude the contribution of sump capacity and how it affects metal concentrations. Part of the reason the Series 60 does so well is a good lube-system design with a large crankcase pan; that holds down the contamination per unit of measure. That in turn allows for longer OCI durations, a goal of over-the-road applications seeking to maximize drive time and reduce routine maintenance downtime.
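The dilution effect of sump size is easy to see with a little arithmetic; in this sketch, the same mass of shed iron reads very differently in a small car sump versus a large heavy-truck sump (the metal mass, sump volumes and oil density are illustrative assumptions, not measured data):

```python
# Sketch: why sump capacity matters. The same mass of shed metal yields a
# lower ppm concentration in a larger sump. All figures are hypothetical.
def concentration_ppm(metal_mg: float, sump_liters: float,
                      oil_density_g_per_l: float = 870.0) -> float:
    """mg of metal per kg of oil, i.e. ppm by mass."""
    oil_mass_kg = sump_liters * oil_density_g_per_l / 1000.0
    return metal_mg / oil_mass_kg

# 500 mg of Fe shed into a 5 L passenger-car sump vs. a 38 L truck sump
print(round(concentration_ppm(500, 5), 1))   # small sump: higher ppm
print(round(concentration_ppm(500, 38), 1))  # large sump: lower ppm
```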
We can use mathematical resolution to view any type of data, to find normal performance, and root out statistical anomalies.
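That "mathematical resolution" can be as simple as computing the average and standard deviation of a metric and flagging readings that fall well outside the expected variation; here is a minimal sketch using a two-sigma screen (the Fe readings are invented, and the two-sigma cutoff is one common convention, not the only one):

```python
# Sketch: find "normal" and root out candidate anomalies by flagging
# readings more than ~2 standard deviations from the average. Data invented.
import statistics

fe_ppm = [20, 22, 19, 25, 21, 24, 23, 20, 85, 22]  # one suspect reading

mean = statistics.mean(fe_ppm)
sd = statistics.stdev(fe_ppm)  # sample standard deviation

anomalies = [x for x in fe_ppm if abs(x - mean) > 2 * sd]
print(f"mean={mean:.1f}, sd={sd:.1f}, anomalies={anomalies}")
```

Note that a single extreme reading inflates both the mean and the standard deviation, which is exactly the skewing effect discussed in the engine examples above.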
What we cannot conclude
Macro-analysis does not allow any conclusion to be drawn as to which product(s) might be "better" or "worse" than others in the grouping, as we can with micro-analysis. It is very common to see this done, and yet it is very wrong. When any one sample is within one or two standard deviations of average, thereby defining itself as "normal", we can only conclude that the events and products that led to that unique data stream were also "normal". Any variance is due not to one particular product or condition, but to the natural variation of macro-inputs. Therefore, we cannot say that brand X was "better" than brand Y or brand Z, because typical variation is in play.
What we can conclude
Only with micro-analysis, using long, well-detailed controlled studies, can we make specific determinations as to what might be “better” or “best” for an application.
However, using macro-analysis, we can state that if two separate samples are both within standard deviation, the separate conditions and products did not manifest into uniquely different results. When viewed within an engine family, if engine A is compared and contrasted to engine B, and those two engines used different lubes but produced similar wear-metal counts and rates, then we can conclude that neither oil distinguished itself. And when the results are within one standard deviation, the data gives no basis to claim that either product had an advantage over the other. Essentially, under these conditions, we cannot say which choice is "better"; we can only say that neither is.
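That comparison rule reduces to a simple check against the family statistics; here is a minimal sketch (the family average, standard deviation and the two engines' rates are all hypothetical):

```python
# Sketch: the macro-analysis comparison rule above. If both engines' wear
# rates fall within one standard deviation of the family average, neither
# lube can be called "better". All figures are hypothetical.
family_avg, family_sd = 2.0, 0.8  # Fe ppm per 1k miles for the engine family

def is_normal(rate: float) -> bool:
    """Within one standard deviation of the family average."""
    return abs(rate - family_avg) <= family_sd

engine_a, engine_b = 1.6, 2.5  # same engine family, different oils
if is_normal(engine_a) and is_normal(engine_b):
    print("both normal: no advantage attributable to either oil")
```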
Knowing one’s Limitations
Standard deviations can be large or small for all kinds of equipment, depending upon your definition of the words "large" and "small". As a frame of reference, when the standard deviation is more than 50% of the average magnitude, many consider it "large"; I would not disagree. But that does not preclude it from being "normal", as defined by this concept: happening with great regularity and having no adverse successive effects. Ryan Stark of Blackstone will tell us that the greatest variable affecting wear is the usage factor; the data here may well support that conclusion in some circumstances. But what is also clear, at least in all these examples, is that the variation of that usage factor is still "normal", and the standard deviations are large enough that most of us are "normal" in our use of equipment. And OCI duration (too short or too long) can affect wear rates as greatly as usage factors.
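The 50%-of-average rule of thumb is simply a threshold on the coefficient of variation (standard deviation divided by mean); here is a minimal sketch, with the first example drawn from the GM-350 data quoted earlier (3.3k-mile average OCI, 2.5k-mile standard deviation) and the second invented:

```python
# Sketch: the "large" standard deviation rule of thumb above, expressed as
# a coefficient-of-variation check.
def spread_is_large(mean: float, sd: float) -> bool:
    """True when the std dev exceeds 50% of the average magnitude."""
    return sd > 0.5 * abs(mean)

print(spread_is_large(3300, 2500))  # OCI miles: avg 3.3k, sd 2.5k → True
print(spread_is_large(1.7, 0.4))    # a tighter wear rate (invented) → False
```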
Unfortunately, you’ll never know how many abnormalities are present, nor whether they have been pre-screened for you, because most UOA services do not perform this extra mathematical filtering. What you can take solace in is this: if your UOA is near, or less than, the “universal average”, you’re probably in very good shape; you are, in essence, “normal”.
What is applicable to most of us…
I’ll throw out some generalizations here that are a result of the data I’ve collected from many thousands of UOAs from all kinds of equipment, from many different sources:
1) The large diversity of use, environment, lube grade, etc., is already accounted for in macro-analysis data sets
2) The dedication needed for correct micro-analysis methodology is rare and goes unheeded by most people
3) There is always a “best” combination of equipment, lube and filter, but it goes undiscovered by most people because they do not apply the correct methodology
4) That “best” combination is only applicable to unique individual equipment and given set of limited operational circumstances
5) There is a “sweet spot” where the equipment and lube perform better together
6) The “sweet spot” starts at a point unique to each piece of equipment, and lasts much longer than many people would suspect
7) Wear rates will generally shrink as the oil is used, contrary to popular belief
8) Changing oil frequently does not reduce wear in healthy engines with healthy oil
9) Changing oil too soon is a waste of product, regardless of what brand/grade/base stock of lube you choose to utilize
10) Condemnation of the lubricant should be based upon a multitude of criteria, not any one criterion taken out of context
11) Condemnation is much further out than many would suspect; only if you were to over-run the “sweet spot” greatly would wear begin to escalate
12) Condemnation levels are generally misunderstood, if acknowledged at all
13) To realize the claimed benefit of any premium product, one must operate in a conditional set of circumstances that manifests into statistically distinguishable differences; the benefit must be tangible, otherwise the benefit does not exist
UOAs are great tools, but you must know how to properly process the data and interpret the results. You must know not just the averages, but also whether any abnormalities are embedded in those averages and how large the standard deviation is. With all that in mind, you can then use the UOA as a tool in either micro- or macro-analysis, to see how well your equipment performs with respect to itself and to others like it.
I hope this enables you to view your UOA data under a new light, allowing the ability to determine what is “normal” and what is “better” in proper context.
Acknowledgements and References
This article is the sole property of David E. Newton as published on “BITOG” with contribution from Ryan Stark of Blackstone. All rights apply.
1) SAE study: http://papers.sae.org/2007-01-4133/
4) CIWMB engine oil filter study, 2008, page 11: http://www.calrecycle.ca.gov/Publications/Documents/UsedOil/2008020.pdf