Wear vs. oil-filter efficiency: SAE/Amsoil paper

Originally Posted By: dnewton3
So, can we agree that neither you nor I believe that 4.6L engine parts were ever replaced? Let's not let the he said/she said effect drag us down. I don't have any reason to believe car engines were "rebuilt" as you stated. Nor do you. Correct?


yes...replete with your qualifiers, but yes.
 
Originally Posted By: Shannow
...

Now, back to the in-field analysis that they did, with the actual engines, in the actual field ... and your belief that the paper proves that the bench tests are proof of your theories ...

[attached chart: full-688-9907-3_cvs_various_mileage.jpg]


Would you advise the operator of car B, monitoring his Fe, TAN, and TBN at 5,000 miles, to push through to 15,000 miles?

Why, or why not?

Why did the total wear metals (Fe) drop in the OCIs past that point, if the technique is repeatable?

If the same engine and same oil are a repeatable combination, how does it change so markedly from one run to the next?




I would not advise anyone what to do with those UOA results, because they are not, in my belief, "normal" UOA data. They are clearly abnormally high.


However, it has just occurred to me that you and I may have a VERY SUBSTANTIALLY DIFFERENT understanding of the UOA data from the TCB study.

First and foremost, I want to say that despite our tense differences, I'm going to lay aside the attitude and just ask questions, without any intent to trap you, tease you or belittle you. I want to ask fair questions here because I think I may have an understanding of WHY you and I see things differently. AND I ALSO WANT TO ADMIT THAT I DON'T HAVE A CLEAR ANSWER TO THE QUESTIONS I AM ABOUT TO ASK YOU. I am looking for your input, both to understand where your point of view lies, and because it might give me insight into a view I may have overlooked. If you can accept this, read on. If not, please just ignore the rest of this post.

I'd like to state that I see the lubes as a GF-3, GF-4, and (what will become a) GF-5. That's just for clarification of how I'll refer to them. I realize it was a prototype lube, but it's close to what we'd call it today. Please don't go after me for this; I realize it's not 100% accurate. We can also refer to them as lubes I, II and III (using the paper's tables as the naming convention).




Q: What is your understanding of just how the oils were used, collected and subsequently studied? Please be specific; tell me what you think happened in terms of what is and is not present at the test rig and how it got there.

I ask because I am making presumptions that you may not be making. There is room for a lot of interpretation in some aspects of the methodology of the study.

The SAE study data clearly shows two phenomena: higher wear at the front end of an OCI, and falling wear rates as the OCI extends. They noted those things repeatedly. What they do not acknowledge, but I certainly call into question, is the magnitude of wear in their study; it is not "normal" by any means. However, at the end of their study, their wear rates were very realistic and totally in line with what we see in the real world. So how does an oil go from being unrealistic to realistic?

Regarding the overall Fe wear from all the samples, where does the Fe ppm come from? Are the Fe results a view of the wear from the engine PRIOR to the rig test, or is the Fe a combination of 4.6L wear AND 2.0L cam wear, or is it only wear from the rig? IOW, at what point did they take the UOA ppm count? They don't actually say (or at least I don't see it; maybe I overlooked it). Did they pull the oil and then test it immediately for the data in your image above? Or did they run the oil in the rig and then UOA it? Or did they do both and then subtract out the 4.6L baseline from the overall numbers?

I cannot understand how they got such high Fe ppm counts. They do say they used "new" cars, and as I discussed previously, I don't know what "new" means: new off the showroom floor, or new after they did a break-in cycle? The magnitude of the ultra-high Fe numbers at the front of the short OCIs makes me wonder where such giant metal numbers came from. Are we seeing a combination of "new" engine wear AND "new" lobe/shim wear?

As they stated, each oil type was only run in one car. The phos levels of the GF-3, 4 and (soon-to-be) 5 oils vary. Are we seeing a double hit of break-in, and is the lower-phos "C" type oil just more susceptible?

And as I said, despite the unworldly high Fe in the short OCIs, the longer OCIs settle down into totally believable numbers. The wear rates become "real" and totally echo my data.

I feel unable, and uncomfortable, answering your question about oil B at 5k miles. I don't believe the testing represents typical UOA data; in fact, I'm certain it doesn't. They have three UOAs; I have over 600 from that same engine series. They have an engineering sample set; I have a statistically solid macro data set. I don't trust their data to be applicable to real observations in the field; mine is thoroughly "normalized". I am, to be honest, unwilling to comment on the question you ask because I just don't think the UOA data is "real" in terms of what normal wear represents. I am not trying to be evasive; I just don't trust the data to be "normal", and I am not totally sure what all the UOA represents in terms of contributions to the Fe.

It appears to me that the GF-4 lube has an issue throughout the bulk of the testing. Its wear rates are way higher than types I and III. Why? I don't know, and they don't say.

I make a distinction here, one I've brought up before: there is a difference between the phenomena and the realities.
The SAE study shows that short OCIs have high wear, and longer OCIs will produce lower wear rates, BUT the magnitudes are unnatural at the front end.
My data shows these same two conditions, but over a HUGE SET OF ALL KINDS OF ENGINES from all manner of applications.
The correlation between the studies is undeniable; those two phenomena clearly exist in both. But I don't claim that their study shows anything more than what they set out to prove: that older oil can develop a more mature TCB, and that that product contributes to lower wear rates. I offer their results as an explanation for what I see in the field. However, their data would imply that LOTS of wear is induced by an OCI, whereas I find no such reality. Frequent OCIs are not harmful at all to the engine; they just do not offer any wear reduction, as some folks believe "new" oil is "better" for wear.

It is my belief that the UOA data comes from the 4.6L engines, and the other frictional and wear data comes from the 2.0L rig in the lab. But I cannot for the life of me understand how they got such high wear metals with the type II oil. Was it ONLY the 4.6L wear, or a combo of the wear from both contributors (engine and rig)? It would have been foolish for them to use "new" engines; what self-respecting engineer in the auto and lube industries does not expect break-in metals? (Remember, it was Ford in collaboration with Conoco.) Were they THAT stupid, to use brand-new engines and test them right from mile 1? Still, that does not explain why oil II (GF-4) did so much worse in terms of wear data.

Point being this: the two phenomena were present in all three engines, but the engine with oil II was much higher than the other two, and all three were higher than "normal" data would ever lead us to expect.



Here's what I think they did with the cars (not the rigs in the lab). Do you see it the same way?
Load in new oil and a filter, drive 3k miles, do a full O/FCI and use the oil as the 3k "sample".
Load in new oil and a filter, drive 5k miles, do a full O/FCI and use the oil as the 5k "sample".
Etc ..

But is it possible that they did this:
Load in new oil and a filter, drive 3k miles, draw a sample of oil for the rig test, but continue on to the next sample point without an OCI?
Same at 5k miles, continuing on the same fill?
Etc.?

I don't think they did it the second way, BUT ...
how is it that engine II with oil II managed to have 57ppm of Fe at 3k miles, but then jump to 155ppm at 5k, then only slightly lower at 120ppm at 7.5k, but then DROP like a rock to 34ppm at 10k and 32ppm at 15k?
Either the engine was having a really bad month, or they collected oil in a manner which fused some metal counts together.
Engine II took off like a rocket in Fe wear at 5k miles, but by the end settled back down to "normal" wear rates, as did the other two. Engine II had a vicious spike in wear, and I'm not able to understand why that happened at mid-test.
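To put rough numbers on the rate-versus-total distinction, here is a quick back-of-the-envelope sketch. It assumes each data point is a separate fresh fill sampled at the end of its own OCI (the first interpretation above); the Fe values are the ones quoted for engine/oil II.

```python
# Quick sketch: convert the end-of-OCI Fe readings quoted above for
# engine/oil II into a wear RATE (ppm per 1,000 miles of that fill).
# Assumes interpretation #1: a separate fresh fill for each OCI.
samples = {3_000: 57, 5_000: 155, 7_500: 120, 10_000: 34, 15_000: 32}

for miles, fe_ppm in samples.items():
    rate = fe_ppm / (miles / 1_000)   # ppm of Fe per 1,000 miles of the fill
    print(f"{miles:>6} mi OCI: {fe_ppm:>3} ppm Fe -> {rate:4.1f} ppm per 1k miles")
```

Read that way, the 5k and 7.5k fills are the outliers; the 10k and 15k rates are down where believable data lives.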

Do you believe that it was one OCI sampled along the way for a total of 15k miles in the test, or successive OCIs for a total of 40,500 miles?
And why do ALL of them have unnaturally high metals?


Just asking; trying to understand where you are at on this.
 
Originally Posted By: dnewton3
The graphs you included from the study really prove my point, I believe. Look at the differences between filters B and C. Filter B has a rating effectively twice as porous in terms of um size (7um vs 3um), and yet the efficiency differences at 10um are only 20% apart. As filters get ever closer in their effectiveness, the results they produce get ever harder to distinguish. The wear chart shows this; you even noted how close those two lines lie near each other.

This completely proves my point. Taking filters off the shelf that we commoners would buy, and then putting them into a real-world wear data study, would never show us that one filter was "better" than another. Pitting a Wix/NG against a TG, or even a MC against a D+, will never give us data fine enough to see a clear winner, because the variation in filter performance will be much larger than the disparity between filters. And this is why I generally discount any of these "filter studies" most of us look at, because they cannot, do not, will not replicate anything we see in our garages.


The bus study does show that a 99% @ 20 micron filter does a much better job at keeping the oil cleaner compared to a 50% @ 20 micron filter. If Filter D had been run in the field, its resulting curve would be sitting way above Filter A in Figure 2.

So I'd have to disagree with your statement above shown in red. Pitting an Ultra against a WIX XP in this test would essentially be like comparing Filter B (Ultra) to Filter D (XP). I do agree that comparing a WIX/NG to a TG isn't going to show much difference, but when comparing one end of the "off the shelf" oil-filter efficiency spectrum to the other (50% vs 99% @ 20 microns), it certainly should show a pretty distinct difference, just as was shown in the bus study.
 
Originally Posted By: dnewton3
engine II with oil II managed to have 57ppm of Fe at 3k miles, but then jump to 155ppm at 5k, then only slightly lower at 120ppm at 7.5k, but then DROP like a rock to 34ppm at 10k and 32ppm at 15k?


I wasn't going to ask, but changed my mind.

What is the mechanism by which wear particles (and therefore total wear, as I understand your position in this thread) _decrease_ as miles accumulate? I can understand wear _rates_ decreasing, but I cannot understand total _wear_ reversing. I'm thinking this must be an artifact of how the testing and sampling was done, but I don't know so I'm asking. Forgive me if this is covered in this thread and I didn't see it. I find a large number of the individual posts in this thread to be extremely repetitive, so I skip over entire paragraphs and more sometimes.
 
Originally Posted By: ZeeOSix
Originally Posted By: dnewton3
The graphs you included from the study really prove my point, I believe. Look at the differences between filters B and C. Filter B has a rating effectively twice as porous in terms of um size (7um vs 3um), and yet the efficiency differences at 10um are only 20% apart. As filters get ever closer in their effectiveness, the results they produce get ever harder to distinguish. The wear chart shows this; you even noted how close those two lines lie near each other.

This completely proves my point. Taking filters off the shelf that we commoners would buy, and then putting them into a real-world wear data study, would never show us that one filter was "better" than another. Pitting a Wix/NG against a TG, or even a MC against a D+, will never give us data fine enough to see a clear winner, because the variation in filter performance will be much larger than the disparity between filters. And this is why I generally discount any of these "filter studies" most of us look at, because they cannot, do not, will not replicate anything we see in our garages.


The bus study does show that a 99% @ 20 micron filter does a much better job at keeping the oil cleaner compared to a 50% @ 20 micron filter. If Filter D had been run in the field, its resulting curve would be sitting way above Filter A in Figure 2.

So I'd have to disagree with your statement above shown in red. Pitting an Ultra against a WIX XP in this test would essentially be like comparing Filter B (Ultra) to Filter D (XP). I do agree that comparing a WIX/NG to a TG isn't going to show much difference, but when comparing one end of the "off the shelf" oil-filter efficiency spectrum to the other (50% vs 99% @ 20 microns), it certainly should show a pretty distinct difference, just as was shown in the bus study.



They use a mean; there is no indication of variation. They don't have enough data to understand that.

Until I see a decent real-world test of typical filters (say 85% up to 99%) without giant kickers, I will be skeptical. This is because I see all manner of data, and I've seen the UOAs with these choices. The effect of filter choice is never distinguishable within normal operation.

You are not wrong to disagree, but there is no data or study yet to support your position, while data does exist to support mine.
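To illustrate what a single run per filter (a mean with no variance estimate) can and cannot tell you, here's a rough sketch with made-up numbers; the scatter and offset are assumptions for illustration, not anything from the bus study or my UOA database.

```python
# Illustrative only: the scatter and offset below are made-up numbers.
# The point: when run-to-run scatter exceeds the true filter-to-filter
# difference, a single sample per filter will often rank the filters
# backwards.
import random

random.seed(1)
TRUE_DIFF = 3.0    # assumed true Fe advantage of the better filter, ppm
SCATTER = 10.0     # assumed run-to-run standard deviation of UOA Fe, ppm
TRIALS = 100_000

backwards = sum(
    random.gauss(20.0, SCATTER)                # one sample, worse filter
    < random.gauss(20.0 - TRUE_DIFF, SCATTER)  # one sample, better filter
    for _ in range(TRIALS)
)
print(f"Single-sample comparisons rank the filters backwards "
      f"{100 * backwards / TRIALS:.0f}% of the time")
```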
 
Originally Posted By: bulwnkl
Originally Posted By: dnewton3
engine II with oil II managed to have 57ppm of Fe at 3k miles, but then jump to 155ppm at 5k, then only slightly lower at 120ppm at 7.5k, but then DROP like a rock to 34ppm at 10k and 32ppm at 15k?


I wasn't going to ask, but changed my mind.

What is the mechanism by which wear particles (and therefore total wear, as I understand your position in this thread) _decrease_ as miles accumulate? I can understand wear _rates_ decreasing, but I cannot understand total _wear_ reversing. I'm thinking this must be an artifact of how the testing and sampling was done, but I don't know so I'm asking. Forgive me if this is covered in this thread and I didn't see it. I find a large number of the individual posts in this thread to be extremely repetitive, so I skip over entire paragraphs and more sometimes.



I can only presume it's different loads in the sump? That would allow the wear rate to drop AND the accumulation to drop. But I'm not clear on how they did the test; they do a poor job of defining the details of their methodology.

You're not wrong to ask. That is why I think there is a discrepancy in how many of us interpret the results. Shallow and I obviously disagree, but I don't know that the details are there to resolve this.

Hence, my post above to clarify the issues. We may never know for sure. We might have the ability, as a group, to reason it out. I am most certainly open to suggestions at this point. I think I know what happened, but I'm open to having an honest discussion about it.


I would PRESUME that the lube analysis was done after the loads came out of the sumps; one per OCI. But I can at least offer a bizarre option: that they somehow combined the wear from "road use" with additional wear from the rig test? It would be silly to do so, but it's not like these tests are perfect by any means. But if they did it in a normal fashion (UOA of the oil load after the road test), then why are the wear metals so silly high? Were the engines brand new? That would explain high metals from break-in, but then what moronic engineer thought it was a great idea to test wear and friction right off the assembly line?

I do believe they proved what they set out to show: friction and wear drop with the maturation of the TCB. But the conditions are suspect to some degree. This is why I state that the SAE test offers an explanation for the real-world data I see, but it does not PROVE the condition beyond doubt. Their data would lead us to think that OCIs are nearly dangerous; the wear is incredibly high at OCI onset. Their lab tests clearly show the devolving (they use the word "strip") of the TCB as the fresh add-pack attacks the mature TCB, and then it has to rebuild itself as the OCI goes on. But OCIs are not harmful by any means (well ... maybe to your wallet if you overdo it ... but that's a different conversation). My data does show, without doubt, that there is a slight escalation of wear rates at the front end; likely a combo of the alteration of the TCB and residual lube metals. We cannot delineate how much of which is there; we just have to accept it for what it is. But, for sure, without any doubt, frequent OCIs do NOT reduce wear; no data ever supports that conclusion.
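If it helps, here's a toy sketch of that distinction. The rates and decay length are invented (and deliberately modest), not taken from the paper or from any UOA data set; it just shows that with a small, short-lived bump in wear rate after each oil change, five 3k fills and one 15k fill accumulate nearly the same total metal. Frequent changes don't hurt much, but they certainly don't reduce wear.

```python
# Toy model only -- rates and decay length are invented for illustration.
# Wear rate after a fresh fill: slightly elevated, settling toward a
# steady value as the tribo-chemical barrier (TCB) matures.
import math

R_SETTLED = 2.0   # assumed settled wear rate (arbitrary units per 1k miles)
R_FRESH = 2.6     # assumed slightly elevated rate right after an oil change
TAU = 2.0         # assumed distance over which the elevation decays, k-miles

def wear_for_one_fill(k_miles: float) -> float:
    """Integral of r(x) = R_SETTLED + (R_FRESH - R_SETTLED) * exp(-x/TAU) from 0 to k_miles."""
    return R_SETTLED * k_miles + (R_FRESH - R_SETTLED) * TAU * (1.0 - math.exp(-k_miles / TAU))

print("five 3k-mile fills:", round(5 * wear_for_one_fill(3.0), 1))   # ~34.7
print("one 15k-mile fill :", round(wear_for_one_fill(15.0), 1))      # ~31.2
```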


I'd love to have a group of us BITOGers do a full-scale, real-world filter experiment. But I tried to do that with EcoBoost engines, and for all the interest I got, no one was willing to follow the strict protocol that a well-managed trial would demand. A filter study would be similar; too few would want to follow the study methodology in the strict manner that would lend credibility to the results, including a control group, an experimental group, finite parameters, etc. And so we suffer with these types of "close but not quite right" tests in the SAE.
 
Originally Posted By: Shannow
Originally Posted By: dnewton3
You're not wrong to ask. That is why I think there is a discrepancy in how many of us interpret the results. Shallow and I obviously disagree, but I don't know that the details are there to resolve this.


It's been used before, you're not the first.


Sorry, Shannow. I get in a big hurry when I type at times. It wasn't meant as a slight or as derogatory; I owe you an apology, but it was unintended.
 
The particle count data from my shared-sump _motorcycle_ looks considerably better (cleaner) after running 5k with a Fram Ultra than those Frantz test results at all but the very smallest particle size. I like the idea of bypass filtration as much as the next guy, but the Frantz data shows me nothing.
 
Originally Posted By: bulwnkl
The particle count data from my shared-sump _motorcycle_ looks considerably better (cleaner) after running 5k with a Fram Ultra than those Frantz test results at all but the very smallest particle size. I like the idea of bypass filtration as much as the next guy, but the Frantz data shows me nothing.



Where is your report with particle counts? You are comparing a motorcycle gas engine to a large diesel with high mileage? They took contaminated oil and the Frantz cleaned it; did you? Here is their report. No full-flow filter equals the Frantz type. Within 200 miles the oil was comparable to new oil.

http://frantzfilters.com/wp-content/uploads/2015/12/FrantzResults.pdf
 
I have some comments about those UOAs.

What is going on here, in that you've got 186ppm of Fe in only 4700 miles??? That's crazy high.
In the second UOA, the Fe drops to 128ppm, but that's still way high.
Also, the silica is 30+ in both samples.
The Al, Pb and Cu don't change much though.
And the soot is high; 0.6 insolubles at 4.7k miles is also way high. It's not like Rotella is going to oxidize that badly that quickly. Blackstone's "insolubles" count is a combo of soot and oxidation; it's a visual reference. That has to be mostly soot at only 4.7k miles; the oil would not oxidize so quickly.
Potassium and Sodium are high.
What is this? It's the typical 6.0L oil-cooler problem manifesting into a massive EGR-cooler or head-gasket breach ...
The oil is "shagged", to coin a phrase.
This isn't a "normal" sample from a "normal" engine. The wear magnitude would be past condemnation and the rates are way high. The silica and soot are way high. And it's loaded with coolant.

I suspect that the Frantz filter is not so much reducing Fe wear as eliminating the evidence of the wear. Given how well the Frantz is showing that the PCs can be affected down to 2um, and we know ICP is sensitive from 5um on down, I believe you're not only seeing the oil cleaned up, but the Fe wear data being whitewashed out too. Now, that is unavoidable for the system; it cannot be stopped. The Frantz has no ability to decide what to remove and what to leave behind, in that regard.

The reason the Al, Cu and Pb are not changing much is probably because they are all smaller than 2um, maybe even sub-micron in size? So the Frantz cannot remove them from the stream well, if at all. But the Fe is probably 2um and larger, so it's being affected. The Al went down 1ppm, but the Pb and Cu still went up.

The Frantz is doing a great job of removing particulate; some of that is even evidence of a continuing problem!
Show me a Frantz in a "normal" Powerstroke; one that isn't experiencing a massive wear problem.
Then we can talk about typical experiences.
 
Originally Posted By: goodtimes
Where is your report with particle counts? You are comparing a motorcycle gas engine to a large diesel with high mileage? They took contaminated oil and the Frantz cleaned it; did you? Here is their report. No full-flow filter equals the Frantz type. Within 200 miles the oil was comparable to new oil.


It's right here on this board.
https://bobistheoilguy.com/forums/ubbthreads.php/topics/4107645/Honda_NC700X_Red_Line_10W30

Top post has 5000-mile counts, 3rd post has the new-oil counts. That's a shared-sump m/c, so that means engine wear, transmission wear, and clutch particulate. The Fram Ultra has that oil much cleaner than new, and much cleaner than the Frantz marketing material you posted.

EDIT: Okay, I didn't look at the marketing fluff page enough to see that the Ford engine is failing, but yes, it looks like it's probably starting to. Doesn't really matter, from my point of view.
 
Originally Posted By: dnewton3
Originally Posted By: ZeeOSix
Originally Posted By: dnewton3
The graphs you included from the study really prove my point, I believe. Look at the differences between filters B and C. Filter B has a rating effectively twice as porous in terms of um size (7um vs 3um), and yet the efficiency differences at 10um are only 20% apart. As filters get ever closer in their effectiveness, the results they produce get ever harder to distinguish. The wear chart shows this; you even noted how close those two lines lie near each other.

This completely proves my point. Taking filters off the shelf that we commoners would buy, and then putting them into a real-world wear data study, would never show us that one filter was "better" than another. Pitting a Wix/NG against a TG, or even a MC against a D+, will never give us data fine enough to see a clear winner, because the variation in filter performance will be much larger than the disparity between filters. And this is why I generally discount any of these "filter studies" most of us look at, because they cannot, do not, will not replicate anything we see in our garages.

The bus study does show that a 99% @ 20 micron filter does a much better job at keeping the oil cleaner compared to a 50% @ 20 micron filter. If Filter D had been run in the field, its resulting curve would be sitting way above Filter A in Figure 2.

So I'd have to disagree with your statement above shown in red. Pitting an Ultra against a WIX XP in this test would essentially be like comparing Filter B (Ultra) to Filter D (XP). I do agree that comparing a WIX/NG to a TG isn't going to show much difference, but when comparing one end of the "off the shelf" oil-filter efficiency spectrum to the other (50% vs 99% @ 20 microns), it certainly should show a pretty distinct difference, just as was shown in the bus study.

They use a mean; there is no indication of variation. They don't have enough data to understand that.

Until I see a decent real-world test of typical filters (say 85% up to 99%) without giant kickers, I will be skeptical. This is because I see all manner of data, and I've seen the UOAs with these choices. The effect of filter choice is never distinguishable within normal operation.

You are not wrong to disagree, but there is no data or study yet to support your position, while data does exist to support mine.


My position is that a 99% @ 20 micron filter is much better at keeping wear particles out of the oil than a 50% @ 20 micron filter ... just as the bus study data shows. That is "real-world test data" that supports my position.

You keep comparing 85% to 99% filters and say there won't be much difference between them, which I agree with. Figure 2 in the bus study reflects that.

I'm comparing 50% to 99% filters, which show a pretty large difference in the level of oil cleanliness per the bus study. I don't know if anyone here has posted UOAs on the same engine with the same oil comparing a 99% @ 20 micron vs a 50% @ 20 micron oil filter (with the extra particle count data included). That's the data you need to be collecting. If you're looking at UOA data comparing 85% to 99% @ 20 micron filters, and not even getting particle count data, then the difference will be indeterminable in the noise when comparing filters so close in efficiency.

Question about your large collection of UOAs. Are you looking at just the ppm levels of wear metals, or are you also looking at particle count data like they did in the bus study? Figure 2 in the bus study is plotting particle counts vs particle size for each filter. You don't get particle count data from Blackstone or anyone else unless you ask and pay extra for it.
 
I would not disagree; there's a big disparity between 50% and 99% in terms of math. After all, it's either twice-as-bad or half-as-good depending on your PoV (see the sketch below the list).
But most filters we see (and certainly those that any BiTOGer uses) do not reflect that kind of spread.

All these rated at 20um unless otherwise stated: (data taken directly from OEM websites, product pages, etc)
Motorcraft 80%
AC-Delco 98% at 25-30um (that's how they put it ...)
Purolator basic (red can) 96%
Purolator PureOne 99%
Purolator Boss 99%
Fram XG 95%
Fram TG 99%
Fram Ult 99%
Wix/NG 95%
Bosch D+ 99.9% at 40um (a value that is not helpful in terms of comparing, but all they advertise)
Bosch Premium 99%
Mobil 1 99% at 30um (that's thirty microns, not twenty; not a typo on my part)
K&N Gold 99% at ???? (no stated size; missing info = worthless info)
Royal Purple 99% at 25um
Amsoil 98.7%
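For what it's worth, here's a minimal sketch of how I read those percentages. Nothing here comes from test data; it's just the arithmetic of turning a stated single-pass efficiency into the fraction that slips through (and the familiar beta ratio).

```python
# Plain arithmetic, no test data: a stated single-pass efficiency at a
# given particle size, re-expressed as the fraction that slips through
# and as a beta ratio (beta = 1 / (1 - efficiency)).
def slip_fraction(efficiency_pct: float) -> float:
    return 1.0 - efficiency_pct / 100.0

for eff in (50.0, 80.0, 95.0, 99.0):   # illustrative points on the spread above
    slip = slip_fraction(eff)
    print(f"{eff:4.0f}% efficient -> {slip:.0%} slips through (beta ~ {1.0 / slip:.0f})")
```

That's why 50% vs 99% is a huge spread in what gets past the media, while 95% vs 99% is comparatively hard to see in the field.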


What I like about the bus study is that it shows there is a correlation between particle count and UOA Fe wear data.
What I dislike about the bus study is that it's not relevant to our world today.

The two main contributors to PC in terms of abrasive wear are silica and soot. Good air filtration and clean running combustion have all but made that bus study moot.

If you don't have a lot of wear metals in your UOA, then you don't have a lot of wear. Period.

I see no correlation in micro and macro data relevant to today's clean-running engines, regarding oil filtration from the typical choices above. Unless of course you own a mass-transit vehicle with a two-stroke soot-puking engine which was designed 45 years ago, and operate it with minimum-wage operators in inner-city severity of use.
 
As said above ... this is the test someone needs to do. Maybe someone already has; I don't hang out in the UOA forum much. Guys who have posted UOAs with the particle count data show very clean oil with a high-efficiency filter (99% @ 20u). Would it look the same with a 50% @ 20u filter on the same engine with the same oil? I think you'd see a difference.

Originally Posted By: ZeeOSix
I don't know if anyone here has posted UOAs on the same engine with the same oil comparing a 99% @ 20 micron vs a 50% @ 20 micron oil filter (with the extra particle count data included). That's the data you need to be collecting. If you're looking at UOA data comparing 85% to 99% @ 20 micron filters, and not even getting particle count data, then the difference will be indeterminable in the noise when comparing filters so close in efficiency.
 
Originally Posted By: dnewton3
I'd love to have a group of us BITOGers do a full-scale, real-world filter experiment. But I tried to do that with EcoBoost engines, and for all the interest I got, no one was willing to follow the strict protocol that a well-managed trial would demand. A filter study would be similar; too few would want to follow the study methodology in the strict manner that would lend credibility to the results, including a control group, an experimental group, finite parameters, etc.


Study at BITOG:
Like I said before, no one here wants to do it the right way. They all have an interest in the outcome, but none really want to be told how to operate/maintain their vehicles. And so, despite the huge number of members here, with a great wealth of resources in terms of folks who pay for UOAs and PCs, we'll likely never get a decent study put together. And it would take a LONG time to do; we're not going to see results for several years.

Study at the OEMs:
Why would they care? They only have a fiscal interest in safely getting to the end of warranty. Their safety net is telling us to O/FCI often. They get low warranty risks, and we pay for it to boot! This might even be why we see OEM filters below the general aftermarket in terms of efficiency? OTOH, there's a lot of long-running Hondas, Toyotas, Fords, GMs, etc out there that have used nothing but OEM filters and oils, so the formula cannot be all that bad.
 
I still say it could possibly benefit you, and what is the risk? Cheap insurance? What's the price difference, and how often are you out that amount?

I think it's reasonable to conclude that a few tests from people who already perform somewhat repeatable tests could yield data to make educated guesses on. If they were likely to foul their results, then their previous tests were also unreliable, even before A/B/A testing on the same application / same oil / same everything but the filter (and of course, the seasonal cycles would be hard to line up, among other variables).

I personally enjoy using the Fram Ultra for its price point. I don't see the purpose of throwing >$15 at the newer M1 Annual filters or a Royal Purple or an Amsoil, so the Ultra is priced at a reasonable spot for the value, to me. Technically, it's hard to prove without the right study. I think with some engines, and some people posting data, it could be measured, but "real-world testing" these scenarios involves catching all of this information now, before engines get retired and the data points are missed. It's hard to control real-world variables without accepting them as a risk up front and seeing what the data says without jumping to conclusions.

Without that data on hand, is there any solid knowledge of a specific, known range of particle sizes that definitively leads to "increased wear" at some point in operation? That's what would make efficiency at the smaller sizes relevant, and thus the entire argument. I think overall oil cleanliness, even if it could be proved to matter for wear, may affect how quickly the oil itself breaks down, and perhaps that would lead to "wear" conditions being met more often over the course of an engine's use. I mean, why is it that someone can post a data set for the '93 Civic that showed incredible results primarily from a bypass system and lots of highway use? Cleaner oil leads to longer usable life. Even if you can't prove that wear in and of itself is measurably reduced, could the argument instead be that you could save on oil consumption and increase intervals with an overall effort to keep the oil cleaner, longer?
 
I can't go back and edit, but sorry for the wordiness of my post above.

Basically, a consideration factor to me is "cheap insurance" on an extended interval, due to an improvement in oil cleanliness over the OCI. So, regardless of whether or not higher efficiency in and of itself reduces wear, I believe the cost difference, e.g. an Ultra for $9 vs. say a NapaGold for $7, is worth it to me, since this price difference is negligible after 3 or 4 months. Think: repeatable.

So, might cleaner oil influence the effects of extending the interval over the course of an engine's life? Even if you aren't reducing wear from the particulates themselves, you'd carry fewer deposits in the oil's load, given the measurable jump in cleanliness between the less-efficient filters and the 99%-at-25-microns-or-less crowd.
 
My logic is that cleaner oil is better. UOA particle count data shows that higher-efficiency oil filters result in cleaner oil (i.e., a better ISO 4406 cleanliness code). It's been mentioned that there's a correlation between cleaner oil and lower wear-particle counts in UOA. It should be pretty easy to see that a 99% @ 20 micron oil filter will keep the oil much cleaner than a 50% @ 20 micron filter, and that's been evident in the discussions in this thread.
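For anyone unfamiliar with the ISO 4406 code mentioned above, here's a rough sketch of how it's built from particle counts. The real standard assigns scale numbers from a published table of count bins (each step is roughly a doubling), so the log2 shortcut below is only an approximation, and the counts are hypothetical.

```python
# Rough sketch of an ISO 4406-style cleanliness code.  The standard uses
# a published table of particle-count bins; each scale step roughly
# doubles the allowed count per mL, so a log2 approximation is used here
# purely for illustration.
import math

def scale_number(count_per_ml: float) -> int:
    if count_per_ml <= 0:
        return 0
    return max(1, math.ceil(math.log2(count_per_ml)) + 7)

# Hypothetical counts per mL at the three reporting sizes (>=4, >=6, >=14 um)
counts = {">=4 um": 2000.0, ">=6 um": 500.0, ">=14 um": 90.0}
code = "/".join(str(scale_number(c)) for c in counts.values())
print("ISO 4406 code (approx):", code)   # -> 18/16/14 for these made-up counts
```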
 