Originally Posted By: dnewton3
If GM wanted a “real world” test, they could have stationed an engine outside, on a test stand, using common intake air and experiencing normal environmental fluctuation. They would follow “normal” maintenance conditions, running an OCI for perhaps 100 hours (a rough equivalent to 6k miles) at reasonably varied throttle settings, using a bulk supply of decent quality oil changed in the sump per a fair maintenance plan, all as constants. They would then run several “control” studies to find out what “normal” wear rates were. After they established those controlled wear rates, only then could they alter filtration efficiency, and study the effects over many successive FCIs, holding the engine operation, oil supply and the OCI duration as constants. THAT would be reasonably close to real world conditions. They would NOT accelerate wear by overdosing the sump so that the test would be over in 8 hours; they would take weeks upon weeks to accumulate “normal” wear in both the control and variable portions of the study. But they didn’t.
Why not? Here’s why:
“Used oil analysis from engines in the field will not typically show such a clear correlation since wear metals generated between oil changes will be at much lower concentrations.”
That one sentence tells it all, guys. That one sentence is the acknowledgement that real world conditions cannot replicate the results, because:
1) There are other contributors to wear-control
2) Typical wear is so low that filter selection is effectively moot; normal operational wear variation is statistically larger than any effect filtration efficiency can have on wear rates (see the rough sketch below)
GM solely wanted to manipulate filtration and eliminate the other wear-control contributors so they could see ONLY the effect of filtration. That was their DOE (design of experiments).
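To put a rough illustration behind point 2 above, here is a quick back-of-the-envelope simulation in Python. Every number in it (baseline ppm, run-to-run scatter, the size of the filtration effect, the sample count) is an assumption made up purely for illustration; none of it comes from the GM paper or from real UOA data. It just shows the shape of the problem: a small mean shift buried inside larger normal variation.

```python
# Quick illustration of point 2 above: if normal run-to-run variation in
# end-of-OCI wear metals is bigger than the shift a more efficient filter
# produces, field UOA data cannot resolve the filtration effect.
# All numbers below are invented assumptions, NOT values from the GM study
# or from any real UOA database.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_samples = 30        # hypothetical UOA reports per filter group
baseline_ppm = 20.0   # assumed typical end-of-OCI iron concentration, ppm
run_to_run_sd = 8.0   # assumed normal operational scatter, ppm
filter_effect = 2.0   # assumed mean reduction from the more efficient filter, ppm

coarse_filter = rng.normal(baseline_ppm, run_to_run_sd, n_samples)
fine_filter = rng.normal(baseline_ppm - filter_effect, run_to_run_sd, n_samples)

t_stat, p_value = stats.ttest_ind(coarse_filter, fine_filter, equal_var=False)

print(f"coarse filter: mean {coarse_filter.mean():.1f} ppm, sd {coarse_filter.std(ddof=1):.1f} ppm")
print(f"fine filter:   mean {fine_filter.mean():.1f} ppm, sd {fine_filter.std(ddof=1):.1f} ppm")
print(f"Welch's t-test p-value: {p_value:.2f}")

# With scatter about 4x the effect size and a modest sample, the p-value
# usually lands well above 0.05: the filtration benefit is real in the model,
# but it disappears into normal wear variation.
```

In other words, the effect GM was looking for gets swamped by ordinary noise in field data, which is why they dosed the sump and accelerated the wear instead of waiting for it to show up in UOA.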
Exactly ... this study was not about "wear control" except for one factor: particle size. Total wear control, with all the other factors involved, is a whole other subject, study, and test process. That's what the reader here needs to understand.
The purpose/focus of this study was simply and only to define the "Correlation between particle size and engine wear". How they got there (accelerated test) is irrelevant as long as it's an accurate representation of that correlation, which IMO it was.
And their conclusions apply only with respect to the purpose of the test. As they summarized, the more effective an oil filter is, the less engine wear there will be. The more crud you can filter out, the lower the wear rate will be and the longer the engine should last.
Does it matter in the long run if one filter is much less efficient than another, given all the other important factors that contribute to engine wear (oil type, OCI, etc.)? Maybe, maybe not ... it really depends on how those other factors play out. But this study was not about any factor other than particle size vs. wear rate.