What is the accuracy of an oil analysis? Does anyone have any documentation from Blackstone or SPEEDiagnostix or any other oil analysis lab showing the accuracy for each element tested, e.g. +/- 1 ppm for iron?

Short answer: not very. The industry standard for EPA-certified labs is no more than +/- 10%. Given the cost of a UOA and the throughput of oil analysis labs, expecting better than 10% is unrealistic. They can't afford the time or cost to get optimum results for each element, although 10% at all but the lowest concentrations is achievable with just a bit of care.
This is exactly what I am curious about. If they report 5 ppm of iron, is that +/- 1 ppm or +/- 5 ppm, and how reproducible are the results (rhetorical)? This would be important to know in order to determine whether the results between one oil and another are statistically significant. They seem to imply it is +/- 1 ppm, but without documentation, that's just an assumption.
Honestly, send the same sample off twice with a few weeks in between and see how different the results are. I'm betting up to 20%.
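The duplicate-sample check suggested above is usually summarized as a relative percent difference (RPD) between the two results. A minimal sketch, with invented ppm values:

```python
def rpd(a: float, b: float) -> float:
    """Relative percent difference between two duplicate results."""
    return abs(a - b) / ((a + b) / 2) * 100

# Hypothetical example: first report says 5 ppm Fe, the resubmitted
# sample comes back at 6 ppm.
print(round(rpd(5, 6), 1))  # 18.2 -> within the ~20% guess above
```

A 1 ppm disagreement at these low concentrations is already an 18% relative difference, which is why low-ppm wear metals are the hardest place to judge lab agreement.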
What happens if someone taking a vitamin or iron supplement contaminates the sample?
You left out accuracy. Duplicate samples only give the precision component... so the results are repeatable at 4.5 ppm +/- 0.3 ppm, but the sample is actually 10 ppm. Not good. I'm sure BS and other labs use some sort of standard reference materials to calibrate their equipment, but it would be interesting to see their QA/QC data and procedures for all of their testing equipment and tests. I know I had a viscosity value come back way out of line; I had them repeat the test and it returned a more reasonable result. Which one was correct?

What you raise in your question are actually several topics rolled into one.
There's a concept called "Gauge Repeatability and Reproducibility" (Gauge R&R). It takes into account not just the machines but the human interactions with them. Though the ICP machine is automated, the samples still have to be prepped, and the instrument calibrated, validated, maintained, and operated by humans. There are a lot of variables in this process.
As a generalization, based on my visit to Blackstone Labs about a decade ago, the ICP machines themselves are actually quite accurate. But the data they put out is only reported in whole numbers (5 ppm, 10 ppm, 2 ppm, etc.). It's unclear to me how fine the resolution of the underlying measurements is; that question went unanswered at the time. I don't believe any of the labs publish formal "accuracy" statements.
I always wanted to do a true DOE R&R for a lab, but BS at the time didn't have the time to allot to such a long testing protocol. I'd want at least 45 samples: a minimum of 3 operators and a minimum of 15 unique samples. I'd be fascinated to find out how good (or bad) each service is.
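A study like that splits the measurement variance into repeatability (same operator, same sample, repeated runs) and reproducibility (different operators). Below is a deliberately tiny, invented dataset (3 operators x 3 parts x 2 repeats, not the full 15-part layout) with a crude variance split; a real gauge R&R study would use the ANOVA method:

```python
# Simplified gauge R&R sketch. All readings are invented (ppm Fe).
from statistics import mean, pvariance

# readings[operator][part] = list of repeat measurements
readings = {
    "op1": {"p1": [5.0, 5.2], "p2": [9.8, 10.1], "p3": [15.0, 14.8]},
    "op2": {"p1": [5.4, 5.3], "p2": [10.3, 10.4], "p3": [15.5, 15.3]},
    "op3": {"p1": [4.9, 5.1], "p2": [9.9, 9.7],  "p3": [14.9, 15.1]},
}

# Repeatability: average variance within each operator/part cell
cell_vars = [pvariance(reps)
             for parts in readings.values()
             for reps in parts.values()]
repeatability = mean(cell_vars)

# Reproducibility: variance between each operator's overall average
op_means = [mean([x for reps in parts.values() for x in reps])
            for parts in readings.values()]
reproducibility = pvariance(op_means)

print(f"repeatability var ~ {repeatability:.4f}")
print(f"reproducibility var ~ {reproducibility:.4f}")
```

In this made-up dataset the operator-to-operator spread dominates the repeat-to-repeat spread, which is exactly the kind of finding such a study would surface.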
Even taking one or multiple samples whilst draining from a sump can vary. On industrial engines, the cooling loop has a sample point; the collection is taken with the engine running at full temperature and under typical load conditions.

Accuracy and precision are two different things and are measured in two different ways: precision is how tightly the shots cluster on a target regardless of where they land, while accuracy is how close the shots are to the bullseye. For the lab, you measure accuracy using standard/certified reference materials. This tells you whether 4.5 ppm Fe is really 4.5 ppm Fe - the question here. Precision is typically handled through duplicates, and ideally all of this is done blind, so the lab doesn't know which samples are standards or duplicates. So sure, send BS six samples of the same oil (is it really the same? how was it collected?) and see how they plot up. In this case, the duplicates are also measuring how well the sampling method represents the oil. You can also have a lab run its own dups, where it takes a single sample, homogenizes it, and splits it to get duplicate results. Standards would need to be constructed using a new oil with some added analyte (Fe) at a known concentration. Typically, you have the lab in question run dozens of these standards to develop the means/SDs to then measure against. You can also "round-robin" these standards out to other labs for comparison, assuming the exact same method is used. Labs using equipment like ICP will also have their own internal calibration and QA/QC standards that they should be running at some set interval. I'm sure BS can provide that info if requested, like any good lab, to give end users confidence that 4.5 is... 4.5.
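The CRM routine described above boils down to: run a standard of known concentration many times, derive a mean and SD, then judge whether later measurements of that standard stay inside control limits. A minimal sketch, with all numbers invented:

```python
# Hypothetical CRM control check for Fe by ICP.
from statistics import mean, stdev

certified_fe = 10.0  # ppm Fe spiked into fresh oil (assumed known)
runs = [9.8, 10.2, 10.1, 9.7, 10.3, 9.9, 10.0, 10.4, 9.6, 10.1]

m, s = mean(runs), stdev(runs)
lo, hi = m - 2 * s, m + 2 * s  # common +/- 2 SD control limits

def in_control(x: float) -> bool:
    """True if a new measurement of the standard is within limits."""
    return lo <= x <= hi

print(f"mean={m:.2f} ppm, sd={s:.2f} ppm, limits=({lo:.2f}, {hi:.2f})")
print(in_control(10.2), in_control(11.5))
```

With this dataset a 10.2 ppm reading of the standard is in control while an 11.5 ppm reading would flag the instrument for recalibration before customer samples are reported.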
I make my living these days helping clients with questions like this for mineral exploration datasets (I'm an independent consulting geologist), and QA/QC results are a big part of whether I can sign off on their Mineral Resources as a Competent Person, so that you as investors can be confident the company really does have XYZ tons at ABC grade, as reported. For all the UOAs I've done, though, I've given zero consideration to QA/QC, because it's just not that critical for this purpose in my opinion. But it raises some questions.
Exactly.
For the lab geeks out there ... I think this helps us have the conversation:
- Precision describes the stdev of the data; the lower the stdev, the tighter the grouping; smaller is better, generally indicating low variation
- Accuracy is the definition we use to describe how close that grouping is to the intended target, which is typically an agreed reference standard or desired result
- Calibration is the act of adjusting the instrument to move the grouping towards its reference standard
That seem reasonable?
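Those definitions can be shown with a toy dataset (all numbers invented): two labs each measure a standard whose true value is 10 ppm Fe. One is precise but inaccurate; the other is accurate but imprecise.

```python
from statistics import mean, stdev

true_value = 10.0  # ppm Fe in a hypothetical reference standard

lab_a = [4.4, 4.5, 4.6, 4.5, 4.5]     # tight grouping, far from target
lab_b = [9.0, 11.0, 10.5, 9.5, 10.0]  # scattered, but centered on target

for name, xs in [("A", lab_a), ("B", lab_b)]:
    bias = mean(xs) - true_value  # accuracy component (closeness to target)
    spread = stdev(xs)            # precision component (size of the grouping)
    print(f"Lab {name}: bias={bias:+.2f} ppm, sd={spread:.2f} ppm")
```

Lab A is the "repeatable at 4.5 ppm but the sample is actually 10 ppm" case raised earlier in the thread: excellent precision, terrible accuracy, and only a reference standard can reveal it.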
Yeah, this would be the way, but as always with a volume-based business that isn't evaluated against or held to a specific standard, their basic machine calibrations are all they really care about. To really get crazy with this idea, multiply your 45 samples across 5-6 testing labs, and then you'd be able to look at variance not only between machines but also between companies. It would also give great insight, via a Student's t-test, into whether there were statistically significant differences or just noise.
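For the lab-vs-lab comparison, a Welch's t statistic (which doesn't assume the two labs have equal variance) is one reasonable screen. The numbers below are invented, and the |t| > ~2 cutoff is only a rough rule of thumb in place of a proper t-distribution lookup:

```python
# Welch's t statistic for two independent samples of unequal variance.
from statistics import mean, variance

def welch_t(xs, ys):
    nx, ny = len(xs), len(ys)
    vx, vy = variance(xs) / nx, variance(ys) / ny
    t = (mean(xs) - mean(ys)) / (vx + vy) ** 0.5
    # Welch-Satterthwaite approximation for degrees of freedom
    df = (vx + vy) ** 2 / (vx**2 / (nx - 1) + vy**2 / (ny - 1))
    return t, df

lab_1 = [5, 6, 5, 7, 6, 5]  # ppm Fe, same oil, hypothetical lab 1
lab_2 = [8, 7, 9, 8, 8, 7]  # ppm Fe, same oil, hypothetical lab 2

t, df = welch_t(lab_1, lab_2)
print(f"t={t:.2f}, df={df:.1f}")  # |t| well above 2 -> likely a real difference
```

With whole-ppm reporting and only a handful of samples per lab, small true differences would sit well inside the noise; that's exactly why the study size above matters.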
Calibration is done to ensure instrument accuracy. The difference in using standard/certified reference materials to check a lab's calibration/accuracy is that you, the customer, are controlling this by blindly inserting samples of known concentration to check and verify (trust, but verify) their accuracy. Unless you have that information from the lab and you trust it, you can't comment on their accuracy and can only assume it, which for a commercial lab is a reasonable thing to do.

I think we are discussing a similar concept using slightly different terms.
Using your word "accuracy" is akin to me using "calibration". (I think we agree on "precision".) We can combine our two different words by stating that effort is put into making an instrument more "accurate" by "calibrating" it (adjusting a rifle scope for windage and elevation, for example). Accuracy is the word that describes how close the grouping is to the intended target, and calibrating is the action taken to ensure that desired result. Unless I misunderstand you, I think we're just using different words for the same concept: that of adjusting the instrument to give a reliably predictable result in the expected range. "Calibration" is the act of adjusting the instrument to obtain the "accuracy" desired.

And yes, reference standards are typical in most labs. These standards can help you understand when the "accuracy" is either on or off target.