Originally Posted By: bigj_16
Unless you see the specific study, and know how to interpret the results, I would pay no attention to it.
Agreed.
I have a master's degree in analytical chemistry and work full time as an instrumental chemist. A big part of my job is knowing how instruments work, how to design experiments with them, how to interpret the data and judge whether it's analytically valid, and (a big and often forgotten one) knowing the right "tool" for the job.
Graduate school teaches you how to read scientific literature, and how to be critical of it. That's an important skill for a scientist; basically, I was taught to dig in and look for any holes in the research.
I've read literature held up as gospel where I couldn't find any significant problems with the results. I've also read an equal amount where I can sit and poke holes in the results all day. Some of the most common things I see:
1. Not using a large enough sample size, or reporting results based on poor repeatability of the samples (I'm often amazed at how many things get published without reported SDs; I was taught that the SD is as important a part of a measurement as the measurement itself; see the sketch after this list)
2. Incorrectly interpreting the data, including drawing conclusions from the flawed results in #1, or drawing conclusions that the data, in my view, doesn't sufficiently support
3. Simply using the wrong tool for the job. I deal with this all the time at work, and a couple of times a week I talk with someone who wants to use a certain instrument for a certain task. Often the reason is "So and so from my lab used it because Dr. so and so across the hall had one." It comes back to the "When the only tool you have is a hammer..." problem; fundamentally, there is often a better way to do it.
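To illustrate point 1, here's a minimal Python sketch of what I mean by reporting the SD alongside the measurement (the replicate values here are made up for the example):

```python
import statistics

# Hypothetical replicate measurements of a single sample (arbitrary units).
replicates = [10.2, 10.5, 9.8, 10.1, 10.4]

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)  # sample standard deviation (n-1 denominator)
rsd = 100 * sd / mean              # relative SD (%), a quick repeatability check

# Report the measurement together with its SD and n, never the mean alone.
print(f"result = {mean:.2f} +/- {sd:.2f} (n={len(replicates)}, RSD = {rsd:.1f}%)")
```

A result reported that way lets the reader judge the repeatability for themselves; a bare mean with no SD and no n hides exactly the problems I'm describing.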