With the internet came the amazing ability to share ideas across the world in a split second. Researchers and scientists can now collaborate much more easily, and advances in technology and science have benefited greatly. However, this access to information has also created an overload in many fields, strength and conditioning being just one. More information does NOT mean that all of the information is true or accurate. Now that we have access to these mountains of information, sifting through the research and finding what is accurate, meaningful, and valuable is more important (and more difficult) than ever.
In order to comprehend the research, we must first understand the data – especially when dealing with athletes. Make sure the data is completely explained. What is the population? How big is the sample size? How long was the study conducted? Research done for two weeks on a group of 8 untrained, sedentary, elderly cardiac patients probably should not be applied to your athletes. Statistics 101 classes teach us that 30 is the “magic number”; however, the larger the sample size, the better. For athletes, make sure the height, weight, gender, ethnicity, and age of the population are known. Next, we must determine whether the data is experimental or observational.
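To see in concrete terms why a larger sample is better, here is a minimal Python sketch using made-up back squat numbers (a pretend population with a mean of 150 kg and a standard deviation of 20 kg, not real athlete data). It simply shows how the standard error of a sample mean shrinks as the sample size grows.

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up "population" of back squat 1RMs: mean 150 kg, SD 20 kg.
population_mean, population_sd = 150, 20

# Draw samples of increasing size and watch the standard error shrink.
for n in [8, 30, 100, 500]:
    sample = rng.normal(population_mean, population_sd, size=n)
    standard_error = sample.std(ddof=1) / np.sqrt(n)
    print(f"n = {n:3d}  sample mean = {sample.mean():6.1f} kg  "
          f"standard error = {standard_error:4.1f} kg")
```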
Experimental Data comes from a controlled experiment with proper design. All experimental methods should be explained. For example, is the environment controlled, and are subjects randomly assigned to treatments? Controlled experiments are the best for attempting to determine causation, though these types of experiments are much harder to find in athletic populations.
Observational Data is simply observed and collected; the researcher has no control over the data. Often researchers are not able to set up controlled experiments and must rely on observational data. Observational data can bring great insights and reveal associations or correlations, but it is much harder to determine causation because so many variables are uncontrolled.
There are many different research methodologies that are used with different data sets. Some key points to look for in good research are reliability, validity, and a clear explanation of the statistical models used.
Reliability is the ability to get the same results from the same measurement consistently. For example, if I step on and off a scale multiple times, it will keep giving me the same number. Now if you and I are timing an athlete’s 40-yard sprint with a hand-held stopwatch, this becomes much more of a challenge. Reliability can be measured with a Cronbach’s Alpha or an Intraclass Correlation Coefficient; both are measured on a scale from 0 to 1. With reliability, the closer to 1 the better, with anything above 0.6 usually accepted as being reliable.
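For anyone curious what that calculation actually looks like, here is a minimal Python sketch of Cronbach’s Alpha applied to the stopwatch example: six athletes each timed by three coaches. The dash times are invented purely for illustration.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (subjects x raters) matrix of scores."""
    k = scores.shape[1]                          # number of raters/items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each rater's times
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented 40-yard dash times (seconds): 6 athletes, 3 coaches on stopwatches.
times = np.array([
    [4.52, 4.55, 4.50],
    [4.80, 4.78, 4.83],
    [5.01, 5.05, 4.99],
    [4.65, 4.60, 4.66],
    [4.90, 4.94, 4.88],
    [4.71, 4.69, 4.74],
])

print(f"Cronbach's alpha: {cronbach_alpha(times):.2f}")  # closer to 1 = more reliable
```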
Validity is showing that the method actually measures what it is intended to measure. For example, if I step on a scale and it says 200 pounds, I truly weigh 200 pounds. Again, this gets much trickier with qualities that are hard to measure. The National Football League has tried to do this for years, hoping that different physical tests done at the NFL Combine can predict performance in the NFL. However, the results have not been promising. All measurements used in research should demonstrate both validity and reliability.
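One common way to check this kind of validity is simply to correlate the test result with the outcome it is supposed to predict. The sketch below uses invented combine-style numbers (dash times and yards per game for eight hypothetical players, not a real data set) just to show the idea.

```python
import numpy as np

# Invented numbers: 40-yard dash times (s) and on-field production (yards/game)
# for 8 hypothetical players. Purely illustrative, not real combine data.
dash_time = np.array([4.40, 4.45, 4.52, 4.60, 4.65, 4.72, 4.80, 4.85])
yards_per_game = np.array([55, 78, 48, 70, 62, 80, 51, 66])

r = np.corrcoef(dash_time, yards_per_game)[0, 1]
print(f"Correlation between dash time and production: r = {r:.2f}")
# With these invented numbers the correlation comes out weak, echoing the
# point that a test can be perfectly reliable and still not be a valid
# measure of on-field performance.
```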
There are hundreds of different types of statistical models that can be run on a data set. Ideally, the research states a hypothesis of what it is trying to show, and the statistical model chosen makes sense for testing that hypothesis. If the statistical methods are not understood, a quick search can help with understanding and interpreting the results.
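As a simple sketch of what “the model matches the hypothesis” can look like, suppose the hypothesis is that a training program improves vertical jump more than a control program; an independent-samples t-test is one model that directly tests that comparison. The jump-height changes below are simulated, not results from any real study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated change in vertical jump (cm) for two groups of 15 athletes each.
training_group = rng.normal(loc=4.0, scale=2.5, size=15)
control_group = rng.normal(loc=2.0, scale=2.5, size=15)

# Independent-samples t-test: does the training group's mean change differ
# from the control group's mean change?
t_stat, p_value = stats.ttest_ind(training_group, control_group)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```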
Here we have discussed what to look for in a research article, primarily from a research design and research methods perspective. Next week we will dive deeper into what to look for when results are explained and how to interpret these findings. What is a correlation? What does statistically significant mean? What is a P-value or a T-score, and what do they mean?