elitefts™ Sunday edition

The Parts of a Paper (continued)

Results and Conclusion

Understanding a researcher’s primary results and resulting conclusion is generally a face-value proposition: most papers feature a sentence that reads “Protocol X led to increased performance in Y,” or something similar. Figuring out whether the results mean anything to you is a little trickier. You could skim the conclusion, find something like “This method was demonstrated to be an effective contraceptive,” and think you’ll give the technique a try. You could also read the data, learn that the particular method was effective 70 percent of the time, and decide to stick with traditional forms of baby blockage.

Getting into some terminology, a statistically significant result is one that’s unlikely to have arisen by chance alone, given how the experiment or survey was run. Being statistically significant doesn’t mean a result has practical applications, however; it just means the effect was measurable, and that measurable amount may or may not be of interest to you.
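To see that distinction in action, here’s a quick Python sketch. The SciPy t-test call is real, but every number is invented and not from any study: with 200 lifters per group, a 30-gram average difference in muscle gain can clear the usual p < 0.05 bar while the effect size tells you it’s nothing worth paying for.

```python
# Hypothetical numbers only: "significant" does not mean "meaningful."
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
placebo = rng.normal(loc=0.20, scale=0.10, size=200)     # kg of muscle gained
supplement = rng.normal(loc=0.23, scale=0.10, size=200)  # a mere 0.03 kg more

t, p = stats.ttest_ind(supplement, placebo)
pooled_sd = np.sqrt((placebo.var(ddof=1) + supplement.var(ddof=1)) / 2)
cohens_d = (supplement.mean() - placebo.mean()) / pooled_sd

print(f"p-value: {p:.4f}")           # likely below 0.05 at this sample size
print(f"Cohen's d: {cohens_d:.2f}")  # a small effect all the same
```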

Imagine an experiment performed on a new supplement for muscle gain: the researcher compares a supplement group against a control group (which gets a placebo) for a few months in a manner that’s strict but not unrealistic, measures the results with MRI, finds extra muscle growth in the experimental group, and determines that the results are statistically significant. Knowing that, the supplement seems to be a winner, and you should consider opening your wallet.

But let’s take it a step further and say that after five months of running the experiment, the results look like this:

[Results table not reproduced: individual muscle gains for the supplement and placebo groups]

The tiny size of the study aside, not only is the total muscle gained minuscule in either group, but the best response belonged to someone in the placebo group. Are the results still a big deal? I’d say “no,” and keep my wallet closed for the time being.

In most instances, you don’t need to know much more about statistics. This is mainly due to the self-evident nature of the results for most elitefts™ readers. For example, you’ll know from the raw data how good an experiment’s relative and absolute gains are in the bench press, so you won’t need to rely as much on the researcher’s methods of measurement or margins of error. I’ll go a step further and say that if you don’t have a good understanding of what was measured, the study probably isn’t of much use to you at all. In that case, the research likely micro-targeted a process and, along the way, used procedural variables with little practical relevance. Hormone studies are a great example: unless you know offhand what ideal testosterone/estrogen ratios and levels look like, the results of such a study won’t mean much to you. And if the item being measured and theorized upon is a simple hormonal measure (and not, for example, the muscle/strength increases of someone with those particular levels and ratios), the paper won’t have much practical impact.
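When the measure is something you know, the arithmetic really is that basic; here’s the relative-versus-absolute calculation with made-up bench press numbers:

```python
# Invented numbers: pre- and post-study bench press maxes.
pre, post = 100.0, 107.5                    # kg
absolute_gain = post - pre                  # 7.5 kg
relative_gain = absolute_gain / pre * 100   # 7.5 percent
print(f"{absolute_gain:.1f} kg absolute, {relative_gain:.1f}% relative")
```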

Also helpful in deciphering results is the number of charts and graphs that appear in scientific publications; a few quick glances can generally tell you all you need to know about the hard numbers, and will help clarify what you’re reading in the paper itself. With that said, graphs on logarithmic scales (where each step along the axis represents a multiplication rather than an addition) can cause you to misread results, though these are more common in economics papers than in health and fitness topics. Also keep an eye out for charts and graphs that start at numbers other than zero, which exaggerate small differences. One last catch is making sure you know whether the graphs or charts incorporate “normalized” data, meaning data adjusted after the fact so that the profiles of separate subject groups are roughly equivalent. A study that compares illness rates of NBA players to rates of the general public won’t mean as much if the data sets weren’t normalized for things like age, gender, and race. On the other hand, if you read a chart without knowing it was adjusted, you’ll come away with some interesting conclusions; avoid this by reading the results text.
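To see the non-zero-baseline trick for yourself, here’s a short matplotlib sketch (numbers invented) that plots the same two group means twice, once with the y-axis anchored at zero and once truncated:

```python
# The same made-up 2% difference, shown honestly and shown deceptively.
import matplotlib.pyplot as plt

groups = ["Placebo", "Supplement"]
means = [100.0, 102.0]  # hypothetical strength scores

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
ax1.bar(groups, means)
ax1.set_ylim(0, 110)
ax1.set_title("Axis starts at zero: modest")

ax2.bar(groups, means)
ax2.set_ylim(99, 103)  # truncated baseline
ax2.set_title("Truncated axis: dramatic")

plt.tight_layout()
plt.show()
```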

The idea of normalization and surveys brings up one rule I have about large health studies (or epidemiological studies): broad epidemiological studies aren’t the greatest instruments for teasing out cause and effect. For example, let’s say an epidemiological study surveys people on their dairy consumption and finds that people who eat more dairy have a higher rate of heart disease. The study, however, doesn’t account for the fact that people rarely consume milk or cheese by itself: they consume dairy as part of pizza, chocolate milk, burgers and sandwiches, mozzarella sticks, Chocolate Frosted Sugar Bombs, etc. In this case, the study might very well miss the bigger picture of total diet.
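If you want to watch that happen, here’s a toy simulation (all numbers invented). In this fake data set, dairy does nothing on its own, but because it rides along with junk food, a naive correlation still fingers it:

```python
# Confounding in miniature: dairy is inert here, junk food does the damage.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
junk_food = rng.normal(size=n)                      # pizza, burgers, etc.
dairy = 0.8 * junk_food + rng.normal(size=n)        # dairy travels with junk food
heart_risk = 0.5 * junk_food + rng.normal(size=n)   # risk driven by junk food only

# Naive correlation: dairy looks guilty.
print(np.corrcoef(dairy, heart_risk)[0, 1])         # clearly positive

# Hold junk food roughly constant and the dairy "effect" evaporates.
mask = np.abs(junk_food) < 0.1
print(np.corrcoef(dairy[mask], heart_risk[mask])[0, 1])  # near zero
```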

On the other hand, when a health-focused intervention of some sort shows a positive impact in an epidemiological study, I pay attention. Why? Because most people are awful about their lifestyles, screwing them up in ways that offset their healthy activities. If, for example, substituting whole grain foods for processed grains improves the health markers of a bunch of Average Joes despite all that, it’s a good bet that something in the protocol is helping. It could be reduced calorie intake, quicker gut passage, extra fiber, or who knows what, but something is likely going on.

Returning to statistics, I think stats are more important when reading reviews. This is because a review writer has to come up with some measurable way of quantifying how different articles fit together. I won’t go over the terms or symbols, but different studies have to be “weighted” to reflect their perceived value. For example, a study with a large number of subjects or better controls might be weighted more heavily than a study with few subjects or loose controls. To really evaluate how useful the review is, you have to be familiar with the topical studies so that you can evaluate the weighting process. Most of what you need will be covered in a Statistics 101 course; while I haven’t read it myself, I’ve also had Kranzler’s Statistics for the Terrified recommended as an introduction to stats. Fortunately, you can get a good sense of a review’s value just from textual elements and from being familiar with the studies the review used.
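For a taste of what weighting looks like under the hood, here’s a bare-bones fixed-effect pooling sketch using inverse-variance weights, one common scheme (a given review may do something fancier, and the study numbers below are invented):

```python
# Pool three made-up study results; precise studies count for more.
import numpy as np

effects = np.array([1.2, 0.4, 0.9])   # per-study effect estimates (kg gained)
std_errs = np.array([0.8, 0.2, 0.5])  # small standard error = big/tight study

weights = 1.0 / std_errs**2           # inverse-variance weighting
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect: {pooled:.2f} +/- {pooled_se:.2f}")
```

Notice how the middle study, with the smallest standard error, drags the pooled estimate toward its own value.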

Before leaving the conclusions section, I want to circle back and talk a little about study authors who say dumb things in their papers. When it happens, it usually happens in the conclusion. Sometimes it will be a megaton of stupidity. I read one where a researcher compared weight-lifting Routine X to Routine Y, found Routine Y to be slightly better, and then wrote something to the effect of “Without a doubt, Routine Y is the best performance enhancer, and recommended for use by all athletes.” I imagine the odds are somewhat slim that this guy found the best routine ever.

You’ll also see conclusions that come out of left field. One I remember tested two different protein types against each other in terms of improving muscle gain. The differences were negligible, but because one protein had more antioxidants (which have nothing to do with muscular hypertrophy), the study’s authors labeled it the preferred choice.

Then there’s the “overworked” conclusion. To muck up William of Ockham’s better-known principle, the simplest solution to a problem is usually the best one. I’ve read more than a few studies that ignore a screamingly obvious point in order to posit a less likely solution. Imagine a study (and I’m not deviating too far from an actual paper that’s been dissected at length by multiple fitness authors) that tests a protein supplement’s ability to build muscle; the supplement features an exotic new protein/carb/vitamin blend that digests quickly. The control group gets a whey protein shake and the experimental group gets the supplement; everything else is exactly the same. After testing, the supplement group gained more muscle, so the researcher declares that the supplement is more effective because of its amino/carb ratio and quick digestion time.

That would make pretty good sense, except for one thing: the supplement had twice as many calories and twice as much protein as the whey shake. It’s well established that extra calories and protein drive muscle growth, so in this case, the likely answer is that these bonus nutrients, and not some heretofore unknown formula, led to the improved results.

What about outside of papers? Well, when it comes to talking to journalists, the dumb can get turned up a notch. For whatever reason, researchers get carried away when they’re talking to the media about their papers, which means conclusions are jumped to and errors in logic are made. Do yourself a favor and ignore the media fluff.

Finding Research

PubMed, a literature-cataloging website run by the National Institutes of Health’s National Library of Medicine, is the go-to source for papers, particularly since its search filters let you easily separate abstract-only entries from free full-text pieces. If you’ve never tried it, go to pubmed.gov (this isn’t the formal link, but it’s a quick, easy-to-remember redirect), type “bench press,” “protein synthesis,” “football injury,” or something similar, and see what results you get. Just now, using “deadlift” as a search term, I turned up papers on strongman training, cycling recovery, the results of caffeine usage, and the effect of chains on deadlift velocity.
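If you’d rather script your searches, the same index is reachable through NCBI’s public E-utilities API; here’s a short Python example (the endpoints are real, and “deadlift” is just a sample term):

```python
# Query PubMed via NCBI E-utilities: find article IDs, then pull their titles.
import requests

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

search = requests.get(f"{BASE}/esearch.fcgi", params={
    "db": "pubmed", "term": "deadlift", "retmode": "json", "retmax": 5,
}).json()
ids = search["esearchresult"]["idlist"]

summary = requests.get(f"{BASE}/esummary.fcgi", params={
    "db": "pubmed", "id": ",".join(ids), "retmode": "json",
}).json()

for pmid in ids:
    print(pmid, summary["result"][pmid]["title"])
```

NCBI asks unauthenticated users to keep it to a few requests per second, but for casual digging that’s a non-issue.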

The downside is that complete papers are heavily outnumbered by abstracts. You could search for topics of interest and then go to the websites of the journals themselves, though be warned that paywalls will ask for obscene wads of cash: $30 to $50 a pop is common. The best workaround (that doesn’t involve activities of dubious legality and/or ethics) is to take advantage of your local institution of higher education. Many of these schools subscribe to the print journals themselves (especially big research institutions), and even schools with less library space will subscribe to online versions. For students, it’s as easy as logging in at home (or simply connecting to your school’s internet); for everyone else, many campuses are open to the public, and in many cases you can walk into a campus library and access a computer or Wi-Fi without any problem. At schools with stricter policies, you may be able to get temporary authorization to access their online materials; find the school library’s research desk (or its webpage) for more information. I’d guess that your success here will be better at public schools.

Some journals to keep an eye on include:

  • Journal of Strength and Conditioning Research
  • Journal of Nutrition
  • Journal of Clinical Endocrinology and Metabolism
  • Nutrients
  • International Journal for Vitamin and Nutrition Research
  • International Journal of Sport Nutrition and Exercise Metabolism
  • Journal of the American College of Nutrition
  • Journal of Physical Activity and Health
  • Journal of Sports Sciences
  • Medicine and Science in Sports and Exercise
  • New England Journal of Medicine
  • European Journal of Applied Physiology
  • American Journal of Physical Medicine and Rehabilitation
  • Journal of Applied Physiology
  • Clinical Journal of Sports Medicine
  • Research in Sports Medicine

If you can’t read the articles themselves, the next best thing is to find guys who have access to the articles and can comment on them. While I hope to be your go-to here at elitefts™, a few people I recommend for their insight into strength/exercise-related scientific publications include: Alan Aragon, Lyle McDonald*, James Krieger, Mark Young, Amby Burfoot, Alex Hutchinson, and the writers at the Sports Medicine Research blog. I consider all of these people to be sharp and honest; I’ve never known any of them to bend research for the sake of marketing a product or drumming up attention. Just keep in mind that they’ll look at items with varying degrees of detail that may or may not be what you’re looking for with a given paper.

Looking ahead, I’ll be addressing new studies and topics of interest to see how we can further our athletic pursuits.

* McDonald has a habit of tearing into people who misrepresent his work: he’ll burst onto a forum or comments section, blister the offender, and then recede into the darkness like Godzilla returning home after laying waste to Tokyo. While attention grabbing, I imagine his behavior isn’t fueled by publicity needs, but rather by a hatred of the irrational and misleading. And perhaps also by innate misanthropy.