I EAT THEREFORE I AM… A NUTRITIONIST!
If it were really that simple, we’d all be experts by now! It seems like every time we flick through social media, there’s some self-proclaimed Insta-health guru telling us what we should and shouldn’t eat. “Try this diet, take that supplement.” But how can we know who to listen to?
Sifting through the science
The sheer volume of information available is overwhelming, particularly with the rise in popularity of social media, and it’s no easy task sifting through the claims and determining which have merit and which are nothing more than hollow marketing promises.
The internet has spawned an entire industry of "pay-for-online-qualifications." There are all kinds of "nutrition certificates" out there for sale; with enough cash and a small bunny hop over a very low bar, you too can have a "doctorate" in something fancy-sounding like "nutritional medicine." Many of these schools are predatory in nature, charging exorbitant fees for "courses" that promote fad diets and diagnoses. Whilst many of the students may have good intentions, they are, in many cases, being trained as unwitting quacks.
Unfortunately, many of these online nutrition “doctors” are social media savvy and have done an excellent job at promoting their own brands of nutrition nonsense. They often have no scientific evidence to support their claims, relying exclusively on anecdotal testimonials (i.e., it worked for me, so therefore it’s good for everyone).
Legitimate science is slow-moving and it isn’t sexy. Responsible health practitioners make their recommendations based on hard science and also know that, when it comes to health and medical decisions, there is no one-size-fits-all approach.
The following checklist is a valuable tool for evaluating the science behind nutrition claims. Keep a copy close at hand and refer to it when considering new research findings.
Tips for scrutinising scientific research
Number of studies
Consider how many studies were conducted. A single study might suggest efficacy, but numerous studies conducted by a variety of researchers from independent labs without vested interests would hold more weight.
Number of subjects
The higher the number of subjects in the study, the better. More subjects give a greater degree of statistical power. That is, we can say with reasonable confidence that the results were due to the intervention and not to random chance.
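The effect of sample size on statistical power can be illustrated with a quick simulation (a hypothetical sketch, not part of the checklist: group sizes, effect size, and the "2 standard errors" detection rule are all illustrative assumptions):

```python
import random
import statistics

def detection_rate(n, effect=0.5, trials=200, seed=42):
    """Fraction of simulated two-group trials in which a real effect
    (a difference in means of `effect` standard deviations) produces
    an observed group difference larger than ~2 standard errors."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(trials):
        control = [rng.gauss(0, 1) for _ in range(n)]
        treated = [rng.gauss(effect, 1) for _ in range(n)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = ((statistics.pvariance(control)
               + statistics.pvariance(treated)) / n) ** 0.5
        if diff > 2 * se:
            detected += 1
    return detected / trials

small = detection_rate(n=10)   # few subjects: the real effect is often missed
large = detection_rate(n=100)  # many subjects: the real effect is usually found
print(f"10 per group: {small:.0%} detected; 100 per group: {large:.0%} detected")
```

The effect is the same in both cases; only the number of subjects changes. With small groups, a genuinely effective intervention frequently looks like random noise, which is exactly why an N=1 testimonial tells us so little.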
Dosages
Look for consistency between the dosages employed in the studies and what is found in commercially available diets/products. If large dosages were used in the studies, say 1000mg, then how does this compare to the comparatively small dosages (e.g., 10mg) used in commercial products? We need to compare "apples to apples" and "oranges to oranges."
Single vs. multiple ingredients
In the case of dietary supplements, many nutrition products are cocktails composed of a number of ingredients. If a study was conducted on just one ingredient, then it's difficult to confirm that a mixed commercial product would yield the same results. Cross-ingredient interactions might potentiate the effect and pose safety issues, as was the case with combined herbal preparations containing ma huang (ephedra) and guarana (caffeine).
Study population
One size does not fit all. Look at the population group upon which the research was conducted and consider how it applies to real-life situations. For example, it is difficult to apply results from a study on young, university-level female athletes to bed-ridden, morbidly obese, middle-aged diabetic women, since their metabolisms would be markedly different.
Experimental conditions
Consider how "life-like" the experimental conditions were. For example, a diet study conducted on elderly cardiac patients living in a metabolic ward for a month would reflect very different conditions to a young, free-living adult subject to a variety of real-life factors.
Methodological controls
Appropriate methodological controls help to ensure that the results are due to the intervention and not to random chance. Ideally, a study should be randomised, controlled, and, when appropriate, double blind—neither the subjects nor investigators know who received the experimental or control intervention.
Peer review
Confirm that the studies were published in reputable peer-reviewed journals. While even this is not a 100% guarantee, it at least confers a higher level of academic scrutiny to minimise bias and ensure the integrity of the research.
If You Can’t Convince ‘Em, Confuse ‘Em
While claims based on science are always preferred, many diet book authors and product manufacturers are determined not to let the truth get in the way of a good marketing campaign. Clearly not everyone's a research scientist, but we all have a built-in baloney detector that can help keep us from getting taken for a ride. Cut out and give the following quick reference checklist to your clients.
Saturday 5th of July 2014
I will keep this article on hand; you seem very knowledgeable. I have started on Laminine by LifePharm Global. It has hit Australia, but I have my doubts after finding Ripp Off Reports. Does anyone have any info on this product or company? I am hoping for miracles from this product but am afraid I may not get them.
Monday 30th of January 2012
"The higher the number of subjects in the study, the better. More subjects give a greater degree of statistical power."
It's probably worth clarifying that this be a like-with-like comparison, to avoid it being misinterpreted by someone unwittingly comparing a clinical trial (i.e. a potentially small number of subjects, but under tightly controlled conditions) with an epidemiological study (i.e. a population based study, but less tightly controlled), purely on the basis of the number of subjects.
It's also worth knowing about the type of study so you can spot inappropriate conclusions in media articles, e.g., you get titles like "Sunscreens questioned as big study reveals Vitamin D deficiency" only to find the study was actually about the effects of Vit D supplementation in aged-care residents in Scotland and sunscreen was not even a factor in the study.
Bill Sukala, PhD
Monday 30th of January 2012
Hi Nick, thanks for your comment. I couldn't agree more. I think it was beyond the scope of this article to get too neck deep in all the nuances of statistics. In this context, I was merely referring to a general need to have adequate subjects (as opposed to N=1) to ensure some iota of statistical power. Clearly this is also going to depend on the outcome measures being evaluated, %CV of the assay, and what constitutes a clinically meaningful change (enter discussion on effect size and magnitude based inferences vs. sole reliance on P values). And when the smoke clears and the dust settles, there's always the question of practical applicability of the results and how it applies to the general population. I know I'm preaching to the choir telling you this. Again, in this particular post, it was more a general guide rather than a specific tutorial on statistical robustness of one single factor.