
Clinically proven! We’ve all seen it plastered across health products and strategically peppered through advertisements and marketing scripts.

Just the other day, I saw an advertisement on television for a metabolism booster pill which claimed to be “clinically proven” to work.

But wait! What the heck DOES “clinically proven” actually mean anyway?

There are a number of aspects to consider in answering this question, so I’ll list them out point by point.


“Clinically proven” definition

First and foremost, within the realm of health product advertising, there is no official definition or regulation of the term “clinically proven.”

This can mean different things to different people and is often used by marketers to give a deceptive stamp of approval to a product which, in many cases, has no legitimate scientific evidence to support its efficacy.

Clinically proven? Says who?

When they say “clinically proven,” your first question should be, “oh yeah, says who?”

It is possible the company selling the supplement, infomercial ab gadget, or balance device did a poorly controlled “study” where they had people try the product and then tell the company about their results.

While this seems logical enough, it does not constitute legitimate research. Real research requires very careful and meticulous planning, experimental design, strict methods, statistical analyses, and interpretation in order to ascertain if, in fact, any results were due to the intervention (i.e., the supplement or exercise).

Two random examples:

[Image: advertisement for an ab slimming belt claiming it is “clinically proven”]
[Image: advertisement repeating the phrase “clinically proven” four times]

Also check out my Wonder Core Smart article and you’ll see how they ambiguously cite “university lab testing” but provide no details to corroborate this.

Clinically proven in a research context

Research must be put into context.  Real research, as described above, must be carefully interpreted and applied to different life situations (i.e., is it relevant?).

For example, a study which used a VERY large dose of a dietary supplement to elicit relatively small reductions in body fat in morbidly obese middle-aged women living in a metabolic ward is VERY different from the situation of an 18-year-old athletic male university student taking one-tenth the dosage of the same supplement.

The reason you can’t compare the two is because a morbidly obese middle-aged woman living in a metabolic ward is going to have a very different physiological response than a young, healthy, fit male university student.

Then consider the experimental dosage. The women in the study used a large dose, whereas the university student used only a fraction of it. It’s the same thing as taking 800mg vs. 8mg of ibuprofen for a headache. You expect the 800mg to do something but, in all honesty, you don’t expect a Pink Floyd laser light show from the 8mg.


Were the results published in a journal?  

Scientists often prefer to see the results of studies published in peer-reviewed medical journals.

What does this mean in practical terms?  It means that the study and all its methods, results, and discussion have been reviewed by experts (peers) in the respective area of research under which that study falls.

These experts systematically dismantle the study, rake it over the proverbial coals, and try to blow holes in it, find weaknesses, and expose it as junk science. If it survives that, then it is accepted for publication (usually with suggested revisions).

The value of this process is that it shows the scientists conducting the research have been rigorous in their experimental protocols and that the research is worthy.

Research that isn’t worth its salt

Sometimes research is conducted but it never appears in a peer-reviewed journal.

There are a number of reasons for this but, in many cases, the work WAS submitted but was not worthy of publication.  Other times the research is not submitted for review at all because the scientists know it isn’t up to scratch.

Unregulated jargon

As I said above, there is absolutely ZERO regulation of the term “clinically proven,” so marketers have a number of options for hoodwinking the general public.

Marketers can cite junk science which isn’t worth the paper upon which it’s written.  This is when they do an impromptu survey of their “satisfied” users and ask them for their subjective opinions.

There are no experimental controls, so we have no real way of knowing whether the “results” came from the product or from other uncontrolled factors (e.g., people started eating less and exercising more).

They cite legitimate peer-reviewed research, but it is completely irrelevant to the product for sale, or at best a major stretch. As stated above, they cite research from morbidly obese women but market the product to young athletic men.


They cite a single study which may relate to their product but has methodological flaws. Usually the study itself mentions limitations regarding the real-life applicability of the results, but companies looking to make a buck often fail to disclose them. Not very ethical.

They cite a single study which might have solid methods and is published in a high-quality peer-reviewed journal. However, one single study is not a conclusive body of evidence on which to base sweeping claims that something is “proven” to work.

Responsible scientists like to see a number of studies using different dosages across different populations in order to get some sort of scope on the relative effectiveness of a product.

The bottom line

In closing, it is very much a case of buyer beware. “Science-y” jargon might sound all flashy but when it comes to making a buck, you have to switch on your bullsh*t detector and do your own investigation.  Trust your instincts.  If a pill, potion, or gadget seems too good to be true, then it probably is.


Dr Bill Sukala is a Sydney-based health science communicator, clinical exercise physiologist, health writer, speaker, and media health commentator. He has published health articles in major publications around the world and has given invited lectures across five continents. Click here for more information or follow Bill on Facebook, Instagram, and Twitter.

4 Comments

  1. Kathy Immelman

    Hi Dr Sukala,

    I wonder if you have any good examples of these ads that claim “clinically proven” or “scientifically proven”? I’m always searching for interesting new material for my postgrad class in critical reading. Anything you can send me would be much appreciated. Thanks!

    • Bill Sukala, PhD

      Hi Kathy,
      Thanks for your comment. I have amended the article to include two graphics of “clinically proven.” The first one has a single research study in support of an ab slimming belt, but the results mainly apply to localized abdominal strength and endurance, not fat loss. The second image shows how desperate they are to overcome objections by including the term “clinically proven” four times. I would also suggest having a look over my article You Are What You Eat But Careful Who Says So.
      Kind regards,
      Bill

  2. Dr Elbanna

    Where do you obtain a “clinically proven certificate” for a new supplement? And how much does it cost?
    Regards,

    • Dr Bill Sukala

      Hi Dr Elbanna,

      For a new supplement to be clinically proven, you will first need to conduct a very thorough review of the existing medical literature on the active ingredients you intend to study. Then put together a proposal for clinical research and submit it to an ethics review board with your application for ethics approval. Once you receive that, you’ll need to begin the recruitment process, which can be long and arduous, but worth it in the end.

      Then you’ll need to conduct your randomised, double-blind, placebo-controlled study on a sufficient number of subjects to ensure you have enough statistical power to detect changes from one group to the other. Once you have all your results, you’ll need to run statistics on them to determine if there are any statistically significant differences between groups.

      But statistical significance is only one part of it. You’ll also need to determine if those differences are clinically meaningful. For example, if a supplement resulted in blood sugar dropping by 0.2 mmol/L, is this clinically meaningful? Will it result in improvements in the overall health of the patients who experienced it? In other words, something can be “statistically significant” but still clinically insignificant in practical terms.

      Finally, once you’ve written your report and acknowledged all the strengths, limitations, and implications of your research, you will need to submit your manuscript to a peer-reviewed medical journal for review by experts in the area in which your study was conducted. So you’ll probably need your work reviewed by clinical nutrition researchers who understand the biochemistry of the supplement in question. If your research is valid and isn’t junk science, then you’ll probably have an opportunity to revise the manuscript for publication. Then you’ll resubmit it and hopefully get final approval. It will then sit in a queue for six months to a year and eventually get published.

      Ideally, you will have conducted and published multiple studies, and once you have these published articles in hand, they will be your “clinically proven certificates.”

      Or you could just take the supplements yourself and try to make yourself believe they’re doing something so you could claim they are “clinically tested.” But if you wrote that on your packaging, then you’d be lying and that is unethical and illegal (i.e., false marketing claims). Invest the time and effort in getting real clinical trials done and then you might have a product with staying power. But if you conduct the research and the supplement is no better than a placebo, then basically pack your bags and move on to the next thing because that supplement is a waste of your efforts. Hope this helps clarify.
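      The statistical-versus-clinical point above can be demonstrated with a short simulation (the numbers here are invented purely for illustration): with a large enough sample, even a trivially small 0.2 mmol/L difference in blood sugar comes out “statistically significant,” which is exactly why a significant p-value alone does not make a product clinically meaningful.

```python
import math
import random

random.seed(42)

# Simulated change in blood glucose (mmol/L) for two groups of 5,000.
# The supplement group has a tiny true effect of -0.2 mmol/L, which may
# be clinically meaningless, yet the large sample makes it "significant".
n = 5000
placebo = [random.gauss(0.0, 1.0) for _ in range(n)]
supplement = [random.gauss(-0.2, 1.0) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Welch's t-statistic; with samples this large the t-distribution is
# effectively normal, so the normal CDF (via erf) gives the p-value.
se = math.sqrt(var(placebo) / n + var(supplement) / n)
t = (mean(supplement) - mean(placebo)) / se
p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

print(f"mean difference: {mean(supplement) - mean(placebo):+.2f} mmol/L")
print(f"p-value: {p:.4g}")  # far below 0.05 despite a clinically trivial effect
```

      The p-value comes out vanishingly small even though the effect is only a 0.2 mmol/L drop, so “statistically significant” and “clinically meaningful” are separate questions that both need to be answered.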

