Most Supplement Companies Skip the RCT. Here’s Why the Few That Don’t Often Get It Wrong
The industry’s uncomfortable truth
The global dietary supplement market is worth hundreds of billions of dollars.[1] Brands make claims about energy, cognition, immunity, and longevity. Consumers buy on trust. And yet, if you ask how many of those brands have ever conducted a randomised controlled trial (RCT) on their actual product, on the specific formula, at the specific dose, in the population they are selling to, the answer is a surprisingly small number.
Most companies rely on one of two things instead. The first is consumer perception data: how customers report feeling after taking the product. The second is extrapolated single-nutrient evidence from the academic literature: studies on isolated ingredients, conducted independently, often in different populations, at different doses, over different durations. Neither of these tells you what the combined formula does in your specific consumers at your specific dose. They are proxies, and they are used because the real thing, a properly designed RCT, is expensive, slow, and requires genuine scientific infrastructure that most companies simply do not have.[2][3]
The result is a credibility gap that affects the entire sector. And when the rare RCT does get commissioned, a further problem emerges: because these trials are run so infrequently, the internal expertise to design them well often does not exist.
Infrequency breeds poor design
In pharmaceutical development, clinical trial design is a core competency. Companies run trials repeatedly, refine their methods across cycles, and build institutional knowledge about what works and what does not. Mistakes are costly, and so they get corrected.
In the supplement industry, RCTs are more commonly a one-time exercise, usually commissioned to generate a headline finding rather than to build a genuine evidence base. Without repeated exposure to the full trial lifecycle, the internal expertise required to make sound methodological decisions simply does not develop. There is no iterative learning, no accumulated institutional knowledge, and no mechanism for correcting the same errors as they recur across successive studies. Understanding what those errors are is the first step toward raising the bar. Below I describe some of the most common methodological issues in clinical trials across the supplement industry.
Who you study matters as much as what you test
Even among supplement companies that do commission RCTs, there is a persistent tendency to study populations where a positive result is most likely rather than populations that reflect their actual customers. A common example is enrolling participants who, prior to joining the trial, have been diagnosed with a clinical deficiency in the nutrient being tested. Actively recruiting patients with a confirmed deficiency will almost always produce a measurable response, but that finding says little about what the same supplement does in healthy individuals.
More broadly, trials are frequently conducted in individuals with diagnosed disease states, people with elevated cardiovascular risk, mild cognitive impairment, or metabolic dysfunction, where the physiological conditions make a response far more likely. The majority of supplement consumers are none of these things. They are healthy adults looking to maintain or support their wellbeing, a population in which detecting genuine benefit requires greater statistical power, more sensitive outcome measures, and longer follow-up periods.
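The power problem can be made concrete. The sketch below uses the standard normal approximation for a two-sample comparison to estimate how many participants each trial arm needs; the effect sizes are illustrative assumptions only (a moderate standardised effect in a deficient population, a small one in a healthy population), not figures from any particular trial.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate participants per arm for a two-sample comparison,
    using the normal approximation to the two-sample t-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z.inv_cdf(power)           # power requirement
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Illustrative assumptions: d ~ 0.5 in a deficient population, d ~ 0.2 in a healthy one.
print(n_per_group(0.5))  # ~63 participants per arm
print(n_per_group(0.2))  # ~393 participants per arm
```

Shrinking the plausible effect from moderate to small multiplies the required sample size roughly sixfold, which is precisely why trials powered for deficient populations cannot simply be rerun unchanged in healthy ones.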
When findings generated in clinical populations are carried into marketing materials aimed at healthy consumers, the science is not fabricated; it is simply being applied to a population it was never designed to describe. That distinction rarely makes it into the advertising, and it can mislead the consumer.[3]
Measuring the right things, the right way
Endpoint selection is where many supplement RCTs quietly fall apart. The temptation is to measure outcomes that are straightforward to collect rather than outcomes that are genuinely sensitive to the intervention. Subjective endpoints such as energy levels, mood, and cognitive performance are common in nutritional research, but they require validated psychometric instruments, rigorous blinding, and careful consideration of placebo response rates, which tend to be high in this space. A custom questionnaire designed in-house and deployed over four weeks is not a substitute.
Objective measurements provide far stronger evidence that an intervention can influence a specific outcome of interest, but they introduce their own challenges. Where biochemical endpoints are used, the laboratory infrastructure matters enormously. Certified, validated analytical labs (those accredited to run the specific assays required for the compounds being tested) are a prerequisite for generating results that carry any scientific weight. They are also costly, and that cost is frequently avoided in favour of general-purpose clinical labs that may lack the precision required to detect meaningful change in nutrient-related biomarkers.
A related problem is the over-reliance on dietary intake data as a proxy for nutritional status. Self-reported food diaries and questionnaires are inherently imprecise, subject to recall bias, social desirability effects, and day-to-day variation that makes them a poor basis for scientific conclusions.[6] More fundamentally, they measure what a participant reports consuming, not what their body is actually absorbing and utilising. Biochemical blood analysis is the most reliable way to assess how much of a given compound is genuinely bioavailable and metabolically active in an individual.[7] Trials that substitute dietary recall for direct measurement of circulating nutrient levels are not measuring what they think they are measuring.
Duration and dose: two commonly misjudged variables
Many supplement RCTs are designed around commercial timelines rather than biological ones. A four-to-six-week trial fits a marketing calendar. It does not necessarily fit the mechanism of action of the intervention being tested. Nutrients that work through structural, epigenetic, or cumulative pathways (think omega-3s and membrane composition, or B-vitamins and methylation) require time to exert measurable effects.[4] Designing a trial that is too short to detect the signal and then concluding the supplement is ineffective is a fundamental methodological error, and one that causes genuine harm to the evidence base.[5]
Dose is the other variable that is consistently mishandled, though in a less obvious way than duration. A common and underreported problem is that the dose used within the trial does not match the dose the company recommends to consumers on the product label. It is not unusual for a trial to be conducted at two or even three times the stated daily serving, producing a measurable result that has no reliable bearing on what happens at the dose a consumer will actually take. The effect, if real, may be entirely dose-dependent, but that caveat is rarely prominent in how findings are communicated. The consumer purchasing a product and following the recommended daily dose has no way of knowing that the evidence base was generated at a fundamentally different intake level. This is one of the more quietly misleading practices in supplement research, and one that pre-specified, label-dose-matched trial design would address directly.
The blinding problem nobody likes to talk about
Blinding is harder in nutritional research than in pharmaceutical trials, and it receives far less attention than it deserves.[8] Active ingredients can have distinctive sensory properties: fish oils have a characteristic consistency, vitamins can have a specific colour, and some botanicals have an unmistakable smell. A placebo that is detectably different from the active product compromises blinding, inflates placebo response, and introduces systematic bias into the result.
Developing a genuinely matched placebo (one that replicates the appearance, taste, texture, and any incidental physiological effects of the active product) is technically demanding and expensive. For multi-ingredient formulations, the challenge is compounded: a placebo that neutralises the sensory signature of ten or more compounds simultaneously may itself require significant formulation work. It is a cost that many supplement companies are unwilling to absorb, particularly at the initial stages of clinical trial testing.
The consequence is that a significant proportion of supplement trials, particularly those described as pilots or feasibility studies, are run without a placebo-controlled arm at all. This is not always disclosed prominently. An uncontrolled before-and-after design can show improvement on virtually any subjective measure simply due to the passage of time, participant expectation, or regression to the mean. Without a placebo arm, there is no way to separate the effect of the intervention from the effect of being in a trial. The resulting data is, scientifically speaking, very limited.
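The before-and-after artefact is easy to demonstrate. The simulation below, with purely illustrative numbers, enrols participants because they score poorly at screening, remeasures them with no intervention at all, and still shows a marked "improvement" driven entirely by regression to the mean.

```python
import random

random.seed(42)

TRUE_MEAN, SD = 50.0, 10.0  # hypothetical wellbeing score in the general population

def score() -> float:
    """One noisy measurement of a stable underlying trait."""
    return random.gauss(TRUE_MEAN, SD)

# Enrol only people who score poorly at screening -- a common entry criterion.
enrolled = [s for s in (score() for _ in range(10_000)) if s < 40]

baseline = sum(enrolled) / len(enrolled)
followup = sum(score() for _ in enrolled) / len(enrolled)  # no intervention given

print(f"baseline mean:  {baseline:.1f}")   # well below 40 by construction
print(f"follow-up mean: {followup:.1f}")   # drifts back toward 50: apparent 'improvement'
```

The follow-up scores climb roughly fifteen points on this scale without any product being administered, which is exactly the signal an uncontrolled pilot study would report as a benefit.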
What good looks like, and why it matters
None of this is insurmountable. The methodology for running a rigorous RCT in nutritional science exists and is well established.[9] Pre-registration on ClinicalTrials.gov. Prospectively defined primary endpoints. Appropriate power calculations based on realistic effect sizes. Populations that reflect the intended consumer. Trial durations matched to the mechanism of action. Independent statistical analysis.
What it requires is a genuine commitment to scientific integrity over marketing convenience and a recognition that the two are not in conflict over any meaningful timescale. Consumers are becoming more sophisticated. Regulators are paying closer attention. The brands that will endure in this space are those building evidence that can withstand scrutiny, not evidence engineered to avoid it.
The supplement industry has an opportunity to mature. Running more RCTs is part of that. Running them well is the part that actually matters. The errors described here are not obscure edge cases; they are structural patterns that repeat across the industry, often because there has been no commercial pressure to correct them. That is beginning to change. As peer-reviewed publication becomes a more meaningful differentiator, and as consumers and healthcare professionals grow more capable of scrutinising the evidence behind the products they recommend and use, the standard will rise. The companies that invest now in genuinely rigorous science (the right populations, the right doses, the right durations, the right controls) will be the ones that the scientific and clinical communities, and consumers, take seriously and trust.
Author Bio

Dr Harry Jarrett, Director of Science and Research at Heights
Dr Harry Jarrett is Director of Science and Research at Heights, where he leads the company’s award-winning clinical research programme, including the design and execution of randomised controlled trials to evaluate the efficacy of nutritional supplements. He holds a PhD in Human Nutrition and Population Health from Ulster University, alongside an MSc (Distinction with Postgraduate Commendation) and BSc (First Class Honours) from the University of Exeter.
Prior to Heights, Dr Jarrett served as COVID-19 Research Laboratory Coordinator at Guy’s and St Thomas’ NHS Foundation Trust, overseeing the research laboratories responsible for Phase I–III human clinical trials investigating the efficacy of novel COVID-19 vaccines during the pandemic.
His research has been published in peer-reviewed journals including the American Journal of Clinical Nutrition, and he has received awards from the Parliamentary and Scientific Committee, the Federation of American Societies for Experimental Biology, and the University of Exeter. He is an active member of the Parliamentary and Scientific Committee and an invited member of the Royal Society of Biology.
The views expressed in this article are those of the author and do not represent the editorial position of Life Science Daily News. Contributors may have a commercial interest in the topics they write about. For more information, see our Contributor Policy.