
When it comes to choosing a targeted solution for a specific clinical condition, the most challenging task can be differentiating between evidence-based and experimental offerings.

Vendors often use buzzwords like “clinically proven” and misleading statistics to sound compelling, but unpublished data, such as marketing claims, company white papers, and testimonials, is not the same as evidence-based medicine.

How pervasive is this issue? A 2017 industry report found that, of 318,000 mobile health apps, only 571 had been tested in published trials. Couple this with systematic reviews published in 2011 and 2014 that found companies received no return on investment from worksite health promotion programs, and the picture is clear: the workplace wellness industry has a reputation for under-delivering on its promises.


As the head of research for a digital health company focused on treating patients with type 2 diabetes, I wish I heard these three questions from benefits leaders and consultants more often:

1. Is the wellness vendor measuring, reporting, and standing behind metrics tied to health outcomes?

Employers should be wary of vendors that latch onto a metric simply because it can be measured. No single metric is a perfect proxy for health. A vendor should be expected to report multiple metrics related to its main health claim, and it should be confident enough in its offering to set up commercial contracts that hold it accountable for achieving the reported outcomes.

When you don’t ask this question, you risk partnering with a vendor that measures success with a convenient metric that does not necessarily move the underlying health outcome. For example, with the availability of miniature accelerometers, counting steps has taken off as a corporate wellness metric.

While exercise is beneficial for health in general, it is usually an ineffective approach to weight loss. In fact, the “10,000 steps a day” mantra actually began as a 1960s Japanese marketing campaign, and it is usually justified by an overly simplified and discredited “eat less, exercise more” view of human metabolism. A 2013 Cochrane systematic review found very limited evidence that pedometer programs resulted in health benefits. Similarly, a 2017 systematic review found a very modest weight loss effect (<4 lbs).

2. Has the wellness vendor conducted a clinical trial of its product or service, and is the trial registered at clinicaltrials.gov with prospectively stated outcome measures?

Evidence-based vendors should have results to share from one or more clinical trials conducted by medical and research professionals trained in human subjects research. This differs profoundly from other means of data collection including customer surveys and records reviews for two reasons:

  1. Trials require approval by an institutional review board (IRB) or ethical review board, and often require patient informed consent and advance registration.
  2. Trial data is analyzed prospectively rather than retrospectively. This matters because, in a prospective analysis, primary and secondary outcome measures are defined before the trial is conducted. In a retrospective analysis, by contrast, an investigator can interrogate the data from any number of angles. That approach can illuminate interesting patterns, but it can also become a data dredging, or so-called “p-hacking,” exercise that hunts for any good news.
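The data dredging problem is easy to demonstrate. In this minimal sketch (all numbers hypothetical), a program with no real effect is evaluated against many candidate metrics; simply reporting the best-looking one makes random noise look like an outcome:

```python
import random
import statistics

random.seed(42)

N = 100          # participants
N_METRICS = 20   # candidate outcome metrics, none truly affected

# Simulate a null program: for every metric, "before" and "after"
# readings are drawn from the same distribution (no real effect).
best_metric, best_change = None, 0.0
for m in range(N_METRICS):
    before = [random.gauss(100, 15) for _ in range(N)]
    after = [random.gauss(100, 15) for _ in range(N)]
    change = statistics.mean(before) - statistics.mean(after)
    # A retrospective analyst free to scan all metrics keeps the best one.
    if abs(change) > abs(best_change):
        best_metric, best_change = m, change

print(f"Cherry-picked metric {best_metric}: mean change {best_change:+.2f}")
```

The more metrics an analyst is free to scan after the fact, the larger the best chance result becomes, which is exactly why prospective designs force the outcome measures to be named first.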

Post-hoc subgroup analysis is one type of retrospective analysis that can produce misleading results. For example, a workplace wellness company, in its unpublished marketing materials, retrospectively focused on members with elevated risk factors on first measurement, highlighting that “63 percent lowered their triglyceride (TG) levels.” However, it made no mention of how much levels dropped or what happened to members who started with lower TG levels. Because of regression to the mean, a subgroup selected for elevated first readings will tend to show lower second readings even with no intervention at all. Examining all enrolled participants in a prospective design with prespecified statistical tests would avoid this flawed analysis.
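The driver of this illusion is regression to the mean: a noisy measurement selected for being high will, on average, be lower the next time it is taken. A minimal simulation (all numbers hypothetical) shows how a majority of a retrospectively chosen “elevated” subgroup appears to improve with no intervention whatsoever:

```python
import random

random.seed(0)

# Simulate triglyceride readings for 1,000 members with NO intervention.
# Each reading = a stable true level + independent measurement noise.
true_levels = [random.gauss(150, 30) for _ in range(1000)]
first = [t + random.gauss(0, 25) for t in true_levels]
second = [t + random.gauss(0, 25) for t in true_levels]

# Post-hoc subgroup: only members "elevated" on the first measurement.
elevated = [(f, s) for f, s in zip(first, second) if f > 200]
improved = sum(1 for f, s in elevated if s < f)

pct = 100 * improved / len(elevated)
print(f"{pct:.0f}% of the 'elevated' subgroup lowered their levels")
```

Members whose first reading exceeded the cutoff were disproportionately those whose measurement noise happened to be positive, so their second reading falls back toward their true level. A vendor could truthfully report the headline percentage while the program itself did nothing.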

3. Are the vendor’s results published in a peer-reviewed journal, or is the submitted manuscript available from a pre-print server?

Even though publishing results in peer-reviewed journals usually takes many months, employers should hold all of their vendors, and their corporate marketing materials, to this higher standard. Through peer review, impartial reviewers uncover flaws in methods, reasoning, and writing. Peer review is not perfect, but it aims to vet submissions so the final published product will stand the test of time. In addition, more journals now allow scientists to post submitted manuscripts to online pre-print archives while a paper goes through the peer review process.

Employers should expect vendor marketing claims to withstand third-party review by scientific experts. As an example of why this matters, a population health company claimed on its website that 75 percent of its customers “lost between 3.2 to 33.6 percent of their total body weight.” But what did that really mean? Fortunately, the underlying results were published in a peer-reviewed journal. The true result was much less exciting: overall mean weight loss at four months was a disappointing 3.23 percent, and 71.4 percent of participants lost less than 5 percent of their body weight.

In short, every benefits professional should focus on building partnerships with vendors that provide peer-reviewed evidence on the efficacy, safety, sustainability, clinical scope, and real-world impact of their interventions.


James McCarter, MD, PhD, is the head of research for San Francisco-based Virta Health, a tech-enabled nationwide medical provider that delivers the first clinically proven treatment to safely and sustainably reverse type 2 diabetes without medications or surgery.