Helpnote – Robust Clinical Evidence

From time to time, Radiocentre receives a claim that requires robust clinical evidence if it is to be cleared: a published, randomized, double-blind, controlled trial. Often, once this has been requested, we receive evidence that doesn’t fit the bill. Because we appreciate that few copywriters have a background in clinical research, we’ve drawn up the following guidance to make clear exactly what is being asked for.

Randomized, Double-Blind, Controlled Trials
The Randomized, Double-Blind, Controlled Trial (RDBCT) is the bedrock of clinical research, and is the standard of evidence that we require when faced with a new medical claim. It’s also a bit of a mouthful, so let’s break it down into manageable chunks.

Controlled Trials
A pharmacist has developed a new kind of pill for headaches. He wants to find out if the pill really works, so he designs an intervention study: a study that takes people who suffer from headaches, gives them the treatment, and sees how they respond to it. But he knows that normal headaches go away with time, so giving a sample of two hundred people his pill isn’t going to tell him an awful lot. People whose headaches clear up won’t be able to tell whether the headache would have cleared even if they hadn’t taken the pill.i

In order, then, to determine whether the pill is effective, the pharmacist plans to divide the two hundred people into two groups. One group, Group A, receives the new pill, while the other, Group B, receives nothing. If, at the end of the trial, Group A contains more people whose headaches have cleared than Group B, then it may be that the pill is effective.

But this still isn’t enough. We want to know if the pill itself is effective, which means we need to make everything about Group A and Group B’s experience as similar as possible. This isn’t trivial, because the pharmacist knows about the placebo effect; sometimes the experience of receiving a treatment is enough to cause people to get better. So Group A still receives a pill, but now Group B receives a dummy pill that is identical in appearance to the actual pill, but has no active ingredient. This is a “placebo-controlled trial”.

If our example trial were taking place in the real world, it would be unlikely that the pill would be measured against a placebo. Where established treatments already exist, researchers will test against the existing treatment – they don’t generally need to find treatments less effective than those they already know about.

Placebos don’t just exist for pills and potions. It’s common for researchers to create placebos for medical devices. In the past they have even given people placebo surgical procedures with some really quite startling results.ii

The pharmacist, however, must take it further than just creating a placebo group. He knows that if the trial subjects know which group they are in, that could be enough to spoil the data. People tend not to respond as well to placebo treatments if they know they are placebo treatments. So the people mustn’t know which group they are in. This would be a single-blind placebo-controlled trial.

The pharmacist also knows that he and his assistants may unconsciously cue people in either group as to which group they are in. Their behaviour may also differ in a way that directly affects the response each individual has to treatment. Clinicians may be exuberant, positive and excited when giving their sexy new pills to someone in Group A, but dull when giving someone in Group B their boring old placebo. It’s a good idea, then, to make sure that none of the clinicians know which group is getting the real treatment.iii This is a double-blind placebo-controlled trial.

But there’s still more to add to the trial design to make sure the results are as accurate as they can be. The way in which people are assigned to groups can have an impact on the test results. Even something as seemingly fair as sticking everyone’s names on a list and alternating between A and B may have an undue influence on the results. The dilemma is that the groups need to be generated at random, but they also need to be as similar to each other as possible – there should be a similar age distribution, gender distribution and so on. Again, research scientists have developed techniques that make all this possible. It’s not something we need to deal with here, though. For Radiocentre purposes all we really care about is that randomization has taken place.
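Nothing Radiocentre asks for requires copywriters to do this themselves, but the balancing act described above can be sketched in code. The following is a toy stratified randomization in Python – the age bands, participant data and function name are all invented for illustration, and real trials use audited randomization procedures:

```python
import random

def stratified_randomize(participants, seed=42):
    """Assign participants to groups A and B at random, but within
    strata (here, age bands) so that both groups end up with a
    similar age distribution. Illustrative sketch only."""
    rng = random.Random(seed)
    groups = {"A": [], "B": []}
    # Bucket participants into strata by age band
    strata = {}
    for p in participants:
        band = p["age"] // 20  # e.g. ages 20-39 share a band
        strata.setdefault(band, []).append(p)
    # Within each stratum, shuffle then deal alternately to A and B
    for band, members in sorted(strata.items()):
        rng.shuffle(members)
        for i, p in enumerate(members):
            groups["A" if i % 2 == 0 else "B"].append(p)
    return groups

# Two hundred hypothetical headache sufferers, aged 20-79
participants = [{"id": i, "age": 20 + (i * 7) % 60} for i in range(200)]
groups = stratified_randomize(participants)
print(len(groups["A"]), len(groups["B"]))  # 100 100
```

The point of the sketch is simply that assignment is random within each stratum, yet the strata guarantee the two groups look alike on the characteristics that matter.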

Publish and be damned!
So the pharmacist has arrived at a trial design that ticks all the boxes, has run it, and has received some exciting results. The next step for the research, and something that Radiocentre requires, is that it is published in a peer-reviewed journal, where the study’s design and results can be scrutinised by other people who work in the field.

It’s important to note that, especially for break-through claims, a single, published study may not necessarily be enough to convince the world at large of the efficacy of a particular treatment.

In most cases, when we receive a trial that is sufficiently robust in its design, we will send it on to a consultant to take a look at. Occasionally, though, there may be apparent flaws in the study that suggest there is no point in sending it on. An example would be a trial that claims to be double-blind but has significantly higher drop-outs from the placebo group, which would indicate that patients in that group were aware they were not receiving the test medication.

If the study meets with the approval of the consultant then we will generally be able to clear the ad, possibly with some minor changes to the wording.

Stuff we don’t want

Pilot Studies and Small Studies
Before scientists do a full-scale trial, they will often run much smaller studies in order to test the design of the experiment. These pilot studies can be, and are, published, but will rarely provide enough data on which to prove a claim. The smaller the sample size, the more likely it is that random elements (sudden deaths, spontaneous remissions) will skew the data.

Abstracts
In order to get a consultant’s view on a particular piece of research, we need a full copy of the research itself. An abstract, a brief summary of the trial, is unlikely to have sufficient detail on which to base an informed opinion.

Irrelevant Studies
Some studies may appear to be relevant to the product being advertised but fall down on the population studied. An example would be a dermal filler making a claim about stimulating collagen growth that has only been tested on patients with flesh-wasting conditions. Studies need to look at the treatment that is being advertised, for the result that is being claimed, in the people to whom the ad is addressed. Even geography can be enough to make a study irrelevant!

Negative Findings
Believe it or not, Radiocentre does occasionally receive trials for products that fail to find a positive effect. A trial that could not demonstrate an effect obviously cannot be used to support a claim for one.

FDA Approval
Radiocentre cannot take FDA approval as proof of efficacy. The FDA’s approval standards appear to be quite lenient, and its remit does not extend beyond the borders of the United States.

MHRA Approval for devices
The Medicines and Healthcare products Regulatory Agency (MHRA) does not approve devices based on efficacy. Its regulation of medical devices is based on safety; it is not concerned about whether or not the Electromitt will cure your acne, only whether or not it will electrocute you.

Stuff that could be okay

Metastudies
Because running trials can be expensive, and because it is possible for multiple trials to offer differing results, researchers can perform metastudies, where they combine data from a number of different studies that all fulfill certain criteria. For example, they could take three similar trials, each with an individual sample of three hundred people, in order to simulate a trial involving nine hundred people.
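In its crudest form, the pooling described above just adds the trials together. The Python sketch below does exactly that with three entirely hypothetical trials of three hundred people each; real metastudies weight studies and check they are similar enough to combine before pooling anything:

```python
def pool_trials(trials):
    """Naively pool placebo-controlled trials by summing successes
    and sample sizes across studies. Illustrative sketch only."""
    treated_n = sum(t["treated_n"] for t in trials)
    treated_ok = sum(t["treated_ok"] for t in trials)
    placebo_n = sum(t["placebo_n"] for t in trials)
    placebo_ok = sum(t["placebo_ok"] for t in trials)
    return {
        "pooled_n": treated_n + placebo_n,
        "treated_rate": treated_ok / treated_n,
        "placebo_rate": placebo_ok / placebo_n,
    }

# Three hypothetical trials of 300 people each (150 per arm)
trials = [
    {"treated_n": 150, "treated_ok": 90, "placebo_n": 150, "placebo_ok": 60},
    {"treated_n": 150, "treated_ok": 84, "placebo_n": 150, "placebo_ok": 66},
    {"treated_n": 150, "treated_ok": 96, "placebo_n": 150, "placebo_ok": 63},
]
result = pool_trials(trials)
print(result["pooled_n"])  # 900 - simulating one larger trial
print(round(result["treated_rate"], 2), round(result["placebo_rate"], 2))  # 0.6 0.42
```

The attraction is the larger effective sample size: random noise that could dominate any one three-hundred-person trial matters far less across nine hundred people.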

Stuck In the Middle
A routine problem that we encounter is with beauticians and practitioners of alternative medicine who have bought in a treatment based on the marketing bumph that the manufacturer supplied, have used the device themselves on their clients, and based on their own experience believe the device works. It is only when they wish to advertise the device that they are faced with the requirement for robust clinical evidence, and in pursuing it, discover that the manufacturers have yet to perform any serious trials of the treatment.

We appreciate this can be frustrating for all involved, but without robust evidence we would only be able to clear an ad if it had literally no claims, implicit or explicit, about the product, e.g. a clinic may state the availability of a fat reduction cream, so long as its name is not a claim, and so long as the ad makes no mention, even indirectly, of slimming or weight loss.

Further Reading
If you’re interested in learning more about clinical evidence, the following are good starting places.

Bad Science by Ben Goldacre,
Trick or Treatment by Edzard Ernst & Simon Singh.

i This is why, although anecdotal evidence may be useful in finding treatments to study further, it can’t stand as proof of efficacy on its own.


iii There are complicated ways of achieving this, but essentially each individual’s supply of medication has a serial number which is catalogued by someone who has no direct access to the clinicians or the trial subjects; once the trial is over, the results for each individual are logged, and it is finally established who was in which group.
