Sample Size

A trial should be big enough to have a high chance of detecting a worthwhile effect if it exists, and thus to give reasonable confidence that no such effect exists if none is found.

 

Factors Determining Sample Size

 

Level of clinically significant effect

A clinically significant difference in outcomes is not the same as a statistically significant difference. For example, a decrease in blood pressure of 10 mmHg could be statistically shown to be due to a given treatment but have limited impact on a patient's risk of cardiovascular disease.

A statistical nomogram can be used to determine the number of subjects required to demonstrate an effect, if it exists (this probability of detection is the power of the study).
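As an alternative to reading values off a nomogram, the same calculation can be done in software. Below is a minimal sketch using Python's statsmodels for comparing two means; the standardised effect size, alpha and power values are illustrative assumptions, not figures from the text.

    # Sample size for comparing two means, as a nomogram would give.
    # Assumed inputs: standardised effect size 0.5, alpha 0.05, power 0.8.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.5,  # standardised difference (Cohen's d)
                                       alpha=0.05,       # two-sided significance level
                                       power=0.8)        # desired power (1 - beta)
    print(f"Subjects required per group: {n_per_group:.0f}")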

 

The Power of a Study

The power of a study is its ability to demonstrate an association if one truly exists; it is the probability of avoiding a type II (β) error, and is therefore equal to 1 − β.

Power is determined by the sample size, the size of the effect being sought (the smallest clinically important difference), the variability of the outcome measure and the chosen significance level (α).

Underpowered studies are very common, usually because of difficulty recruiting enough patients. They carry a high risk of a type II (β) error, in which a real treatment effect is missed and the intervention is wrongly concluded to have no effect.
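To illustrate how power falls away in small studies, here is a rough sketch in Python (statsmodels) computing power for an assumed standardised effect size of 0.5 at several sample sizes; the numbers are illustrative only.

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    for n_per_group in (10, 20, 50, 100):
        # Power to detect an assumed standardised effect of 0.5 at alpha = 0.05
        achieved_power = analysis.power(effect_size=0.5, nobs1=n_per_group, alpha=0.05)
        print(f"n = {n_per_group:>3} per group -> power = {achieved_power:.2f}")

Small samples give low power, so a genuine effect of this size would usually be missed.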

 

Small, "underpowered" studies are less likely to find a real difference as significant.

Beta, and hence power, should be fixed at the time of study design in order to determine the optimal sample size; a sample size calculator is useful for this.

A useful exercise is to select 'compare proportions for two samples' in such a calculator and to alter p1 and p2, or the power, and observe the effect on the required sample size.
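The same exercise can be reproduced in code. A minimal sketch with Python's statsmodels for comparing two proportions follows; the proportions p1 and p2 and the power values below are illustrative assumptions.

    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    analysis = NormalIndPower()
    p1 = 0.20                            # assumed event rate in the control group
    for p2 in (0.10, 0.15):              # assumed event rates under treatment
        for power in (0.80, 0.90):
            es = proportion_effectsize(p1, p2)   # Cohen's h for two proportions
            n = analysis.solve_power(effect_size=es, alpha=0.05, power=power)
            print(f"p1={p1}, p2={p2}, power={power}: n per group = {n:.0f}")

Altering p1 and p2 (the size of the difference sought) or the desired power shows how quickly the required sample size changes.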