Literature Appraisal

last authored:
last reviewed:

Introduction


Effective health care relies on sound knowledge and critical appraisal of existing research, so that the best evidence-based decision can be made whenever possible.

Assessing Methodological Quality

When analyzing a paper for its methodological quality, a number of questions may be considered:

 

1) Was the study original?

2) Who is the study about?

3) Was the study design sensible?

4) Was clinical bias avoided or minimized?

5) Was assessment 'blind'?

6) Were preliminary statistical questions dealt with?

Study Validity

Several factors must be weighed when assessing a study's validity:

  • accuracy
  • bias
  • confounding
  • standardization
  • effect modifiers

Accuracy

One of the most significant is the study's accuracy - the degree to which the study's findings are free from error. Accuracy involves two components:

Precision/Reliability

  • the degree to which the study is free from random (nonsystematic) error
  • random error can result from random measurement error or from sampling error
  • reliability is the degree to which results can be replicated, whether between different raters (inter-rater) or by the same rater on repeat assessment (intra-rater)
    • Cohen's kappa is a statistic designed to measure such agreement: values below 0.4 indicate poor reliability, while values above 0.75 indicate excellent reliability (a minimal calculation is sketched below)
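
By way of illustration, here is a minimal Python sketch of Cohen's kappa for two hypothetical raters classifying the same ten specimens; the ratings and category labels are invented for the example.

  from collections import Counter

  def cohens_kappa(rater_a, rater_b):
      """Chance-corrected agreement between two raters over the same items."""
      n = len(rater_a)
      observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
      counts_a, counts_b = Counter(rater_a), Counter(rater_b)
      # expected agreement if the two raters assigned categories independently
      expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
      return (observed - expected) / (1 - expected)

  # hypothetical ratings of 10 biopsies by two pathologists
  a = ["benign", "benign", "malignant", "benign", "malignant",
       "benign", "benign", "malignant", "benign", "benign"]
  b = ["benign", "malignant", "malignant", "benign", "malignant",
       "benign", "benign", "benign", "benign", "benign"]
  print(round(cohens_kappa(a, b), 2))   # 0.47: fair-to-good agreement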

Validity

  • the degree to which systematic error is absent
  • encompasses the applicability of results to the study sample (internal validity) and to the wider population (external validity, also called generalizability)
  • internal validity: degree to which results are not due to bias or confounding
  • external validity: affected by topic or subject matter

Bias

Bias is a systematic deviation from the truth arising from any trend in the collection, analysis, interpretation, publication, or review of data.

  • its effect on internal validity is the more serious concern

selection bias

  • occurs primarily in design phase
  • threat to internal validity: participants chosen from different target groups
  • threat to external validity: systematic differences in those who take part from the population as a whole
  • self-selection can distort results, e.g. if the sickest patients are the most willing to try new therapies

measurement bias

  • how the instrument (a scope or a survey) might systematically over- or under-estimate what it is trying to measure
  • recall bias: systematic differences in the accuracy or completeness of recall of past events or experiences
  • interviewer bias: subconscious or conscious gathering of selective data

controlling bias

  • the key is prevention through careful study design and execution
  • randomization, blinding, and standardization are the principal tools

Confounding

Confounding is the confusion of the effects of variables, where an additional variable may be responsible for an apparent association or outcome. Confounding leads to systematic error and is, in effect, a form of bias.

  • a confounder can either cause or prevent the outcome of interest, and is associated with, but not caused by, the risk factor/exposure
  • a confounder is distributed differently across study groups, leading to differential effects on the outcome
  • stratifying by the confounder and re-examining the relationship helps identify its presence and control for it

Methods for Controlling for Confounding

  • Design a randomized trial, restrict who can participate, or match participants according to the confounder
  • Perform a stratified analysis to separate participants into subgroups, or include the confounder (or several confounders) in a logistic regression model (a stratified example is sketched below)
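
As a rough illustration of the stratified approach, the following Python sketch uses made-up counts: the crude risk ratio suggests a strong association, but the stratum-specific risk ratios are similar to one another and close to 1, pointing to confounding.

  def risk_ratio(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
      """Risk of the outcome in the exposed group divided by risk in the unexposed group."""
      return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

  # made-up counts per stratum: (exposed cases, exposed total, unexposed cases, unexposed total)
  strata = {
      "confounder present": (90, 200, 40, 100),
      "confounder absent":  (10, 200, 30, 600),
  }

  # crude analysis: collapse the strata and ignore the confounder
  ec, et, uc, ut = (sum(s[i] for s in strata.values()) for i in range(4))
  print("crude RR:", round(risk_ratio(ec, et, uc, ut), 2))        # 2.5

  # stratified analysis: similar stratum-specific RRs that differ from the
  # crude RR point to confounding rather than effect modification
  for name, (a, b, c, d) in strata.items():
      print(name, "RR:", round(risk_ratio(a, b, c, d), 2))        # ~1.1 and 1.0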

Standardization

A technique for removing, as much as possible, the effects of differences in confounding variables when comparing two or more populations.

Standardization is an adjustment of the crude rate of a health-related event to a rate comparable with a standard population.

Effect Modifiers

Effect modifiers are third variables that alter the direction or strength of the association between two other variables. They are clinically informative and should be looked for.

  • differ from confounders in that the stratum-specific associations genuinely differ from one another, rather than being distorted by the third variable
  • can be analyzed using stratification or regression (see the sketch below)
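
A companion sketch, again with made-up counts, shows what effect modification looks like under stratification: the stratum-specific risk ratios differ markedly from each other, so they should be reported separately rather than pooled.

  # made-up counts per stratum: (exposed cases, exposed total, unexposed cases, unexposed total)
  strata = {
      "modifier present": (60, 100, 10, 100),
      "modifier absent":  (12, 100, 10, 100),
  }

  for name, (a, b, c, d) in strata.items():
      rr = (a / b) / (c / d)
      print(f"{name}: RR = {rr:.1f}")   # RR = 6.0 vs RR = 1.2 -> effect modification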

return to top

Understand Internal and External Validity

Internal validity refers to the validity within the study itself (did the researchers use valid methods to study the sample within the experiment), whereas external validity is the degree to which the results of the experiment can be generalized to a larger population outside the sample used in the experiment.


Understand basics of bias and confounding and how to control each

Bias is a systematic error in a study that can lead to erroneous results (something is wrong with the study that skews the results away from the truth).


The two main categories of bias according to MacPherson are as follows (although in reality there are many forms of bias that do not fit neatly into either category):

  1. Selection bias: a bias that occurs in the way the sample was selected. Example: if you attempted to uncover the average amount of beer consumed by Nova Scotia residents but only sent the survey out to students at Dalhousie, who may drink more than the average Nova Scotian.

  2. Measurement bias: an error inherent in the method of measurement itself. Example: asking a person how many sex partners they've had in the past month; if it's Steve, he may be inclined to play down the numbers so that Moeller doesn't get jealous (this is called recall bias). Or if it's Moeller interviewing Steve, he may ignore evidence that Steve has had some action on the side; this is called interviewer bias.


To control for bias you must set up your experiment as well as possible (this is mostly common sense). As a general rule, to avoid selection bias, randomize participants to the different study groups; to avoid measurement bias, use standardized and objective measurements.
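
As a simple illustration of randomization, the Python sketch below allocates a hypothetical list of participants to two arms at random; the participant labels and the two-arm design are assumptions made for the example.

  import random

  def randomize(participants, seed=None):
      """Randomly allocate participants to two arms to guard against selection bias."""
      rng = random.Random(seed)
      shuffled = list(participants)
      rng.shuffle(shuffled)
      half = len(shuffled) // 2
      return {"treatment": shuffled[:half], "control": shuffled[half:]}

  groups = randomize([f"participant_{i}" for i in range(1, 21)], seed=42)
  print(len(groups["treatment"]), len(groups["control"]))   # 10 10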


Confounding variables (a.k.a. confounders) are another source of bias. A confounder is a variable, other than the variable being examined, that could explain all or part of the results. Example: an experiment may examine whether there is an association between coffee drinking and lung cancer, and find a strong association. However, the confounding variable of smoking in fact explains the finding (i.e. many coffee drinkers smoke, and this is why many coffee drinkers end up with lung cancer).


Controlling for confounding variables is much like controlling for bias (common sense): randomize participants to treatments, and adjust for factors that could skew the results when you cannot randomize (e.g. if you are reviewing existing records you cannot go back in time and reassign a person to a different category).
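
The coffee/smoking example can also be sketched with regression adjustment. The snippet below, which assumes the pandas and statsmodels libraries are available, simulates data in which smoking drives both coffee drinking and lung cancer, then compares the crude odds ratio for coffee with the smoking-adjusted one; the adjusted estimate falls back towards 1.

  import numpy as np
  import pandas as pd
  import statsmodels.formula.api as smf

  rng = np.random.default_rng(0)
  n = 5000
  smoking = rng.binomial(1, 0.4, n)                      # confounder
  coffee = rng.binomial(1, 0.2 + 0.5 * smoking)          # exposure, linked to smoking
  p_cancer = 1 / (1 + np.exp(-(-4 + 2.0 * smoking)))     # outcome depends on smoking only
  cancer = rng.binomial(1, p_cancer)
  df = pd.DataFrame({"coffee": coffee, "smoking": smoking, "cancer": cancer})

  crude = smf.logit("cancer ~ coffee", data=df).fit(disp=False)
  adjusted = smf.logit("cancer ~ coffee + smoking", data=df).fit(disp=False)
  print("crude OR for coffee:   ", round(float(np.exp(crude.params["coffee"])), 2))
  print("adjusted OR for coffee:", round(float(np.exp(adjusted.params["coffee"])), 2))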


Understand when direct and indirect standardization are used to compare rates

Standardization is used to remove the effect of confounding variables (most often age) when comparing rates between two groups.


Direct standardization applies the sample population's stratum-specific rates (usually age-specific) to the age structure of a standard, larger population, which thereby allows the resulting rate to be compared with the larger population.


Indirect standardization is used when the sample population's rates are unstable or unknown. The larger population's rates are instead applied to the sample population's structure (usually by age), giving an expected number of events that can be compared with the number observed.
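
A minimal Python sketch of direct standardization, using made-up age-specific rates and a hypothetical standard population: the study population's age-specific rates are applied to the standard population's age structure to give an age-standardized rate.

  # made-up age-specific rates (per 1,000 person-years) for the study population
  study_rates = {"0-39": 1.0, "40-64": 5.0, "65+": 20.0}
  # hypothetical standard population age structure
  standard_population = {"0-39": 500_000, "40-64": 300_000, "65+": 200_000}

  total = sum(standard_population.values())
  # apply the study's rates to the standard population's age structure
  expected_events = sum(study_rates[age] / 1000 * standard_population[age]
                        for age in standard_population)
  print(f"age-standardized rate: {expected_events / total * 1000:.1f} per 1,000")   # 6.0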


Measures of Association

RR = relative risk: the ratio of the outcome risk in the exposed group to that in the unexposed group

OR = odds ratio: the ratio of the odds of the outcome in the exposed group to the odds in the unexposed group

RD = risk difference: the absolute difference in outcome risk between the exposed and unexposed groups

NNT = 1/RD = number needed to treat (a worked calculation from a 2x2 table follows)
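
These measures can be read straight off a 2x2 table. The Python sketch below uses invented counts (a = treated with the outcome, b = treated without, c = controls with the outcome, d = controls without):

  # hypothetical 2x2 counts
  a, b, c, d = 15, 85, 30, 70          # treated: 15/100 events; control: 30/100 events

  risk_treated = a / (a + b)
  risk_control = c / (c + d)

  rr = risk_treated / risk_control     # relative risk           -> 0.50
  rd = risk_treated - risk_control     # risk difference         -> -0.15
  odds_ratio = (a / b) / (c / d)       # odds ratio              -> 0.41
  nnt = 1 / abs(rd)                    # number needed to treat  -> ~6.7

  print(f"RR={rr:.2f}  OR={odds_ratio:.2f}  RD={rd:.2f}  NNT={nnt:.1f}")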

return to top

Resources and References

Sullivan I. et al, 2004. Framingham...

NHS Evidence - Health Information Resources

BMJ Qualitative Research papers - Trisha Greenhalgh

return to top