The study addresses an appropriate and clearly focused question (the population studied · the risk factors studied · whether the study tried to detect a beneficial or harmful effect).
Did the authors use an appropriate method to answer their question? Is a case-control study an appropriate way of answering the question under the circumstances (i.e. is the outcome rare or harmful)? Did it address the study question?
Were there enough subjects (employees, teams, divisions, organizations) in the study to establish that the findings did not occur by chance? Was the selection of cases and controls based on external, objective and validated criteria?
Were both groups comparable at the start of the study? Were objective and unbiased outcome criteria used? Is there data-dredging? Are objective and validated measurement methods used to measure the outcome? If not, was the outcome assessed by someone who was unaware of the group assignment (i.e. was the assessor blinded)?
The cases and controls are taken from comparable populations. The same exclusion criteria are used for both cases and controls. What percentage of each group (cases and controls) participated in the study? Comparison is made between participants and non-participants to establish their similarities or differences.
Cases are clearly defined and differentiated from controls. It is clearly established that controls are non-cases.
Were the cases recruited in an acceptable way?
We are looking for selection bias which might compromise the validity of the findings. Are the cases defined precisely?
Were the cases representative of a defined population (geographically and/or temporally)?
Was there an established reliable system for selecting all the cases? Are they incident or prevalent? Is there something special about the cases?
Is the time frame of the study relevant to disease/exposure? Were there a sufficient number of cases selected? Was there a power calculation?
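A power calculation can be sketched with the standard two-proportion normal approximation. This is only one of several accepted formulas, and the exposure prevalences below (30% among cases, 20% among controls) are invented for illustration:

```python
from math import ceil
from statistics import NormalDist

def cases_per_group(p0: float, p1: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate number of cases (and, with 1:1 matching, controls) needed
    to detect a difference in exposure prevalence between cases (p1) and
    controls (p0), using the classic two-proportion normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    variance = p1 * (1 - p1) + p0 * (1 - p0)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p0) ** 2)

# Invented figures: 30% exposure among cases vs 20% among controls.
print(cases_per_group(p0=0.20, p1=0.30))  # 291 per group
```

A study reporting far fewer subjects than such a calculation suggests is unlikely to rule out chance as an explanation for a null finding.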
Measures will have been taken to prevent knowledge of primary exposure influencing case ascertainment.
Were the controls selected in an acceptable way?
Were the controls representative of a defined population (geographically and/or temporally)?
Was there something special about the controls?
Was the non-response high? Could non-respondents be different in any way?
Are they matched, population based or randomly selected? Were there a sufficient number of controls selected?
Exposure status is measured in a standard, valid and reliable way.
Was the exposure accurately measured to minimise bias?
Was the exposure clearly defined and accurately measured?
Did the authors use subjective or objective measurements?
Do the measures truly reflect what they are supposed to measure? (Have they been validated?)
Were the measurement methods similar in the cases and controls?
Did the study incorporate blinding where feasible?
Is the temporal relation correct? (Does the exposure of interest precede the outcome?)
What confounding factors have the authors accounted for? (genetic · environmental · socio-economic).
Have the authors taken account of the potential confounding factors in the design and/or in their analysis? (e.g. restriction in the design, or analytic techniques such as stratification, regression modelling, or sensitivity analysis to correct, control, or adjust for confounding).
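One common analytic technique for adjusting for a single categorical confounder is stratified analysis with the Mantel-Haenszel pooled odds ratio. A minimal sketch, with invented 2×2 counts per stratum:

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel pooled odds ratio across strata of a confounder.
    Each stratum is a tuple (a, b, c, d):
      a = exposed cases,    b = exposed controls,
      c = unexposed cases,  d = unexposed controls."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Invented example: two strata of a confounder (e.g. two age groups).
strata = [(10, 20, 5, 40), (15, 15, 10, 30)]
print(round(mantel_haenszel_or(strata), 2))  # 3.38
```

Comparing the pooled estimate with the crude (unstratified) odds ratio indicates how much of the apparent association the confounder explains.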
What are the results of this study? What are the bottom line results? Is the analysis appropriate to the design? How strong is the association between exposure and outcome (look at the odds ratio)? Are the results adjusted for confounding, and might confounding still explain the association? Has adjustment made a big difference to the OR?
The main potential confounders are identified and taken into account in the design and analysis. Is the effect size practically relevant? How precise is the estimate of the effect? Were confidence intervals given?
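The strength and precision questions above can be made concrete: from a 2×2 table, the odds ratio and an approximate 95% confidence interval follow from the Woolf (log-OR) method. The counts below are invented:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI from a 2x2 table:
      a = exposed cases,    b = exposed controls,
      c = unexposed cases,  d = unexposed controls.
    Uses the Woolf method: CI on the log-OR scale, back-transformed."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of ln(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(20, 80, 10, 90)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 2.25 0.99 5.09
```

A confidence interval that spans 1 (as in this invented example) means the data are also compatible with no association, however large the point estimate looks.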
How well was the study done to minimise the risk of bias or confounding?
Taking into account clinical considerations, your evaluation of the methodology used, and the statistical power of the study, do you think there is clear evidence of an association between exposure and outcome?
How precise are the results? How precise is the estimate of risk? Size of the P-value · Size of the confidence intervals · Have the authors considered all the important variables? How was the effect of subjects refusing to participate evaluated?
Do you believe the results? A big effect is hard to ignore! · Could it be due to chance, bias or confounding? · Are the design and methods of this study sufficiently flawed to make the results unreliable? · Consider the Bradford Hill criteria (e.g. time sequence, dose-response gradient, strength, biological plausibility).
Can the results be applied to the local population? The subjects covered in the study could be sufficiently different from your population to cause concern · Your local setting may differ considerably from that of the study · Can you quantify the local benefits and harms?
Do the results of this study fit with other available evidence? Consider all the available evidence from RCTs, systematic reviews, cohort studies and case-control studies for consistency.
Could there be confounding factors that haven’t been accounted for?
Are the results of this study directly applicable to the patient group targeted by this guideline?
Can the results be applied to your organization?
Are conflicts of interest declared?