
Critical Analysis of Case-control Studies in Cardiovascular Research

Leonardo Roever

Federal University of Uberlândia, Department of Clinical Research, Brazil

E-mail : bhuvaneswari.bibleraaj@uhsm.nhs.uk

DOI: 10.15761/HPC.1000195


Introduction

A case-control study is one in which participants are selected from individuals who already have the disease (cases) and from individuals who do not (controls); in each of the two groups, the number of individuals exposed to some risk factor is then ascertained. The goal is to examine a possible causal association between exposure to risk factors and the disease under study. If a factor is associated with the disease, the proportion exposed among the cases will be greater than the same proportion among the controls. This type of study is particularly useful when the disease is relatively infrequent and the time elapsed between exposure and evidence of its effect is long. Case-control studies raise limited ethical concerns, since there is no intervention or prospective observation of risk exposures [1-7].
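The comparison of exposure proportions and the resulting measure of association can be sketched in Python. The counts below are purely illustrative (hypothetical data, not from any real study):

```python
# Hypothetical 2x2 table for a case-control study (illustrative numbers only):
# rows = exposed / unexposed, columns = cases / controls.
exposed_cases, exposed_controls = 60, 30
unexposed_cases, unexposed_controls = 40, 70

# Proportion exposed in each group: a higher proportion among cases
# suggests an association between the factor and the disease.
p_cases = exposed_cases / (exposed_cases + unexposed_cases)
p_controls = exposed_controls / (exposed_controls + unexposed_controls)

# The odds ratio (cross-product ratio) is the usual measure of
# association in a case-control design.
odds_ratio = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

print(f"exposed among cases:    {p_cases:.2f}")    # 0.60
print(f"exposed among controls: {p_controls:.2f}") # 0.30
print(f"odds ratio:             {odds_ratio:.2f}") # 3.50
```

Here 60% of cases are exposed versus 30% of controls, so the factor would appear associated with the disease, with an odds ratio of 3.5.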

Selection of participants

Case selection

The location of cases and controls depends on the characteristics of the disease under study. Cases can be identified in hospitals, specialized clinics, or health services (e.g., cases of heart failure). It is also possible to conduct a population-based search for cases, for example through screening of biomarker levels [1-7].

Selection of controls

The search for controls should follow, as a general guideline, the principle that if a control were to become a case, it would be found where the cases are being found. Controls can be recruited in the hospitals where the cases were obtained, in the neighborhoods of the cases, in the same schools, among friends and co-workers of the cases, or from the general population under a probabilistic sampling scheme. Every option has advantages and disadvantages, always with the possibility of biased results. Controls obtained by suggestion of the cases themselves can be very similar to the cases in their behaviors and customs, and if the risk factor studied is related to habits shared among friends, the association will not be detected. The cost and difficulties of obtaining population-based controls can make that approach impractical. In the context of infectious diseases, both subclinical and clinical forms of the disease can be detected. The strategy adopted to select the control group therefore depends on the objective of the study [1-7].

Types of studies

Population-based case-control

In this design, cases and controls are population-based; cases can be detected through population screening over a defined period of time. Hospitals can be used to identify all possible cases in the study area, or a random sample of them. Controls are selected through a probabilistic sample of individuals without the disease belonging to the same geographical area as the cases [1-7].

Nested case-control

This is a design in which cases and controls are selected from a predefined cohort, in which some information about exposures is already available. For each case, controls are randomly selected from the individuals at risk at the time of the case's diagnosis, which provides matching on follow-up time and controls for its confounding effect. Additional information is collected and analyzed at the time the incident cases and controls are selected [1-7].
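The risk-set sampling step of a nested case-control design can be sketched as follows. The cohort, subject identifiers, and follow-up times below are entirely hypothetical, and the eligibility rule is a minimal version of the idea described above (a control must still be in follow-up and not yet diagnosed at the case's diagnosis time):

```python
import random

# Hypothetical cohort: (subject_id, diagnosis_time or None if never a case,
# end of follow-up). All values are illustrative.
cohort = [
    ("A", 5, 5), ("B", None, 10), ("C", None, 8),
    ("D", 7, 7), ("E", None, 10), ("F", None, 6),
]

def risk_set_sample(cohort, n_controls=1, seed=42):
    """For each incident case, randomly sample controls from the subjects
    still at risk (in follow-up, not yet diagnosed) at the case's
    diagnosis time, which matches controls to cases on follow-up time."""
    rng = random.Random(seed)
    cases = [(pid, t) for pid, t, _ in cohort if t is not None]
    matched = {}
    for pid, t in sorted(cases, key=lambda item: item[1]):
        at_risk = [q for q, dt, exit_t in cohort
                   if q != pid and exit_t >= t and (dt is None or dt > t)]
        matched[pid] = rng.sample(at_risk, min(n_controls, len(at_risk)))
    return matched

print(risk_set_sample(cohort))
```

Note that a subject who later becomes a case (such as "D" here) can validly serve as a control for an earlier case, a standard feature of risk-set sampling.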

Case-control studies are thus based on a group of individuals affected by the disease under study, the cases, who are compared with another group of individuals who should be similar to the cases in every respect, differing only in not having the disease, the controls (Figure 1).

Figure 1. Design of a Case-control study

Table 1 shows the checklist needed to make a critical analysis of case-control studies [1-7].

Table 1. Critical appraisal of case control studies

Appraisal questions

The study addresses an appropriate and clearly focused question (the population studied · the risk factors studied · whether the study tried to detect a beneficial or harmful effect).

Did the authors use an appropriate method to answer their question? Is a case-control study an appropriate way of answering the question under the circumstances (is the outcome rare or harmful)? Did it address the study question?

Were there enough subjects (employees, teams, divisions, organizations) in the study to establish that the findings did not occur by chance? Was the selection of cases and controls based on external, objective and validated criteria?

Were both groups comparable at the start of the study? Were objective and unbiased outcome criteria used? Is there data-dredging? Are objective and validated measurement methods used to measure the outcome? If not, was the outcome assessed by someone who was unaware of the group assignment (i.e. was the assessor blinded)?

The cases and controls are taken from comparable populations. The same exclusion criteria are used for both cases and controls. What percentage of each group (cases and controls) participated in the study? Comparison is made between participants and non-participants to establish their similarities or differences.

Cases are clearly defined and differentiated from controls. It is clearly established that controls are non-cases.

Were the cases recruited in an acceptable way?

We are looking for selection bias which might compromise the validity of the findings. Are the cases defined precisely?

Were the cases representative of a defined population (geographically and/or temporally)?

Was there an established reliable system for selecting all the cases? Are they incident or prevalent? Is there something special about the cases?

Is the time frame of the study relevant to disease/exposure? Were there a sufficient number of cases selected? Was there a power calculation?

Measures will have been taken to prevent knowledge of primary exposure influencing case ascertainment.

Were the controls selected in an acceptable way?

Were the controls representative of a defined population (geographically and/or temporally)?

Was there something special about the controls?

Was the non-response high? Could non-respondents be different in any way?

Are they matched, population based or randomly selected? Were there a sufficient number of controls selected?

Exposure status is measured in a standard, valid and reliable way.

Was the exposure accurately measured to minimise bias?

Was the exposure clearly defined and accurately measured?

Did the authors use subjective or objective measurements?

Do the measures truly reflect what they are supposed to measure? (Have they been validated?)

Were the measurement methods similar in the cases and controls?

Did the study incorporate blinding where feasible?

Is the temporal relation correct? (Does the exposure of interest precede the outcome?)

What confounding factors have the authors accounted for? (Genetic · Environmental · Socio-economic).

Have the authors taken account of the potential confounding factors in the design and/or in their analysis? (Restriction in the design, and techniques such as stratified, regression, or sensitivity analysis to correct, control, or adjust for confounding factors).

What are the results of this study? What are the bottom line results? Is the analysis appropriate to the design? How strong is the association between exposure and outcome (look at the odds ratio)? Are the results adjusted for confounding, and might confounding still explain the association? Has adjustment made a big difference to the OR?

The main potential confounders are identified and taken into account in the design and analysis. Confidence intervals are provided. Is the effect size practically relevant? How precise is the estimate of the effect? Were confidence intervals given?

How well was the study done to minimise the risk of bias or confounding?

Taking into account clinical considerations, your evaluation of the methodology used, and the statistical power of the study, do you think there is clear evidence of an association between exposure and outcome?

How precise are the results? How precise is the estimate of risk? Size of the P-value · Size of the confidence intervals · Have the authors considered all the important variables? How was the effect of subjects refusing to participate evaluated?

Do you believe the results? Big effect is hard to ignore! · Can it be due to chance, bias or confounding? · Are the design and methods of this study sufficiently flawed to make the results unreliable? · Consider Bradford Hill's criteria (e.g. time sequence, dose-response gradient, strength, biological plausibility).

Can the results be applied to the local population? The subjects covered in the study could be sufficiently different from your population to cause concern · Your local setting is likely to differ much from that of the study · Can you quantify the local benefits and harms?

Do the results of this study fit with other available evidence? Consider all the available evidence from RCTs, systematic reviews, cohort studies and case-control studies for consistency.

Could there be confounding factors that haven’t been accounted for?

Are the results of this study directly applicable to the patient group targeted by this guideline?

Can the results be applied to your organization?

Are conflicts of interest declared?

Rate the overall methodological quality of the study, using the following as a guide:

High quality (++): Majority of criteria met. Little or no risk of bias.

Acceptable (+): Most criteria met. Some flaws in the study with an associated risk of bias.

Low quality (-): Either most criteria not met, or significant flaws relating to key aspects of study design.

Reject (0): Poor quality study with significant flaws. Wrong study type. Not relevant to guideline.

Using this checklist can improve the evaluation of case-control studies.
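Several checklist items ask the reader to look at the strength of the association (the odds ratio) and the precision of the estimate (its confidence interval). A minimal sketch of that calculation, using Woolf's log-based method and the same illustrative counts as before (hypothetical data), is:

```python
import math

# Hypothetical 2x2 counts: a = exposed cases, b = exposed controls,
# c = unexposed cases, d = unexposed controls (illustrative only).
a, b, c, d = 60, 30, 40, 70

# Odds ratio (cross-product ratio).
or_hat = (a * d) / (b * c)

# Woolf's method: the standard error of log(OR), then exponentiate
# the limits back to the OR scale.
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
z = 1.96  # approximate 95% normal quantile
lo = math.exp(math.log(or_hat) - z * se_log_or)
hi = math.exp(math.log(or_hat) + z * se_log_or)

print(f"OR = {or_hat:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

A confidence interval that excludes 1 (as here) supports an association, but, as the checklist emphasizes, the unadjusted OR may still be explained by confounding, so adjusted estimates should be examined as well.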

References

  1. Guyatt G, Meade MO, Cook DJ, Rennie D (2014) Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. (3rd Edn), McGraw-Hill, New York.
  2. Sackett DL, Richardson WS, Rosenberg W, Haynes RB (2010) Evidence-Based Medicine: How to Practice and Teach EBM. (2nd Edn), Churchill Livingstone.
  3. CASP (2019) Public Health Resource Unit. Critical Appraisal Skills Programme, Institute of Health Science, Oxford.
  4. http://www.sign.ac.uk/methodology/checklists.html
  5. http://media.wix.com/ugd/dded87_63fb65dd4e0548e2bfd0a982295f839e.pdf
  6. http://www.cebma.org/wp-content/uploads/Critical-Appraisal-Questions-for-a-Case-Control-Study.pdf
  7. Sandven I, Abdelnoor M (2010) Critical appraisal of case-control studies of risk factors or etiology of Hyperemesis gravidarum. Arch Gynecol Obstet 282: 1-10. [Crossref]

Editorial Information

Editor-in-Chief

Kohei Akazawa
Niigata University Medical and Dental Hospital, Japan

Article Type

Mini Review

Publication history

Received date: June 13, 2020
Accepted date: August 10, 2020
Published date: August 13, 2020

Copyright

©2020 Roever L. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Citation

Roever L (2020) Critical analysis of case-control studies in cardiovascular research. Health Prim Car 4: doi: 10.15761/HPC.1000195

Corresponding author

Leonardo Roever

MHS, PhD, Department of Clinical Research, Av. Pará, 1720 - Bairro Umuarama, Uberlândia - MG - CEP 38400-902, Brazil.

E-mail : bhuvaneswari.bibleraaj@uhsm.nhs.uk
