PREVENTING OCCUPATIONAL DISEASE

Mini-symposium: Applying methods for bias impact assessment in occupational case-control and cohort studies for cancer hazard identification

Wednesday 8 October 2025, 11.00–12.30 (Flash session)

Moderated by Mary Schubauer-Berigan

Cancer hazard identification framework and the role of bias assessment
Mary K. Schubauer-Berigan (presenter)

Rodolfo Saracci

Abstract

Objective: Describe the framework for cancer hazard identification at the International Agency for Research on Cancer (IARC) and how bias assessment can be used to draw causal inferences from observational epidemiology studies. Methods: The framework for IARC Monographs cancer hazard identification and the role of observational studies of human cancer were reviewed. The history of false-positive and false-negative evaluations (based on re-evaluations) in the Monographs was assessed. Past monographs were examined to select representative agents and summarize approaches Working Groups have used to rule out (or not) bias or confounding as explanations for positive findings. Results: IARC’s hazard identification uses principles of systematic review to conduct synthesis within and integration across evidence streams, with primary attention given to avoidance and management of conflicts of interest. Only 1 of ~100 agents (human papillomavirus type 66) moved from sufficient to limited evidence for cancer in humans upon re-evaluation. Of the 15 agents that moved between limited and inadequate evidence, 14 increased in classification and only 1 (coffee) decreased. In the four selected examples (night shift work, red meat consumption, radiofrequency electromagnetic field radiation, and opium consumption), a variety of approaches, including use of directed acyclic graphs, indirect adjustments, and worst-case adjustments, were used to assess uncontrolled confounding, information bias, and selection bias, but the Monographs lacked a common set of tools for such analyses in the context of hazard identification. Conclusions: The IARC Monographs programme uses a replicable, robust framework for cancer hazard identification, which could benefit from the availability of more tools for bias assessment.

Confounding appraisal in case–control and cohort studies
Kyle Steenland (presenter)

the IARC confounding bias group: David B. Richardson, Sadie Costello, Sander Greenland, Jay S. Kaufman, Kaitlin Kelly-Reif, Kyle Steenland, and Eric J. Tchetgen Tchetgen

Abstract

Objective. Present methods to detect and evaluate confounding bias in epidemiologic studies. Methods. Confounding is bias that arises when the exposure and outcome share a common cause. Confounding can lead to spurious associations away from or toward the null. Potential confounders can be identified a priori by reviewing the literature to see what others have identified as confounders. Confounders must not be confused with intermediate variables, which lie on the pathway between exposure and disease. Results. Once identified, potential confounders can be controlled in the study design by matching (in either case–control or cohort studies), by stratifying on them in the analysis, or by including them in a model in which the outcome is regressed on exposure and potential confounders. Reviewers will need to assess how well potential confounders have been controlled. If uncontrolled confounding is likely, a reviewer should assess the direction and magnitude of the possible confounding bias. This can be done by 1) using negative control outcomes or negative control exposures; 2) triangulation, i.e., considering different types of evidence from studies with different designs; or 3) quantitative bias adjustment, using a priori knowledge of the effect of a confounder on the outcome, as well as the likely prevalence of the confounder among the exposed and non-exposed, to judge the likely effect of the confounder. Conclusion. In some studies, investigators will have measured potential confounders and controlled for them in the design or analysis. Reviewers will want to consider whether residual confounding is likely to be present. If there are potentially important uncontrolled confounders, reviewers will want to assess the likely impact of such confounding.
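The quantitative bias adjustment described in point 3 can be sketched numerically. The function below implements the classic external-adjustment (bias-factor) formula for a single unmeasured binary confounder, in the style attributed to Bross and Axelson; all numerical values are illustrative assumptions, not results from any study.

```python
def confounding_bias_factor(rr_conf, p_conf_exposed, p_conf_unexposed):
    """Expected multiplicative bias in the exposure-outcome relative risk
    from an unmeasured binary confounder.

    rr_conf          -- relative risk of the outcome for the confounder
    p_conf_exposed   -- prevalence of the confounder among the exposed
    p_conf_unexposed -- prevalence of the confounder among the unexposed
    """
    return ((p_conf_exposed * rr_conf + (1 - p_conf_exposed))
            / (p_conf_unexposed * rr_conf + (1 - p_conf_unexposed)))

# Illustrative scenario: observed RR of 1.5; an uncontrolled confounder
# (say, smoking, with RR = 2.0) assumed more prevalent among the
# exposed (50%) than the unexposed (30%).
observed_rr = 1.5
bias = confounding_bias_factor(rr_conf=2.0,
                               p_conf_exposed=0.50,
                               p_conf_unexposed=0.30)
adjusted_rr = observed_rr / bias  # confounding alone cannot explain an RR of 1.5
```

Under these assumptions the bias factor is about 1.15, so the confounder-adjusted relative risk remains elevated (about 1.3), illustrating how a reviewer can judge whether plausible confounding could fully explain a positive finding.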

Information bias: misclassification and mismeasurement of exposure and disease
Leslie Thomas Stayner (presenter)

the IARC information bias group: Leslie Stayner, Neil Pearce, Ellen Aagaard Nøhr, Laura Beane Freeman, Veronika Deffner, Pietro Ferrari, Laurence S. Freedman, Manolis Kogevinas, Hans Kromhout, Sarah Lewis, Richard MacLehose, Marie-Elise Parent, Lorenzo Richiardi, Pamela Shaw, and Roland Wedekind

Abstract

Objective: To present methods for evaluating bias related to misclassification and mismeasurement of exposure. Methods: Potential bias due to misclassification and mismeasurement of exposures and disease is a serious concern in nearly all epidemiologic studies. This presentation will describe methods that may be used to evaluate the direction and magnitude of the bias. Results: Errors in exposure may be differential or non-differential with respect to disease. It is commonly assumed that non-differential errors in exposure bias results towards the null. However, the direction of the bias is determined by the type of exposure metric and the error model. Non-differential errors in binary exposure variables are expected to bias results towards the null, but this is only an expectation, and results may be biased in either direction. Non-differential misclassification across several categories of exposure may result in inflated estimates in intermediate categories and underestimates of risk in the highest category. Errors in continuous exposure measures are expected to bias the results towards the null when they are “classical” and are not expected to bias study findings when they are “Berksonian”. Methods are available for evaluating the potential direction and magnitude of bias from misclassification and mismeasurement errors using validation data. Alternatively, one can conduct sensitivity analyses with assumptions about sensitivity and specificity. These methods can be extended to multidimensional and probabilistic analyses. Conclusion: Misclassification and mismeasurement of exposure are common in epidemiology research. Methods are available to evaluate the direction and magnitude of potential biases resulting from these errors.
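The sensitivity-analysis approach mentioned above can be sketched for a binary exposure. The function below back-calculates the exposed cell count implied by an observed count under assumed nondifferential misclassification, the simple bias-analysis formula popularized by Lash and colleagues; all counts and classification parameters are hypothetical.

```python
def corrected_exposed(n_exposed_obs, n_total, sensitivity, specificity):
    """True exposed count implied by an observed count under
    nondifferential exposure misclassification with the given
    sensitivity and specificity."""
    return ((n_exposed_obs - (1 - specificity) * n_total)
            / (sensitivity + specificity - 1))

# Hypothetical case-control data: 215 of 500 cases and 150 of 500
# controls classified as exposed; assumed sensitivity 0.90 and
# specificity 0.95, the same in cases and controls (nondifferential).
se, sp = 0.90, 0.95
cases_exp = corrected_exposed(215, 500, se, sp)
ctrls_exp = corrected_exposed(150, 500, se, sp)

or_observed = (215 / (500 - 215)) / (150 / (500 - 150))
or_corrected = ((cases_exp / (500 - cases_exp))
                / (ctrls_exp / (500 - ctrls_exp)))
# Here the corrected odds ratio exceeds the observed one, showing how
# nondifferential misclassification attenuated the estimate toward the
# null -- the expectation noted in the abstract.
```

Repeating the calculation over a grid (or distribution) of sensitivity and specificity values yields the multidimensional and probabilistic extensions mentioned in the abstract.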

Selection bias for observational studies in hazard identification
Neil Pearce (presenter)

the IARC selection bias group: Neil Pearce, Laura Beane Freeman, Manolis Kogevinas, Richard MacLehose, Ellen Aagaard Nøhr, Marie-Elise Parent, Lorenzo Richiardi

Abstract

Objective: Present methods to detect and evaluate selection bias in epidemiologic studies. Methods: In epidemiological studies, there is usually a target population to which we wish to draw inferences. The study is then conducted in a source population followed over a clearly defined period of time (the risk period). There is also a (perhaps smaller) study population, i.e. the group of people who actually take part in the study. Selection bias refers to differences between the source population and the study population. In cohort studies, important mechanisms for selection bias include non-response at baseline, loss to follow-up, left truncation, right truncation, and immortal person-time bias. In case–control studies, all of these biases are possible, but bias can also occur due to inappropriate selection of controls. Results: Qualitative methods for assessing the existence, direction, and strength of selection bias include the use of negative control exposures, negative control outcomes, ad hoc re-analyses of published data, comparisons with external data, and the use of multiple control groups. Quantitative methods also exist for sensitivity analyses in which “adjustments” are made for hypothesized selection bias. Conclusions: One of the primary questions posed to reviewers involved in an IARC Monograph is, “Can we reasonably rule out selection bias as an explanation for an observed exposure–cancer association?” This can be particularly difficult to assess, since most published studies have little or no discussion of the potential for selection bias. It is therefore important that authors and editors are encouraged to report the information required for a valid assessment of the potential, direction, and strength of possible selection bias.
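The quantitative “adjustment” approach mentioned above can be sketched with selection (participation) probabilities. Under a simple model for a case–control study, the observed odds ratio equals the source-population odds ratio multiplied by a selection-odds factor; the participation probabilities below are purely hypothetical.

```python
def selection_bias_factor(s_case_exp, s_case_unexp, s_ctrl_exp, s_ctrl_unexp):
    """Multiplicative distortion of the odds ratio induced by
    differential selection probabilities in the four
    case/control x exposed/unexposed cells."""
    return (s_case_exp * s_ctrl_unexp) / (s_case_unexp * s_ctrl_exp)

# Hypothetical scenario: exposed cases participate more readily (90%)
# than unexposed cases (80%), while controls participate at 70%
# regardless of exposure status.
factor = selection_bias_factor(s_case_exp=0.90, s_case_unexp=0.80,
                               s_ctrl_exp=0.70, s_ctrl_unexp=0.70)
or_observed = 1.5
or_adjusted = or_observed / factor  # estimate in the source population
```

Under these assumptions the selection-odds factor is 1.125, so a reported odds ratio of 1.5 would correspond to a source-population odds ratio of about 1.33; varying the assumed probabilities maps out how strong the differential participation would have to be to fully explain an observed association.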

Appraising bias with access to original data
Irina Guseva Canu (presenter)

the IARC Study Reporting Considerations Group: Lin Fritschi, Terry Boyle, Brigid M. Lynch, Scott Weichenthal, and Irina Guseva Canu

Abstract

Objective: To provide researchers with clear guidance on reporting key information to support bias assessment. Methods: Researchers frequently work within the constraints of existing data and fixed study designs, whether analyzing completed studies, exploring new hypotheses with established datasets, or combining data from multiple studies of varying quality. In such contexts, it is crucial to assess potential confounding, information bias (e.g., measurement error or misclassification), and selection bias—either by the study team or by systematic reviewers and hazard assessors. Results: Directed acyclic graphs can help identify potential biases, but appropriate reporting is essential for meaningful assessment. For confounding, important data include the identification of negative control outcomes and exposures related to the confounder, the exposure period, confounder prevalence across groups, and the confounder’s association with the exposure. For information bias, researchers should report the sensitivity and specificity of exposure measurements, the validity of exposure and outcome measurement (with relevant references), interview quality by case or control status, and both unadjusted and adjusted risk estimates. For selection bias, useful information includes definitions and distributions of participants and non-participants (e.g., cases and controls), rates of loss to follow-up in key subgroups, baseline exposure status, and proportions of participants with prevalent exposures at baseline or time zero. When such data are available in publications, quantitative bias analyses become feasible and substantially easier. Conclusion: Researchers with access to individual-level data are encouraged to report the necessary information to enable robust bias assessment. This will support quantitative bias analysis and facilitate the inclusion of study findings in systematic reviews and hazard identification efforts.

Incorporating bias appraisal into evidence synthesis: examples from recent IARC Monographs evaluations
Mary K. Schubauer-Berigan (presenter)

Andrew Kunzmann, Elisa Pasqual

Abstract

Objective: Summarize the framework for incorporating bias appraisal for observational epidemiology studies into evidence synthesis for cancer hazard identification, as recommended in the recently published IARC volume. Describe how bias assessment has been incorporated into recent Monographs evaluations. Methods: The recommended framework for incorporating bias assessments into evidence synthesis focuses on identification of key biases among informative studies, including direction and (where possible) magnitude, assessing their impact on study estimates, and triangulating findings among studies with different sources of bias. Scenarios are described for situations with few vs. many informative studies. We examined use of the bias assessment approaches laid out in the book for confounding, information bias, and selection bias in Monographs Volumes 131–138 (evaluations conducted in 2022–2025). Results: Each of the eight volumes considered has incorporated bias assessment tools into the evidence synthesis. For example, in an evaluation of antimony (Vol. 131), an assessment of bias resulting from co-exposure to arsenic among smelter workers led to a conclusion that arsenic could not entirely explain the observed lung cancer risk. In evaluations of occupational exposure as a firefighter (Vol. 132) and several pharmaceuticals (Vol. 137), meta-analyses stratified by major sources of bias were used in evidence triangulation. In the evaluation of PFOA (Vol. 135), information bias from single-timepoint exposure measurement was quantified using repeated-measurement data. For acrylonitrile (Vol. 136), externally conducted bias adjustment for healthy worker survivor bias was important for the determination of sufficient evidence for lung cancer. Conclusions: Incorporation of bias appraisal tools into evidence synthesis has strengthened recent IARC Monographs cancer hazard identification, providing tools that inform and complement Working Groups’ expert judgment.
Acknowledgements: The framework for incorporating bias assessments into evidence synthesis is from a chapter co-authored by Amy Berrington de González, Nathan DeBono, Alexander Keil, Deborah Lawlor, Ruth Lunn, and David Savitz.