
NRC Workshop: 3rd Review of the IRIS Process Meeting Agenda Now Available

March 27-28, 2013
National Academy of Sciences, Lecture Room
2101 Constitution Ave., N.W.
Washington, DC 20418
PH: 202-334-1578 / FAX: 202-334-2752

Agenda

This meeting will include a workshop on weight of evidence on March 27-28 that is open to the public. However, space is limited, and you will need to register in advance to reserve your seat. Please email your contact information to Craig Philip to register.

*If you are not able to attend the workshop in person, a teleconference line has been secured for guests to listen in. (Please note that presentations/slides will not be available to teleconference guests during the workshop.) Please contact Craig Philip for teleconference information.

FIRST DAY OF WORKSHOP – MARCH 27, 2013

8:00
Welcome to Workshop
Jonathan Samet
Chair, Committee to Review the IRIS Process
Professor and Flora L. Thornton Chair, Department of Preventive Medicine
Keck School of Medicine, University of Southern California

ASSEMBLING THE EVIDENCE

This session will address approaches to identifying evidence on agents being considered in IRIS assessments. It will cover methods for searching the literature and other databases. The session will also consider the complicating issues of publication bias, “the grey literature,” selective publication of model results, and access to primary data. A further major set of topics includes the use of systematic approaches for characterizing study quality, methods for qualitatively and quantitatively assessing heterogeneity across studies, and the use of quantitative synthesis (meta-analysis). An additional topic, potentially relevant to some assessments, is whether all assessments need a comprehensive review of the literature.

8:15
Introduction and Overview of Session
Lisa Bero
Member, Committee to Review the IRIS Process
Professor, Department of Clinical Pharmacy
University of California, San Francisco

8:25
Systematic Review of Animal Studies and Approaches for Characterizing Study Quality
Malcolm MacLeod
Professor of Neurology and Translational Neuroscience
University of Edinburgh

8:40
Systematic Review of Human Studies and Approaches for Characterizing Study Quality
Karen Robinson
Associate Professor
Medicine, Epidemiology, and Health Policy and Management
Johns Hopkins Medical Institutions

8:55
Development, Maintenance, and Use of an Air Pollution Database
Richard Atkinson
Senior Lecturer in Epidemiology
St. George’s University of London

9:10
Panel Discussion with Speakers on Assembling the Evidence
Key Questions

(1) Do IRIS assessments necessarily require full systematic reviews? (2) How might assessment of risk of bias differ between studies of chemicals and studies of other interventions, such as drugs? (3) What are the implications of heterogeneity of findings for risk relationships? (4) What approaches should be used for assembling different types of evidence, such as epidemiological and toxicological? (5) How can mechanistic information be systematically identified?

10:10
Break

MECHANISM AND MODE OF ACTION

There is a pressing need to improve efficiency in the risk-assessment process and to incorporate high-throughput technology in evaluating the potential health effects of chemicals. Several EPA efforts to improve chemical risk assessment are under way. For example, EPA’s high-throughput testing program (ToxCast) is designed to identify chemicals with the greatest potential risk to human health. EPA’s IRIS program is charged with evaluating and integrating these and other types of evidence regarding potential adverse effects of environmental contaminants on human health: mechanistic studies, animal bioassays, and human studies. This panel will discuss current and future use of data on mechanism and mode of action in weight-of-evidence considerations. Specific topics of interest are (a) evaluation of the strength of evidence related to mechanisms, (b) the use and interpretation of high-throughput toxicity screening data, and (c) application of genomic dose-response data to chemical risk assessment. The application of mechanistic data to cancer and noncancer risk assessment within IRIS assessments is of overarching interest.

10:30
Introduction and Overview of Session
David Dorman
Member, Committee to Review the IRIS Process
Professor of Toxicology, College of Veterinary Medicine
North Carolina State University

10:40
Use of High-Throughput and High-Data-Content Technologies in Chemical Risk Assessment
Rusty Thomas
Director, Institute for Chemical Safety Sciences
The Hamner Institutes for Health Sciences

11:00
Panel Discussion of High-Throughput Data for Determining Mechanism or Mode of Action

Panelists: David Schwartz, Chair of Medicine, Professor of Medicine and Immunology, University of Colorado; George Leikauf, Professor of Environmental and Occupational Health, Graduate School of Public Health, University of Pittsburgh; Rusty Thomas, Director, Institute for Chemical Safety Sciences, The Hamner Institutes for Health Sciences; Joe Rodricks, Principal, ENVIRON; and Thomas Hartung, Professor and Doerenkamp-Zbinden Chair for Evidence-based Toxicology and Director, Center for Alternatives to Animal Testing, Johns Hopkins Bloomberg School of Public Health

Key Questions

Topic 1: How will findings from new high-throughput assays be used? Can data from high-throughput assays replace more traditional apical end points that are examined in animal toxicity studies? How can dose-dependent changes in mechanisms identified from high-throughput assays be incorporated into chemical risk assessments? How can pharmacokinetic and similar data inform the interpretation of high-throughput screening assays?

Topic 2: How should mechanistic information be incorporated into IRIS assessments? How can the science be advanced to improve qualitative and quantitative application of mechanistic information? What are the evidence criteria for concluding that a mechanism is established as relevant to an agent and outcome?

12:00 Break for Lunch

INTEGRATION OF DATA

EPA’s IRIS program is charged with evaluating and integrating multiple types of evidence regarding potential effects of environmental contaminants on human health: mechanistic studies, animal bioassays, and human studies. Assessments are often challenging due to sparse evidence, the use of relatively high doses in experimental bioassays, unclear toxicological mechanisms of action, and unmeasured co-exposures and other threats to validity in observational designs. This session will address qualitative and quantitative strategies for integrating evidence of different types in human health risk assessments.

1:00
Introduction and Overview of Session
Scott Bartell
Member, Committee to Review the IRIS Process
Associate Professor, Program in Public Health
University of California, Irvine

1:10
Qualitative and Quantitative Methods for Integrating Evidence
Duncan Thomas
Professor and Verna Richter Chair in Cancer Research, Keck School of Medicine
University of Southern California

1:30
Panel Discussion on Integrating Various Data

Panelists: Steve Goodman, Professor of Medicine and Epidemiology, Stanford University; Kristina Thayer, Director, Office of Health Assessment and Translation, National Toxicology Program; Duncan Thomas, Professor and Verna Richter Chair in Cancer Research, Keck School of Medicine, University of Southern California; Tracey Woodruff, Professor and Director, Program on Reproductive Health and the Environment, University of California, San Francisco; and Lauren Zeise, Deputy Director for Scientific Affairs, Office of Environmental Health Hazard Assessment, California EPA

Key Questions

Topic 1: Hypothetical mechanisms or modes of action have been proposed for some toxicants, largely based on research in animal models. Consequently, it might be difficult to identify or exclude additional mechanisms for toxic effects in humans. Should mechanistic information be used in a qualitative manner, such as in Hill's biological “plausibility” criterion? Can information from observational or clinical studies on intermediate end points related to mechanisms be helpful? How can mechanistic understanding best be reflected in dose-response model selection or parameter estimation?

Topic 2: How should evidence of toxicity from high-dose animal studies be weighed against null findings from one or more epidemiologic studies at lower exposures? What level of epidemiologic evidence would be sufficient to dismiss a toxic effect in animals as irrelevant to humans? How can dose-response relationships be combined from different types of research, for example, animal bioassay and epidemiological?

Topic 3: Should positive epidemiologic studies with weaker designs (for example, ecological studies, or studies with unmeasured known confounders) or with positive but non-significant associations contribute to the weight of evidence, or should they be considered only as hypothesis generating?

2:30 Break

CAUSALITY

IRIS assessments evaluate hazard, specifically whether the chemical of concern is a cause of one or more adverse outcomes. The goal of the causal criteria session is to consider the best methods available for systematically evaluating the evidence from individual studies with respect to whether, and to what degree, a chemical causes a particular health outcome, and for combining the evidence from individual studies into an overall judgment of the likelihood of a causal relationship. Specific goals of the session include (1) considering the utility of the existing causal criteria outlined in the most recent IRIS documents; (2) comparing causal assessment methods used by other national and international organizations, with the potential goals of elaborating new guidelines for assessing the strength of evidence for causation and of achieving some harmonization across agencies; and (3) considering whether the Hill “criteria” are still useful as guides to synthesizing the overall evidence for causation, or whether alternative criteria or guidelines might improve on approaches developed almost half a century ago.

3:00
Introduction and Overview of Session
Richard Scheines
Member, Committee to Review the IRIS Process
Professor and Head of Philosophy Department
Carnegie Mellon University

3:10
The Role of Mechanism in Causal Assessments and the State of Bradford-Hill
Steve Goodman
Professor of Medicine and Epidemiology
Stanford University

3:25
Application of Causal Methods to Assess Effects of Chemical Exposures in Practice
Lauren Zeise
Deputy Director for Scientific Affairs
Office of Environmental Health Hazard Assessment
California EPA

3:40
Comparing Weight-of-Evidence Frameworks for Causation
Lorenz Rhomberg
Principal
Gradient

3:55
Panel Discussion with Speakers on Causal Methods

Key Questions

Should the approach to causal inference within EPA guidelines be revised? Are the long-standing causal criteria still useful, given the range of evidence considered in IRIS assessments? How should causal judgments be made in practice? How can they be most useful for practitioners?

4:55
Opportunity for Public Comment

5:30 Adjourn for First Day of Workshop

SECOND DAY OF WORKSHOP – MARCH 28, 2013

8:00
Welcome to Concluding Session of Workshop
Jonathan Samet
Chair, Committee to Review the IRIS Process
Professor and Flora L. Thornton Chair, Department of Preventive Medicine
Keck School of Medicine, University of Southern California

8:15
Putting the Pieces Together: A Case Study
Tracey Woodruff
Professor and Director
Program on Reproductive Health and the Environment
University of California, San Francisco

8:45
Workshop Discussion: From Start to Finish – Systematic Review and Evidence Integration
Speakers, Panelists, and Committee Members

METHODS FOR CHARACTERIZING AND COMMUNICATING UNCERTAINTY

One of the primary aims of systematic reviews is to characterize and communicate the state of the evidence on a specific topic. Absence of evidence and uncertainties may be characterized using different approaches that range from implicit characterization (qualitative discussion, unexplained variance) to explicit and quantitative characterization. In most cases, communicating uncertainty qualitatively or quantitatively should be an intrinsic element of such efforts. Numerical, verbal, and graphical tools are all widely used to characterize and communicate uncertainty, but with varying success. In this session, methods for characterizing and communicating uncertainties in IRIS assessments will be considered.

9:15
Introduction and Overview of Session
Ann Bostrom
Member, Committee to Review the IRIS Process
Professor, Daniel J. Evans School of Public Affairs
University of Washington

9:25
Characterizing Uncertainty
Jay Kadane
Leonard J. Savage University Professor of Statistics, Emeritus
Carnegie Mellon University

9:45
How the Public Interprets Uncertainty Communication: Some Lessons from the IPCC
David Budescu
Anne Anastasi Professor of Psychometrics and Quantitative Psychology
Fordham University

10:00
Panel Discussion on Uncertainty

Panelists: Tim Lash, Professor, Rollins School of Public Health, Emory; Chris Frey, Distinguished University Professor, North Carolina State University; David Budescu, The Anne Anastasi Professor of Psychometrics and Quantitative Psychology, Fordham University; Jay Kadane, Leonard J. Savage University Professor of Statistics, Emeritus, Carnegie Mellon University; and Thomas Wallsten, Professor, Department of Psychology, University of Maryland

Key Questions

What approaches would enhance the consideration and presentation of uncertainty in IRIS assessment? What attributes of users and uses of IRIS should guide methods for characterizing uncertainties in IRIS assessments? What do we know about tools that are readily available for use in quantifying uncertainty in IRIS?

10:45 Break

USE OF EXPERT JUDGMENT

Expert judgment is used in systematic review processes and throughout IRIS assessments, as discussed in the earlier sessions of this workshop. Expert judgment is also used in risk analysis to fill gaps when data are unavailable. Although expert judgment is an inherent component of IRIS assessments, its role has not been explicitly acknowledged. In this session, the use of expert judgment in IRIS assessments will be considered, identifying the points in the review and assessment process where expert judgment is important. The session will also consider processes for using expert judgment, as discussed in previous workshop sessions and in risk assessment more broadly, including elicitation and Delphi approaches.

11:00
Introduction and Overview of Session
Ann Bostrom
Member, Committee to Review the IRIS Process
Professor, Daniel J. Evans School of Public Affairs
University of Washington

11:15
Panel Discussion on Expert Judgment

Panelists: Tim Lash, Professor, Rollins School of Public Health, Emory; Chris Frey, Distinguished University Professor, North Carolina State University; David Budescu, The Anne Anastasi Professor of Psychometrics and Quantitative Psychology, Fordham University; Jay Kadane, Leonard J. Savage University Professor of Statistics, Emeritus, Carnegie Mellon University; and Thomas Wallsten, Professor, Department of Psychology, University of Maryland

[NOTE: All invited workshop participants are urged to participate in this particular discussion.]

Suggested topics for the panel to address: (a) elicitation techniques, (b) the specificity of expertise and the extent to which interdisciplinary expertise is required or possible, (c) opportunities (when and where) for the value of expert judgments in IRIS, and (d) limitations (including expert bias) on the value of expert judgments in IRIS.

Key Questions

What are best practices for identifying appropriate expertise and eliciting expert judgments, what is the evidence for their effectiveness, and how could they inform the IRIS process? What types of biases in expert judgments might affect IRIS assessments, and how could these be mitigated?

12:15 Opportunity for Public Comment

12:30 Adjourn Workshop

See the website for more information.

New ALTEX: 1/2013


Meetings

Workshop: Lessons Learned, Challenges, and Opportunities: The US Endocrine Disruptor Screening Program
April 23-24, 2013
Research Triangle Park, NC

Developing Microphysiological Systems for Use as Regulatory Tools – Challenges and Opportunities
May 10, 2013
Silver Spring, MD

Advances in In Vitro Cell and Tissue Culture
May 21-22, 2013
Liverpool, England

Joint US Workshop: Scientific Roadmap for the Future of Animal-Free Systemic Toxicity Testing
May 30-31, 2013
College Park, MD

International Congress of Toxicology
June 30-July 4, 2013
Seoul, Korea

EUSAAT's 2013 Congress and Symposium
September 13-18, 2013
Linz, Austria

LATINFARMA 2013: 3Rs Alternatives in Pharmacology, Toxicology and Teaching Workshop
October 21-25, 2013
La Habana, Cuba

In Vitro Medical Device Testing Symposium
December 10-11, 2013
Baltimore, MD

More Meetings...
