
Risk assessment, policy and risk communication

Learning goals

This document provides a comprehensive explanation of the stages involved in the human health risk assessment process (hazard identification and characterization, exposure assessment and risk characterization), while distinguishing it from the complementary fields of risk management and risk communication.

To set the foundation, the document begins by clarifying the critical distinction between hazard and risk. The concepts of toxicology and exposure, the two pillars of the risk assessment process, are analyzed in depth. This includes a detailed explanation of how health-based guidance values (HBGVs), such as the Acceptable Daily Intake (ADI), are derived from points of departure (POD) like the No-Observed-Adverse-Effect Level (NOAEL) and benchmark doses (BMD) obtained in preclinical studies.

Further, the document explores risk management approaches, including the application of tools like the Margin of Exposure (MOE), and emphasizes the importance of understanding risk perception for effective risk communication.

In addition, an overview is provided on how safety is assessed in key fields such as pharmaceuticals and food, offering practical insights into the application of risk assessment principles in these industries.

The document concludes with a comparison between human health risk assessment and ecotoxicological assessment.

Links to videos with lectures, problem-based learning materials (PBLs), and international organizations are provided throughout the document. This document can be used as teaching material by instructors or as support material for learning by students.

INDEX

1. Introduction
1.1 The concept of risk
2. Risk Assessment
2.1. Hazard identification and characterization
2.1.1 Hazard identification
2.1.2 Hazard characterization and health-based guidance values (HBGV)
2.1.2.1 Hazard characterization
2.1.2.2. Health-based guidance values
2.1.2.3 In vitro studies in hazard characterization
2.2. Exposure assessment
2.3. Risk Characterization
2.3.1. High risk populations
2.3.2 Margin of exposure (MOE) approach
3. Risk Management
4. Risk Communication
4.1 Risk perception
5. Risk assessment and safety assessment depending on the market authorization
6. Ecological risk assessment

1. Introduction

Any substance can induce toxic effects under certain circumstances. These circumstances may include the chemical form of the compound, the route of entry, or the dose, among others.

Risk assessment serves to evaluate whether, under specific circumstances, a compound (a xenobiotic) may be toxic to a specific population. Therefore, the ultimate goal of risk assessment is to estimate the probability that a xenobiotic will produce adverse effects under certain exposure conditions.

Risk assessment is a purely scientific stage and is one of the three pillars of risk analysis (Figure 1). The other two pillars correspond to risk management, which involves the application of regulatory and control policies, and risk communication, which should be interactive between scientists, managers, and the general public.

The ultimate goal of human health risk assessment is to protect human health. However, considering the production, presence, use, and disposal of the many chemical compounds that may become environmental pollutants, risks to living organisms in the environment, or to the environment itself, can also be studied. In this case, the process is called environmental risk assessment.

The present teaching material focuses on human health; however, it is important to note that the mere presence of potentially toxic xenobiotics in the environment (air, water, soil, and living organisms) constitutes a threat to human health if no maximum limits are set.

Indeed, the current focus is to move toward the One Health approach, which recognizes the interconnection between human, animal, and environmental health. Although initially focused on zoonotic diseases, it also covers exposure to chemicals.

For more information on the One Health concept you can access the following video that details the collective efforts of five EU agencies to integrate human, animal, plant, and environmental health strategies in response to pressing challenges like climate change, disease outbreaks, and environmental pollution. The five EU agencies involved are the European Medicines Agency (EMA), the European Food Safety Authority (EFSA), the European Centre for Disease Prevention and Control (ECDC), the European Chemicals Agency (ECHA), and the European Environment Agency (EEA).

You can also access the website of WHO.

Figure 1: Framework of Risk Analysis according to WHO: risk assessment, risk management, and risk communication.

1.1 The concept of risk

There are two elements that determine the risk to a specific population: exposure and hazard.

Hazard refers to the toxic properties a specific xenobiotic possesses. For instance, it may exhibit significant liver toxicity, act as a neurotoxic agent, cause tumors, etc.

Exposure is what enables contact between the xenobiotic and humans, and this can occur through different routes, as explained in the following section on exposure.

For risk to occur in a specific population, exposure to an agent capable of causing toxicity under those exposure conditions is necessary. The risk increases as exposure and hazard increase. Thus, risk can be simplified into an equation with two factors: exposure and hazard.

Risk = Hazard x Exposure

If exposure is zero, even if the compound is highly hazardous, the risk (the probability of an adverse effect) will be zero or very low. Conversely, if the compound is almost innocuous (requiring very high concentrations to produce an adverse effect), the risk will also be very low, because toxic exposure levels are unlikely to be reached.
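To make the relationship concrete, below is a toy sketch in Python. The numeric hazard and exposure "scores" are invented for illustration only and have no regulatory meaning; in practice, hazard is characterized through dose-response data and exposure through measured or modelled intake, as the following sections explain.

```python
# Toy illustration of Risk = Hazard x Exposure.
# The "scores" below are hypothetical and purely didactic.

def risk(hazard: float, exposure: float) -> float:
    """Return a qualitative risk score as the product of hazard and exposure."""
    return hazard * exposure

print(risk(hazard=10.0, exposure=0.0))  # very hazardous, but no exposure -> risk 0
print(risk(hazard=0.1, exposure=3.0))   # nearly innocuous compound -> low risk
```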

Let’s see some specific examples to understand the concept better:

  • Carbon monoxide, for example, is a highly toxic gas and thus very hazardous. Its mere presence constitutes a hazard because even low concentrations can be fatal to humans. However, if appropriate safety measures are taken to avoid exposure to this gas, the risk is very low.
  • Amanitin, a toxin produced by the poisonous mushroom Amanita phalloides, is highly hazardous, causing liver toxicity that can lead to death or require a liver transplant. The risk of poisoning in some countries, particularly in areas where people are accustomed to foraging for mushrooms, is not zero, as the lack of knowledge and recklessness of some gatherers cause acute poisonings each year.

2. Risk Assessment

As mentioned, risk assessment involves estimating the likelihood of adverse effects occurring under certain exposure conditions. The process consists of four phases:
  1. Hazard Identification: Study of the adverse effects of a xenobiotic.
  2. Hazard Characterization: Evaluation of the dose-response relationship to determine the relationship between exposure levels and the incidence or severity of adverse effects.
  3. Exposure Assessment: Determining the concentrations/doses to which human populations are exposed.
  4. Risk Characterization: Integrating all previous data to estimate the incidence and severity of adverse effects likely to occur in an exposed population.

2.1. Hazard identification and characterization

2.1.1 Hazard identification

Hazard identification is generally the first step in a risk assessment and is the process used to identify the specific chemical hazard and to determine whether exposure to this chemical has the potential to harm human health.

The approach recommended by the World Health Organization (WHO) is to establish the identity of the chemical of interest and to determine whether the chemical has been considered hazardous by international organizations and, if so, to what degree.

The process for gathering information in support of hazard identification proposed by WHO is illustrated in Figure 2.

Figure 2. Generic road map for hazard identification proposed by WHO.

As an example, this preliminary information can be gathered from:
a) A wide variety of toxicity assays using different experimental systems, which gather information that progressively defines the type of toxicity a particular xenobiotic can cause and the conditions under which it occurs. These methodological tools, as well as the toxicity parameters obtained and used for risk assessment and/or preventive measures, are explained throughout this text, especially in the section on hazard characterization. The advantage of these toxicity studies is that they are conducted under controlled conditions, and their design always allows a dose/concentration-response/effect relationship to be established. The disadvantage is that they can only be conducted in animal models or in in vitro or ex vivo experimental systems, and their extrapolation to humans is necessary.
b) Data on humans can only be obtained from accidental poisonings—typically acute and with unknown doses—from clinical trials during drug development, or from epidemiological studies where both exposure and toxicity are estimated, often with a high degree of uncertainty.
c) Computational tools have also been developed to predict the toxicity of new compounds based on known data from other substances. This is a promising field in development, but for now, it is mainly used as an initial screening tool as it does not replace a complete toxicological characterization through toxicity assays, which are required by regulatory agencies in various fields.


2.1.2 Hazard characterization and health-based guidance values (HBGV)

2.1.2.1 Hazard characterization

Hazard characterization is the qualitative and/or quantitative assessment of the potential of a chemical to cause adverse health effects as a result of exposure.

According to WHO “an adverse effect is defined as a change in the morphology, physiology, growth, development, reproduction or lifespan of an organism, system or (sub)population (or their progeny) that results in an impairment of functional capacity, an impairment of the capacity to compensate for additional stress or an increase in susceptibility to other influences. To discriminate between adverse and non-adverse effects, consideration should be given to whether the observed effect is an adaptive or trivial response, transient or reversible, of minor magnitude or frequency, a specific response of an organ or system, or secondary to general toxicity”.

Hazard characterization is therefore the toxicological evaluation of the compound.

In experimental toxicology, in vitro and in vivo models can be used for hazard characterization. Whether using in vivo or in vitro models, deciding the doses or concentrations to be tested and preparing them are key aspects of the experimental design. This allows an unequivocal relationship to be established between a given dose or concentration and a toxic effect. In these situations, exposure is a controlled factor with no uncertainties. Toxicology studies typically involve administering a known, chosen dose of a pure substance to an animal through a route of administration selected by the investigator. The exposure level is the chosen dose, which is the independent variable. What is more difficult to quantify is the concentration of xenobiotic reaching each organ, i.e., the internal dose, which is ultimately the most important. In all cases, a dose-response assessment must be conducted, so at least three different doses or concentrations need to be tested in each experiment.

In an in vitro experimental system, the situation is even simpler. Cells are exposed to a specific concentration of xenobiotic in the culture medium in which they are grown. The exposure concentrations are also determined during the experimental design. Even in this case, the amount of xenobiotic absorbed by the cells can vary depending on the concentration, the number of cells in the culture, and the cells’ absorption system, which may involve simple diffusion, active transport, or pinocytosis. For more information on cell culture techniques, you can visit the following project website (RALDE), where you can find an online practical and a game regarding cell culture techniques.

Toxicological studies include a battery of tests conducted in in vitro, in vivo, and increasingly in silico models, which allow for the toxicological characterization of new compounds. In general, toxicity assessment methods can be classified into two broad categories: general toxicity tests and specific toxicity tests (Table 1).

The first category, general toxicity studies, encompasses all tests designed to evaluate any type of effect on the functional or morphological aspects of various systems or organs. Within this category, tests differ in terms of the total duration of the study and the depth with which general toxicity in animals is critically evaluated.

The second category of tests includes those specifically designed to evaluate particular types of toxicity in detail. Repeated-dose toxicity tests do not detect all forms of toxicity but can reveal some toxic effects and highlight the need for more specific studies. Additionally, the intended use of the compound in question may require assessing its safety concerning certain specific forms of toxicity, such as skin and eye irritation tests for cosmetics.

Table 1. Classification and objectives of toxicological studies/tests.

For more information, you can access the following lecture in the Advanced Courses of the ToxLearn4EU project.

As mentioned, the dose-response evaluation is obtained from toxicity studies, but these face the challenge of extrapolation from in vitro or in vivo studies to humans. Three major unknowns arise when predicting a xenobiotic’s behavior in humans based on data obtained from animal toxicity studies:

1. Does it have the same effect at equivalent doses in humans as in animals (e.g., rats)? Two key aspects must be considered: a) the absorption, distribution, metabolism, and excretion (ADME) processes of the compound should be similar in both species; b) the mechanism of action in the animal species should be relevant to humans. Indeed, three major phases are distinguished in toxicological phenomena: exposure, toxicokinetics, and toxicodynamics. Absorption is the process through which the toxicant enters the bloodstream and can be distributed throughout the body; together with distribution, metabolism, and excretion, it constitutes the transit of the xenobiotic through the body, or toxicokinetics. Toxicodynamics refers to the set of reactions that take place from the moment the toxicant comes into contact with its biological receptor, which may be specific or non-specific; this is mainly related to the mechanism of action of the xenobiotic.

2. What happens at low doses? What about doses that humans are actually exposed to? Toxicity studies, by definition, must reach toxic doses, which are sometimes much higher than what is encountered in real human exposure situations.
3. What about more sensitive populations, such as infants, pregnant women, or the elderly? Toxicity studies do not account for these aspects. However, risk assessments must consider more vulnerable populations.

Epidemiological studies seek to obtain both exposure and toxicity data in humans and establish causal relationships using statistical tools that help correlate dose and response. However, epidemiological studies require a population to be exposed to the compound as it naturally occurs (not deliberately exposed as in clinical trials for example).

2.1.2.2 Health-based guidance values

In all cases, the final aim of toxicological characterization is to carry out the aforementioned dose-response assessment, focused on identifying the Point of Departure (POD) for health effects in critical studies. The POD is therefore the point on a dose-response curve, established from experimental data, that is used to derive a safe level.

Some of the most relevant PODs are the No Observed Adverse Effect Level (NOAEL), the Lowest Observed Adverse Effect Level (LOAEL), and the benchmark dose (BMD). These are critical concepts used to evaluate the safety and potential risks of chemical substances and are obtained from toxicological experiments.

A) No Observed Adverse Effect Level (NOAEL) and Lowest Observed Adverse Effect Level (LOAEL)

The NOAEL is defined as the highest dose or exposure level of a substance at which no adverse effects are observed in the exposed population, under defined study conditions. It is established through the toxicological studies mentioned previously, in which various dose levels are assessed. The NOAEL is an essential reference POD for determining safe exposure limits in humans, i.e., health-based guidance values such as the Acceptable Daily Intake (ADI) or Reference Dose (RfD).

The LOAEL refers to the lowest dose or exposure level at which adverse effects are observed in the exposed population. When a NOAEL cannot be determined, the LOAEL is often used as the starting point for risk assessment.

A key distinction between them is that the NOAEL represents the threshold below which adverse effects are not detected, while the LOAEL identifies the minimum dose at which adverse effects begin to manifest.

Both parameters are fundamental for establishing safety margins and regulatory guidelines in the assessment of chemicals, drugs, and environmental toxins.

Both parameters are established through carefully designed toxicological studies aimed at identifying the highest dose of a substance that does not produce any significant adverse effects under the conditions of the study. Normally, the NOAEL/LOAEL values used for deriving guidance values come from general toxicity studies (repeated-dose studies); however, in certain circumstances they can also be derived from specific toxicity studies (genotoxicity, carcinogenicity) or from epidemiological studies. The process involves several key steps (Table 2).

Table 2. Steps for NOAEL/LOAEL identification.

It is important to note that the NOAEL (or LOAEL) is derived for each individual experiment.

Below you can find an example of a NOAEL derived from a chronic toxicity study of a test compound:

  • Doses Administered: 0 mg/kg (control), 10 mg/kg, 50 mg/kg, and 100 mg/kg per day.
  • Observations: Liver toxicity was observed at doses of 50 mg/kg and higher. No adverse effects were observed at 10 mg/kg.
  • NOAEL: The NOAEL is determined to be 10 mg/kg (the LOAEL is therefore 50 mg/kg; however, the LOAEL is not used when a NOAEL is available).
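The same bookkeeping can be expressed in a short script. This is a minimal sketch using the hypothetical dose groups from the example above; in real studies, deciding whether an effect at a given dose is adverse and treatment-related is a statistical and biological judgement, not a simple lookup.

```python
# Hypothetical dose groups (mg/kg bw/day) from the chronic study above,
# mapped to whether a treatment-related adverse effect was observed.
study = {0: False, 10: False, 50: True, 100: True}

# NOAEL: highest dose at which no adverse effect was observed.
noael = max(dose for dose, adverse in study.items() if not adverse)

# LOAEL: lowest dose at which adverse effects were observed.
loael = min(dose for dose, adverse in study.items() if adverse)

print(f"NOAEL = {noael} mg/kg bw/day, LOAEL = {loael} mg/kg bw/day")
# NOAEL = 10 mg/kg bw/day, LOAEL = 50 mg/kg bw/day
```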

B) Benchmark Dose Method

The benchmark dose method is a procedure used to obtain values based on a starting or reference dose (“benchmark”), without using NOAEL values. It involves fitting all available experimental data to a dose-response curve with specific confidence intervals. This is a very labor-intensive process that requires substantial computational tools. The aim is to find the dose-response curve model, mathematically characterized, that best fits the available data. In this way, unlike NOAEL-based approaches, a dose-response curve is obtained considering all or most of the experimental results. Additionally, the variability associated with the dose-response curve is quantified through confidence intervals (Figure 3).

A specific response level (benchmark response level) is then defined, typically 10%. Finally, the dose corresponding to the lower confidence limit for that response level is calculated, which becomes the benchmark dose.

For example, consider a study examining the effect of a pesticide on liver toxicity. If a 10% increase in transaminases over the background level is chosen as the BMR:

  1. Dose-response data are collected across several dose levels and/or studies.
  2. Models are fit to the data, and the dose corresponding to the 10% increase is determined as the BMD.
The advantages of this method include: a) introducing a measure of variability, such as confidence intervals, into the dose-response curve, which, in turn, is obtained by considering many experiments rather than just one; b) calculating the benchmark dose within the range of observable experimental responses, without the need to extrapolate to lower doses.

Figure 3. A hypothetical dose-response association from an animal study with 5 dose groups. The lower confidence limit on the BMD (the BMDL, or BMCL for concentrations) is selected as the POD. Other potential PODs are the NOAEL or the LOAEL. The figure is adapted from CDC (NIOSH).
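To make the procedure concrete, the sketch below fits a single dose-response model to an invented continuous dataset and reads off the dose producing a 10% increase over background (the BMD10), with a crude parametric bootstrap standing in for the confidence-limit calculation that yields the BMDL. The data, the choice of a Hill model, and the bootstrap shortcut are all assumptions for illustration; regulatory BMD analyses rely on dedicated software such as US EPA's BMDS or the PROAST package, which fit and compare multiple models and compute proper confidence limits.

```python
# Illustrative BMD10 calculation on invented data: a continuous endpoint
# (e.g., serum transaminase activity) measured in six dose groups.
import numpy as np
from scipy.optimize import curve_fit

doses = np.array([0.0, 1.0, 3.0, 10.0, 30.0, 100.0])      # mg/kg bw/day
response = np.array([40.0, 41.0, 44.0, 52.0, 68.0, 80.0])  # group means

def hill(d, background, vmax, kd, n):
    """Four-parameter Hill model, one of several candidate models."""
    return background + vmax * d**n / (kd**n + d**n)

params, cov = curve_fit(hill, doses, response,
                        p0=[40.0, 45.0, 20.0, 1.0], maxfev=10000)

# Benchmark response (BMR): a 10% increase over the fitted background.
# Invert the fitted curve on a dense dose grid to find the BMD.
grid = np.linspace(0.01, doses.max(), 20000)
bmd = grid[np.argmax(hill(grid, *params) >= params[0] * 1.10)]

# Crude parametric bootstrap for the lower bound (BMDL): resample the fitted
# parameters and take the 5th percentile of the resulting BMD distribution.
rng = np.random.default_rng(0)
bmds = []
for p in rng.multivariate_normal(params, cov, size=1000):
    with np.errstate(all="ignore"):
        curve = hill(grid, *p)
    hits = np.nonzero(np.isfinite(curve) & (curve >= p[0] * 1.10))[0]
    if hits.size:
        bmds.append(grid[hits[0]])

print(f"BMD10  ~ {bmd:.1f} mg/kg bw/day")
print(f"BMDL10 ~ {np.percentile(bmds, 5):.1f} mg/kg bw/day")
```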

C) Derivation of Health-Based Guidance Values

When only animal data is available and it is necessary to protect the population from possible adverse effects, preventive measures are taken to minimize the risk. This involves recognizing the typical pattern of toxic effects of the compound, assessing the relevance of that toxic potential in humans, and implementing measures to reduce exposure—either by prohibiting or restricting the use of the compound or by controlling its presence in food, for example (see the section on risk management and communication).

There are at least two situations where such measures are advisable, due to the impossibility of conducting risk assessments based on human data: a) chemical products for which it is very unlikely to have human data available, and b) toxic effects that require extensive, long-term human exposure to gather data, such as carcinogenic or reproductive effects.

As mentioned before, NOAEL (or LOAEL) values are derived for each individual experiment carried out for a given chemical. Thus, when multiple toxicological studies are available, the process of selecting the most relevant NOAEL for human health risk assessment and deriving a health-based guidance value (HBGV) follows a structured and systematic approach (see Table 3). Normally, the Weight of Evidence (WoE) approach is followed, taking into account consistency across studies (repeated findings in multiple studies and species), study quality (higher weight is given to robust, well-designed studies), and outliers or conflicting data (which are carefully evaluated in the context of the overall evidence).

This ensures that the assessment is robust, evidence-based, and protective of public health.

Table 3. Steps in the selection of the most relevant NOAEL/LOAEL.

Once the POD is determined (the NOAEL of the critical study or the BMD), the HBGV can be calculated.

Some of the most widely used HBGVs are the Reference Dose (RfD) and the Acceptable/Tolerable Daily Intake (ADI/TDI). The term Reference Concentration (RfC) is used for compounds to which humans are exposed via inhalation. All of these terms define the daily intake of a chemical compound that, over a lifetime, poses no appreciable health risk, based on the data available at the time.

Normally, the ADI is used for substances that need pre-market authorization, such as additives and pesticides. In the case of contaminants, the Tolerable Daily Intake (TDI) is the correct term. These two terms are generally used by the World Health Organization (WHO), the Food and Agriculture Organization (FAO), and the European Food Safety Authority (EFSA). The term RfD is preferred by the U.S. Environmental Protection Agency (EPA).

Reference doses and ADIs are calculated from NOAEL or BMD values by dividing them by safety factors, also known as uncertainty factors, and modifying factors, which are not always applicable.

ADI or RfD = (NOAEL or BMD) / uncertainty factors


Uncertainty factors are used to address potential uncontrolled variations that lead to uncertainties in risk characterization.

Uncertainty factors generally range from 1 to 10; by default, a value of 10 is applied. A variable number of these factors may be used for different purposes (a worked example follows the list):

  1. extrapolation from animal data to humans,
  2. accounting for a more sensitive population group,
  3. extrapolating results from short-term studies to chronic exposure,
  4. when only LOAEL data are available instead of a NOAEL,
  5. to generally mitigate any experimental limitations, and
  6. when knowledge of the compound’s mechanism of action and toxicokinetics, or the relevance of the animal response to human health, is not clear.
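Putting the formula and the factors together, below is a minimal sketch that reuses the hypothetical chronic-study NOAEL of 10 mg/kg bw/day from the earlier example and applies the two default 10-fold factors (animal-to-human extrapolation and inter-individual variability); the resulting value of 0.1 mg/kg bw/day is illustrative only.

```python
# Minimal sketch of: ADI or RfD = (NOAEL or BMD) / uncertainty factors.
import math

def derive_hbgv(pod, uncertainty_factors):
    """Divide the point of departure (NOAEL, LOAEL or BMDL, in mg/kg bw/day)
    by the product of all applicable uncertainty factors."""
    return pod / math.prod(uncertainty_factors)

# 10 for animal-to-human extrapolation, 10 for inter-individual variability.
adi = derive_hbgv(10.0, [10.0, 10.0])
print(f"ADI = {adi} mg/kg bw/day")  # -> 0.1 mg/kg bw/day
```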