One-hundred years of toxicological risk assessment
A century of extraordinary progress in the worlds of pure and applied chemistry was, with the publication in 1962 of Rachel Carson’s Silent Spring, suddenly called into question. Carson was by no means the first to raise alarms about the new kind of environment chemists had been creating since the introduction of structural theory in the late 1850s and the new science of chemical synthesis that quickly followed. But she was the first to do so in a way that resonated deeply with the public, and that powerfully activated their political representatives.
The decade of political activism following the publication of Silent Spring led, in the United States, to the enactment of major laws giving authority to newly established regulatory agencies to set legally enforceable limits on human exposures to chemicals in the workplace and contaminating the general environment (1,2). These laws added to a host of laws that had been enacted in the pre-Silent Spring era, requiring similar controls on pesticides (the principal but not the only subject of Carson's concerns), and regulation of chemicals used in foods, drugs, and consumer products (2). The requirements set out in the new environmental and occupational health laws of the 1970s and 1980s, and increasing public pressure on food, drug, consumer product, and pesticide regulators, forced attention on a problem that public health and regulatory officials had been struggling with since passage of the first federal law mandating protection of people from unsafe exposures to the products of the new chemical industry: the Pure Food and Drug Act of 1906 (2).
The problem of identifying safe levels of human exposure to chemicals exhibiting toxicity (which virtually all chemicals will do at sufficiently high doses) could, after the 1970s, no longer be a matter of judgments by experts made out of public view, as it had been for the previous half-century. In the rest of this paper I will sketch out the history of our struggle with both the concept of safety, and with finding scientifically acceptable ways to apply the concept to ensure public health protection.
That chemicals could cause harm to health had, by the opening of the 20th century, been well established (3). Since at least the 16th century, following the famous insight of the Swiss physician Paracelsus that “all substances are poisons: only the dose separates a poison from a remedy”, the importance of dose had been recognized. But until at least the 1950s little attention had been paid to the problem of identifying doses that were likely not to produce toxicity in a large and highly diverse human population. A number of efforts by public health officials during the first half of the 20th century led to the imposition of exposure limits on newly discovered contaminants in drinking water and foods, and to recommendations for workplace exposure limits (4). Significant air pollution problems emerged in the 1940s and 1950s in heavily industrialized urban areas, leading to widespread recognition of the need to control certain air emissions so that human health would not be put at risk. But none of these efforts and concerns was, until the 1950s, based on any explicitly described methodology for establishing safe levels of exposure to chemicals.
By the 1950s, the problem was becoming significant, because the chemical and allied industries had begun to expand at a great pace and because air pollution related to the use of fossil fuels for energy production was beginning to skyrocket.
New laws relating to substances added to foods were passed in the 1950s, and these laws required that the Food and Drug Administration (FDA) decide whether substances added to food could be considered safe. Two leading FDA scientists, Arnold Lehman and O. Garth Fitzhugh, published a procedure that the agency would follow to make such a determination (5). The two scientists recognized that toxicity information from animal studies (the use of animals for such purposes had begun to be studied in the early years of the 20th century) would often have to be the basis for establishing safe doses, because human data were either not available (as in the case of a newly introduced substance) or difficult to obtain. The FDA scientists offered two important insights. First, experimental animals could be expected to be less susceptible to a chemical’s toxic effects than would some members of a highly diverse human population. Second, variability in response to a toxic insult was likely to be large in the human population. Lehman and Fitzhugh introduced two “safety factors”, each of magnitude 10, to account for these two important limitations in what could be learned from experimental toxicity studies. Thus came in the mid-1950s the Acceptable Daily Intake (ADI), established at 1/100 the maximum dose at which no adverse effect had been observed in animal testing (3). The ADI would become the safe dose for humans. If a food additive was to be present in food at levels that, when the food was consumed, would lead to human intakes less than the ADI established for that additive, the additive could be considered safe and approved for use.
Since then, much work has been done to refine the method used to derive ADIs, and the Environmental Protection Agency (EPA) has chosen to use the term Reference Dose (RfD) instead of ADI. Nevertheless, the approach laid out by Lehman and Fitzhugh has remained in place, and is still widely used to establish limits on human exposures to chemicals (6).
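The Lehman–Fitzhugh arithmetic is simple enough to sketch in a few lines. The Python snippet below is a minimal illustration only (the function and parameter names are my own, not regulatory terminology): an animal no-observed-adverse-effect level is divided by the two tenfold safety factors to yield an ADI/RfD-style value.

```python
def acceptable_daily_intake(noael_mg_per_kg_day: float,
                            interspecies_factor: float = 10.0,
                            intraspecies_factor: float = 10.0) -> float:
    """Derive an ADI/RfD-style value from an animal no-observed-adverse-effect
    level (NOAEL), applying one tenfold factor for the possibility that humans
    are more susceptible than the test species, and a second tenfold factor
    for variability within the human population."""
    return noael_mg_per_kg_day / (interspecies_factor * intraspecies_factor)

# Hypothetical example: a NOAEL of 50 mg/kg body weight/day in the most
# sensitive animal study gives an ADI of 0.5 mg/kg body weight/day.
print(acceptable_daily_intake(50.0))  # 0.5
```

In practice, regulators have since added further factors (for example, for an incomplete database or a short study duration), but the core division by 100 is as shown.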
The approach is, of course, based on the notion, implicit in the Paracelsus dictum, that certain “threshold” doses must be exceeded before a chemical’s harmful properties can manifest themselves. The ADI and RfD are, in effect, assumed to be approximate threshold doses for large and highly diverse human populations. But, given the way they are derived, it is not at all clear that every member of the population is protected at exposures that do not exceed the ADI or RfD. Put another way, this approach provides no information regarding the fraction of the population that might be affected ‒ might be at risk ‒ at doses greater or less than the ADI or RfD. Thus, although the ADI/RfD approach is simple and has found much acceptance for decision-making, it is not truly a risk-based approach, that is, one based on an analysis of risk and a policy decision regarding the magnitude of risk that can be tolerated in different circumstances (7).
Risk analysis and risk-based decision-making were not introduced into the area of chemical toxicity until the 1970s, and until recently had little effect on the safety assessment approaches we have been describing.
During the 1970s several authors began to emphasize the concept that safety is not an absolute (7). If, for example, safety is defined as a condition of no risk, then it is, except in trivial circumstances, an empty concept, because such a condition can never be known. Those who espouse the use of the ADI or RfD for decision-making emphasize that it does not mark a sharp line between safe and unsafe exposures, yet it is often used as if it did. New thinking on this matter has emerged and will be mentioned near the end of this paper.
The emergence of risk-based decision-making during the 1970s was in part prompted by the problem of carcinogenic chemicals (1). At least as early as the 1940s, many scientists and public-health officials had concluded that the behaviour of carcinogens was radically different from that of chemicals causing other forms of toxicity. The threshold concept was said not to apply to these especially dangerous agents, and “no safe level” was the official view of how they should be considered for decision-making purposes. Lehman and Fitzhugh asserted that ADIs should not be developed for carcinogens (5). During the development of new food laws during the 1950s, the U.S. Congress included the Delaney Amendment, requiring a strict prohibition on the intentional addition to food of any substance shown to be a carcinogen (2).
By the mid-1970s it had become clear that a relatively large number of chemical contaminants of the environment had carcinogenic properties and that some means had to be found to limit human exposures to them, because, unlike food additives or other intentionally introduced substances, there was no direct or simple way to eliminate human exposures completely. A number of proposals from the scientific literature led to the adoption in the mid-1970s, by both the EPA and FDA, of methods to estimate low-dose risks for carcinogens. These methods assumed the absence of identifiable threshold doses for carcinogens and a linear relationship between dose and risk (3). Regulation would involve decisions regarding the magnitude of carcinogenic risk to be tolerated in different circumstances. A completely new, risk-based approach to safety decisions was born. Safety was now seen, as it properly should be, not as a matter of pure science, but as a policy choice involving decisions about risk toleration.
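The linear, no-threshold assumption described above can likewise be sketched. In this hypothetical illustration (the slope-factor value and function names are invented for the example, not taken from any agency assessment), excess lifetime risk is taken as proportional to dose; inverting the same relationship gives the dose corresponding to a tolerated risk level, which is how such models support regulatory limit-setting.

```python
def lifetime_excess_risk(dose_mg_per_kg_day: float,
                         slope_factor: float) -> float:
    """Linear no-threshold model: excess lifetime cancer risk is assumed
    proportional to dose, with no dose below which the risk is zero."""
    return slope_factor * dose_mg_per_kg_day

def risk_specific_dose(target_risk: float, slope_factor: float) -> float:
    """Invert the linear model: the dose corresponding to a tolerated
    risk level (e.g. one in a million over a lifetime)."""
    return target_risk / slope_factor

# Hypothetical slope factor of 0.05 per (mg/kg/day):
print(lifetime_excess_risk(0.001, 0.05))   # 5e-05
print(risk_specific_dose(1e-6, 0.05))      # 2e-05 mg/kg/day
```

The second function is the essence of the quantitative definition of safety discussed later in this paper: a dose associated with a specified, very small, level of risk.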
These ideas and their regulatory applications were controversial, and the U.S. Congress asked the U.S. National Academy of Sciences (NAS) to review the new risk-based approaches. A committee of the NAS issued, in 1983, a report (Risk Assessment in the Federal Government: Managing the Process – the “Red Book”) that endorsed the new approach and that offered perspective and guidance on risk assessment and management (1). The report became highly influential and set the stage for the following 30 years of chemical regulation. The power of the report's recommendations remains in effect today.
From the time of the issuance of the Red Book to the present, the science of risk assessment has taken on greater rigor, including the incorporation of methods based on knowledge of the biological processes underlying the production of carcinogenicity or other forms of toxicity. Its use has expanded from chemical hazards to other types, including microbial pathogens (8) and nutrients (9). Greater clarity regarding the risk assessment/management distinction has also emerged.
Yet, controversy over both science and policy has accompanied these expanded and improved approaches, in part related to the complexities of the science underlying risk assessment, and in part related to the question of how risk assessments should inform decision-making. Another study by the National Research Council, issued in 2009 under the title Science and Decisions: Advancing Risk Assessment (10), has taken on these questions. The report is comprehensive with respect to both the science and applications of risk assessment and points the way to the future. It offers many recommendations and a new framework for decision-making that, if implemented, would do much to increase the utility of risk assessment.
The report also returns to the matter of safety assessments, as they have been traditionally undertaken. The authors show that a unified approach to risk assessment, based on a chemical’s mode of toxic or carcinogenic action, can be applied. Quantifying risks for all toxicity endpoints is feasible, and, if implemented, can improve decisions. Within the new decision framework, the magnitude of risk reduction achieved with different approaches to managing risks can be understood for all forms of toxicity. With such understanding comes the ability to identify the most cost-effective approach to risk management. Such understanding cannot be achieved using RfDs as the guide. Furthermore, “safety” can be defined quantitatively, as the dose associated with a specified level of (presumably very small) risk. Whether and to what extent regulators will move toward such heavier reliance on quantitative assessments for public health decisions is unknown, but their decisions will be improved if they do.
To close where we began: perhaps Rachel Carson's most significant contribution to the whole enterprise of risk analysis relates to the fact that she was a superb communicator, and forced public attention on what is now universally acknowledged as a necessary component of risk-based decision-making. Decisions to protect public health can no longer be taken without full disclosure of their bases. All public health officials and manufacturers and users of chemicals have, thanks in great part to the influence of Silent Spring, the important responsibility of communicating to the public why their chemical risk management decisions ensure public health ‒ and also environmental ‒ protection.
References
1. National Research Council (1983), Risk Assessment in the Federal Government: Managing the Process, National Academy Press, Washington, D.C.
2. Merrill, R. (2001), Regulatory Toxicology, in Casarett & Doull's Toxicology, 6th edn., McGraw Hill, New York.
3. Rodricks, J. V. (2007), Calculated Risks. The Toxicity and Human Health Risks of Chemicals in our Environment, 2nd edn., Cambridge University Press, Cambridge, UK.
4. National Research Council (1977), Drinking Water and Health, Vol. 1, National Academy Press, Washington, D.C.
5. Lehman, A. J., et al. (1955), Procedures for the appraisal of the toxicity of chemicals in foods, drugs, and cosmetics. Food Drug Cosmetic Law Journal, 10, 679-748.
6. Dourson, M. and Patterson, J. (2003), A 20-Year Perspective on the Development of Non-Cancer Risk Assessment Methods, Journal of Human and Ecological Risk Assessment, 9, 1239-1252.
7. Lowrance, W. W. (1976), Of Acceptable Risk: Science and the Determination of Safety, William Kaufmann, Inc., Los Altos, CA.
8. World Health Organization (2002), Risk assessments for Salmonella in eggs and broiler chickens, Geneva.
9. Rodricks, J. V. (2003), Approaches to risk assessment for macronutrients and amino acids. Journal of Nutrition, 133, 6 (Suppl.).
10. National Research Council (2009), Science and Decisions: Advancing Risk Assessment, National Academies Press, Washington, D.C.
This article is based on a presentation by Dr Rodricks at the RSC ECG & Toxicology Group meeting, The legacy of Rachel Carson, held at Burlington House on 2nd October 2012.