Technical Report 126

Areas of Opportunity for Exposure Science in the next 2-5 years

Exposure science is a vibrant and continually developing area of research, driven in part by the number and volume of chemicals being produced. Advances in analytical capabilities, increased public awareness of and access to information on chemical hazard and exposure, and developments in the consumer market also warrant continuous advancement of research in exposure assessment. With respect to cosmetic and personal care products in Europe, the regulatory bans on animal testing have increased the importance of exposure, as it is increasingly recognised that exposure and toxicokinetics are far more discriminating determinants of risk than is hazard. Scientists routinely work to develop in vitro, in silico and modelling approaches as alternatives to more traditional toxicology methods for use in safety assessments. Good exposure modelling is essential so that in vitro doses can be translated to realistic exposure scenarios in consumers and, in reverse, so that safety assessments can proceed under the new paradigm.

Attempts have been made by the scientific community to prioritise chemicals based on in vitro high-throughput screening assays and to focus research on those substances that need to be further tested for potential toxicity (http://www2.epa.gov/chemical-research/toxicity-forecasting). Alongside chemical hazard characterisation, the development of companion methods for high-throughput exposure assessment is also receiving much attention (http://www2.epa.gov/chemical-research/rapid-chemical-exposure-and-dose-research). Models like SHEDS-HT, developed and used under the ExpoCast framework by the U.S. EPA, provide estimates of multi-source and multi-route exposure for thousands of chemicals. Combining large-scale exposure estimates with high-throughput hazard predictions provides the capability for rapid, risk-based screening to identify the chemicals most in need of additional testing. However, improving modelling tools so that they deliver realistic exposure assessments is an ongoing process of identifying and addressing the remaining challenges.
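As an illustration only, the minimal sketch below (Python) ranks chemicals by a bioactivity:exposure ratio, i.e. the ratio of an in vitro-derived point of departure (expressed as an oral equivalent dose) to a predicted exposure; the smallest ratios indicate the chemicals most in need of further testing. All chemical names and values are hypothetical placeholders, not ToxCast or ExpoCast outputs.

# Minimal sketch of risk-based screening: chemicals are ranked by the ratio of a
# high-throughput hazard estimate to a high-throughput exposure estimate.
# All names and values are hypothetical placeholders.

chemicals = [
    # (name, in vitro-derived point of departure [mg/kg bw/day], predicted exposure [mg/kg bw/day])
    ("chemical_A", 10.0, 1e-3),
    ("chemical_B", 0.5, 2e-4),
    ("chemical_C", 50.0, 5e-6),
]

def bioactivity_exposure_ratio(pod, exposure):
    """Smaller ratios indicate a narrower margin and hence a higher testing priority."""
    return pod / exposure

ranked = sorted(chemicals, key=lambda c: bioactivity_exposure_ratio(c[1], c[2]))

for name, pod, exposure in ranked:
    print(f"{name}: bioactivity:exposure ratio = {bioactivity_exposure_ratio(pod, exposure):.1e}")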

For refined/sophisticated exposure modelling, detailed knowledge about the actual sources of chemical exposure, the related pathways and the uptake mechanisms is essential. Assessment of aggregate exposure, i.e. exposure occurring via multiple (equally important) sources/routes, requires a systematic approach. Numerous research efforts have been directed towards accumulating raw input data for comprehensive exposure modelling in residential and consumer settings, as well as towards the development of validated modelling approaches for realistic exposure predictions (PACEM, Merlin-Expo, INTEGRA, CARES, Creme Care & Cosmetics). The first aspect encompasses data collection, e.g. information on product composition and co-use, chemical concentrations in consumer products and contact media, activity records and product use patterns, population biometric details, etc. It is also important to account for an ingredient’s prevalence or frequency of occurrence (chemical occurrence) in a specific product category. If reliable data on the market fraction of a specific product category containing the substance of interest are available, the exposure predictions can be much more accurate than estimates obtained assuming 100% prevalence. To date, exposure input data are only available for certain types of consumer products (e.g. cosmetics and personal care, cleaning products) and are scattered across various consumer product databases and the scientific literature. It would be useful to develop a narrative for each exposure assessment to indicate when obtaining additional information would be of greatest use. For example, in some cases the lower tier exposure estimate associated with a single source is so conservative that it exceeds higher tier aggregate estimates (for multiple sources) based on more realistic data. In such instances the lowest tier single source value might be sufficient to account for aggregate sources as well, because of the conservative assumptions. The narrative should then detail the areas of conservatism in the exposure assumptions, enabling the assessor to focus any refinement efforts on the areas that would have the greatest impact. Wider engagement of industry, the public and regulators in the generation, harmonisation and management of input data related to consumer exposure will foster the advancement and predictive power of aggregate exposure models. One example where this has been successfully carried out is the ongoing effort by the Research Institute for Fragrance Materials (RIFM), which routinely gathers use levels of fragrance materials in a variety of personal care products and cosmetics via an online data portal, ultimately for integration into the Creme RIFM aggregate exposure model.
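To illustrate the effect of ingredient prevalence on aggregate estimates, the sketch below (Python) compares a simple probabilistic aggregate applied dose calculated with assumed market fractions against the same calculation assuming 100% prevalence. The product categories, use amounts, concentrations and prevalence values are hypothetical placeholders.

import random

# Minimal sketch of folding ingredient prevalence (market fraction) into a probabilistic
# aggregate exposure estimate. All inputs are hypothetical placeholders.

products = {
    # category: (amount applied per use [g], ingredient concentration [fraction], prevalence)
    "shampoo":     (10.0, 0.005, 0.30),
    "body_lotion": ( 7.5, 0.010, 0.15),
    "deodorant":   ( 1.5, 0.020, 0.60),
}

BODY_WEIGHT_KG = 60.0

def simulate_daily_exposure(assume_full_prevalence=False, n_iter=10_000, seed=1):
    rng = random.Random(seed)
    exposures = []
    for _ in range(n_iter):
        dose_mg = 0.0
        for amount_g, conc, prevalence in products.values():
            p = 1.0 if assume_full_prevalence else prevalence
            if rng.random() < p:  # does this product actually contain the ingredient?
                dose_mg += amount_g * 1000.0 * conc
        exposures.append(dose_mg / BODY_WEIGHT_KG)  # external (applied) dose, mg/kg bw/day
    return sum(exposures) / len(exposures)

print("Mean applied dose, 100% prevalence :", round(simulate_daily_exposure(True), 3), "mg/kg bw/day")
print("Mean applied dose, market fractions:", round(simulate_daily_exposure(False), 3), "mg/kg bw/day")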

Validation of predicted exposure is another important aspect to be considered while developing and advancing exposure tools. Validation normally involves verification of the modelling assumptions and input parameters (‘model verification’), as well as evaluation of the magnitude and range of the estimated exposure against real-world values (‘assessment verification’). The validity of the individual models with regard to their applicability to the exposure scenarios being modelled should be checked, along with verification of the integrated system of separate models. Addressing verification throughout the whole modelling chain, from source to dose and for the various building blocks of the model, helps to ensure that agreement between predictions and measurements at the end of the chain is not merely coincidental.

Validation of human exposure predictions normally involves quantitatively relating and comparing the modelling results (the ‘target dataset’) to independent measurements (the ‘verification dataset’), with specific care taken to ensure that the verification dataset comes from a representative and comparable population. The exposure estimates can be compared, for example, to human biomonitoring (HBM) data or to chemical concentrations measured in different microenvironments (e.g. residential indoor air, house dust), provided that these measurements were collected under similar conditions, i.e. the conditions reflected in the exposure model. Physiologically based pharmacokinetic (PBPK) models or mechanistic multi-media fate models can then be used to convert the exposure predictions to concentrations in body tissues and fluids or in environmental media, respectively. Thus, it may be worth collecting relevant measurement data in order to substantiate models’ validity for their application to risk management purposes. To date, biomonitoring studies have greatly improved understanding of population-level exposures, but their usefulness for evaluating exposure model predictions remains limited owing to a lack of contextual information. The incremental effort of including surveys to help understand the exposure sources of biomonitoring participants would enable this information to be better used for exposure model validation and development. Ideally these surveys would include information on dietary patterns, activity patterns, locations and consumer product use, so that total exposures estimated from biomonitoring data could be compared to the sum of model predictions across all sources.
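As a simple illustration of converting model output into a biomonitoring-comparable quantity, the sketch below (Python) uses a steady-state mass balance to translate a modelled external dose into an expected urinary biomarker concentration, which could then be set against measured HBM data. The urinary excretion fraction, molecular weights and other values are hypothetical, and the approach assumes steady state rather than a full PBPK description.

# Minimal sketch of a forward conversion from a modelled external dose to an expected
# urinary biomarker concentration, assuming steady state and a single urinary excretion
# fraction. Parameter values are illustrative only.

def predicted_urinary_concentration(dose_mg_per_kg_day, body_weight_kg,
                                    urinary_excretion_fraction, urine_volume_l_per_day,
                                    mw_parent, mw_metabolite):
    """Expected biomarker concentration in urine [mg/L] at steady state."""
    daily_dose_mg = dose_mg_per_kg_day * body_weight_kg
    metabolite_mg = daily_dose_mg * urinary_excretion_fraction * (mw_metabolite / mw_parent)
    return metabolite_mg / urine_volume_l_per_day

# Hypothetical example, to be compared against e.g. a biomonitoring geometric mean
conc = predicted_urinary_concentration(
    dose_mg_per_kg_day=0.002, body_weight_kg=60.0,
    urinary_excretion_fraction=0.7, urine_volume_l_per_day=1.6,
    mw_parent=194.2, mw_metabolite=152.1)
print(f"Predicted urinary biomarker concentration: {conc:.3f} mg/L")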

Further improvements to aggregate exposure models are foreseen from the perspective of model validation against spot-sample biomonitoring data. It is considered appropriate to increase the temporal resolution of the consumer exposure model and to calculate aggregate individual exposures not on a daily basis but within shorter time intervals (e.g. 6 hours). This approach would be particularly advantageous for validating aggregate exposure predictions for compounds whose elimination half-lives are very short relative to the intervals between exposures (e.g. parabens, phthalates), as many of these compounds have demonstrated substantial intra-individual, within-day variation in biomarker concentration (Preau et al, 2010). For such analytes, the direct use of spot samples from an individual over a single day in reverse dosimetry approaches may result in up to three orders of magnitude variation in the external dose estimates for the same day and individual (Aylward et al, 2012). A highly resolved consumer exposure model could significantly facilitate the interpretation of observed variability in cross-sectional epidemiological studies and assist in the design of studies utilising biomonitoring data as markers of exposure.
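The within-day variability that motivates a finer temporal resolution can be illustrated with the sketch below (Python): a one-compartment body burden with first-order elimination and an assumed short half-life is evaluated at three spot-sampling times following discrete exposure events. Event times, doses and the 3-hour half-life are hypothetical.

import math

# Minimal sketch of why spot samples vary within a day for short half-life compounds:
# a one-compartment model with first-order elimination and a few discrete exposure events.

HALF_LIFE_H = 3.0
K_ELIM = math.log(2) / HALF_LIFE_H                   # first-order elimination rate [1/h]
EVENTS = [(7.0, 50.0), (12.5, 20.0), (19.0, 80.0)]   # (time of day [h], absorbed amount [ug])

def body_burden(t_h):
    """Amount remaining in the body [ug] at time t, summing decayed contributions of prior events."""
    return sum(dose * math.exp(-K_ELIM * (t_h - t_event))
               for t_event, dose in EVENTS if t_event <= t_h)

for sample_time in (8.0, 14.0, 22.0):
    print(f"Spot sample at {sample_time:04.1f} h -> body burden {body_burden(sample_time):6.1f} ug")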

Linking external dose-based exposure models to biomonitoring data requires the use of physiologically based pharmacokinetic (PBPK) models in order to establish the link between the externally applied dose and the resulting concentration measured in a bodily fluid. Additionally, with the advent of alternative, non-animal methods in toxicology, it is likely that PBPK models will again be vital in linking safe levels derived from in vitro cell systems to doses from external exposure; the same applies to reverse dosimetry. Therefore, greater effort needs to be placed on linking PBPK models with more “conventional” exposure models to facilitate better risk assessment in the future.
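A minimal sketch of the forward and reverse dosimetry link that a PBPK model provides is given below (Python), reduced here to a one-compartment steady-state approximation with assumed clearance and oral bioavailability; a full PBPK model would of course resolve individual tissues and time courses.

# Minimal sketch of the link between external dose and internal concentration, reduced to a
# one-compartment steady-state approximation. Parameter values are hypothetical.

CLEARANCE_L_PER_H_PER_KG = 0.8   # total clearance
ORAL_BIOAVAILABILITY = 0.9       # fraction reaching the systemic circulation

def steady_state_conc(dose_mg_per_kg_day):
    """Forward dosimetry: external dose -> steady-state plasma concentration [mg/L]."""
    dose_rate = dose_mg_per_kg_day / 24.0                       # mg/kg/h
    return dose_rate * ORAL_BIOAVAILABILITY / CLEARANCE_L_PER_H_PER_KG

def oral_equivalent_dose(css_mg_per_l):
    """Reverse dosimetry: in vitro-derived plasma concentration -> external dose [mg/kg/day]."""
    return css_mg_per_l * CLEARANCE_L_PER_H_PER_KG * 24.0 / ORAL_BIOAVAILABILITY

print(steady_state_conc(0.05))      # plasma concentration for a 0.05 mg/kg/day exposure
print(oral_equivalent_dose(0.001))  # external dose equivalent to a 1 ug/L in vitro effect level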

Probabilistic, person-oriented approaches to aggregate consumer exposure modelling can also improve chemical risk assessments. It is, however, challenging to implement probabilistic models when input data are scarce or unavailable, and the generation of the required inputs is both laborious and expensive. It may therefore be worth exploring modern statistical tools that can help in situations of data paucity, for instance the Bayesian approach (Herring and Savitz, 2005; Crépet and Tressou, 2011). The concept is based on Bayes’ theorem and provides a sound mathematical framework for incorporating prior statistical knowledge (in the form of a probability distribution) about model parameters and updating this knowledge with new data (the likelihood). Bayesian analysis could be useful if, for example, one wanted to estimate the blood concentration distribution of a specific biomarker in a given population for which only a few data points (the likelihood) are available. Consideration of a large monitoring dataset available for a similar population (the prior distribution) makes it possible to derive a posterior distribution for the former population, assuming that the exposure circumstances in the two populations are comparable. Such an assumption should be subject to additional verification utilising (cross-sectional) socioeconomic data, time-activity patterns and consumer behavioural information, to allow conclusions on exposure similarity. The derived posterior distribution can then be used as an approximation of the “true” concentration distribution of the investigated substance in the population of interest, enabling rigorous validation of exposure modelling. Additionally, and as mentioned previously, approaches should be considered or developed that address whether aggregate exposure assessment is in fact necessary or of use, for example by considering an analogy to the Maximum Cumulative Ratio used in mixture assessment (Price and Han, 2011).
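A minimal sketch of such a Bayesian update is given below (Python): a prior for the log-scale mean biomarker concentration, taken from a large survey of a similar population, is combined with a handful of new measurements from the population of interest using a conjugate normal-normal update, with the log-scale variance treated as known for simplicity. All numbers are hypothetical.

import math

# Minimal sketch of a conjugate normal-normal Bayesian update on log-transformed
# biomarker concentrations. All values are hypothetical placeholders.

# Prior for the population log-mean (from a large monitoring dataset of a similar population)
prior_mean, prior_var = math.log(2.0), 0.25           # log(ug/L) scale

# Few new measurements from the population of interest (the likelihood)
new_data_ug_per_l = [1.2, 3.5, 2.1, 0.8]
log_data = [math.log(x) for x in new_data_ug_per_l]
data_var = 0.5                                         # assumed known log-scale variance
n = len(log_data)
data_mean = sum(log_data) / n

# Posterior for the log-mean: a precision-weighted combination of prior and data
post_var = 1.0 / (1.0 / prior_var + n / data_var)
post_mean = post_var * (prior_mean / prior_var + n * data_mean / data_var)

print(f"Posterior geometric mean: {math.exp(post_mean):.2f} ug/L "
      f"(95% credible interval {math.exp(post_mean - 1.96 * math.sqrt(post_var)):.2f}-"
      f"{math.exp(post_mean + 1.96 * math.sqrt(post_var)):.2f})")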

Additionally, there is potential to enhance the predictive power of aggregate exposure models by reducing the model uncertainty arising from an inappropriate or incomplete reflection of the true exposure mechanisms. For example, the potential adsorption of airborne chemicals to interior surfaces can be included if information on the chemical’s air:surface partitioning is available (Hodgson et al, 2003). Failure to include these additional sinks, and the secondary emission sources they can become, may result in inaccurate estimation of population inhalation exposure (a simple mass-balance sketch follows this paragraph). Furthermore, the development and implementation of computational algorithms for aggregate dermal exposure and risk assessments of product ingredients identified as potential contact allergens/sensitisers is needed. In this context, potency is defined as the relative ability of a chemical to induce sensitisation, determined by the quantity of the chemical per unit surface area required for the acquisition of skin sensitisation in a previously immunologically naive individual (induction phase) (van Loveren et al, 2008). Nevertheless, the question remains whether, and under which circumstances, using the external dermal load is the correct way to proceed in a quantitative risk assessment (QRA) of sensitisers, as dermal absorption may be a crucial step in skin sensitisation. Some exposure factors that may influence internal exposure cannot yet be quantified, for example the density of Langerhans cells (at specific skin sites) that transport the allergen (as a hapten) to regional lymph nodes, where it is presented to responsive T-lymphocytes, inducing an immune response (sensitisation) (Api et al, 2008). A subsequent exposure will then provoke a dermal inflammatory (allergic) reaction. The challenge is therefore to model mechanistically the internal process of sensitisation, including dermal absorption, hapten formation, Langerhans cell transport to the lymph nodes and repeated aggregate exposures from the application of multiple consumer products under different scenarios, and to apply this in QRA.
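The mass-balance sketch referred to above is given below (Python): a single, well-mixed indoor zone with a reversible surface sink, showing how air:surface partitioning can lower the airborne concentration that feeds an inhalation exposure estimate during the emission period (the sorbed mass would be re-released once emission stops). All parameter values are hypothetical placeholders.

# Minimal sketch of a single-zone indoor mass balance with a reversible surface sink.
# Concentrations are reported after 24 h of constant emission (a transient, not steady state).

V = 30.0        # room volume [m3]
Q = 15.0        # ventilation rate [m3/h]
A = 50.0        # sorbing surface area [m2]
E = 500.0       # constant emission rate [ug/h]
K_ADS = 0.5     # deposition velocity, air -> surface [m/h]
K_DES = 0.05    # desorption rate, surface -> air [1/h]

def simulate(with_sink=True, hours=24.0, dt=0.01):
    c_air, m_surf = 0.0, 0.0          # [ug/m3], [ug/m2]
    k_a = K_ADS if with_sink else 0.0
    t = 0.0
    while t < hours:                   # simple forward-Euler integration
        d_air = E / V - (Q / V) * c_air - k_a * (A / V) * c_air + K_DES * (A / V) * m_surf
        d_surf = k_a * c_air - K_DES * m_surf
        c_air += d_air * dt
        m_surf += d_surf * dt
        t += dt
    return c_air

print("Air concentration after 24 h, no surface sink  :", round(simulate(False), 1), "ug/m3")
print("Air concentration after 24 h, with surface sink:", round(simulate(True), 1), "ug/m3")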

Another area of opportunity for exposure science is the plethora of devices being used to monitor consumer health. On the consumer side, there are a large number of self-reporting smartphone apps and wearable technologies that monitor key exposure determinants such as activity, diet and other health parameters relevant to consumer health and, potentially, exposure. On the more scientific side, sensor technologies have improved in quality and decreased in size, enabling a greater volume of data to be generated and gathered in order to monitor and assess consumer exposure via controlled studies, for example by examining inhaled air in various locations and scenarios. The volume of data generated by either approach is potentially huge and would require computational platforms capable of handling what is now commonly termed Big Data.