Technical Report 126


When performing an exposure assessment for a chemical, it is important to understand the tier at which the assessment is being performed, as this informs which models and data are most appropriate. The output of this landscaping exercise indicates some of the available data sources and tools, and the uses for which they are appropriate. In general, tools and data exist to support the development of exposure estimates for individual consumer products, particularly at a screening level.

For chemicals where consumer exposures are generally low, occur infrequently, and/or presence in co-used products is uncommon, aggregate exposure assessment may not be required. Where it is appropriate to assess aggregate exposure and the initial exposure assessment appears unrealistically high, a more refined assessment should be performed. Moving to a higher-tier assessment requires identifying data and a methodology that can generate realistic aggregate exposure estimates. Screening-level tools are of limited use here, since simply summing their conservative single-use estimates leads to strongly over-conservative estimates of aggregate exposure. As the cosmetics and food domains show, most existing tools for aggregate exposure assessment are probabilistic: they allow the user to define distributions for parameters that reflect (true) variability and uncertainty in both the inputs and the estimated exposure (output), and are built upon databases of consumer habits and practices that detail real co-use and non-use of products. Such tools also allow the user to study the impact of individual parameters on the outcome and to identify the uncertainty drivers in the predicted exposure; this information can be used to determine what additional data should be collected. As mentioned, for the broader chemical domain such tools do not exist. Depending on the focus of the exposure assessment, the assessor may judge that sufficient information is available for some of the product groups included (e.g. household products) to perform a (partially) probabilistic assessment, and can implement a simple probabilistic algorithm in an off-the-shelf statistical program. The drawbacks are that this is generally labour intensive, requires a basic understanding of the statistical program, and yields a model that is not validated and may overlook important factors, especially if data availability is limited.
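A simple probabilistic algorithm of the kind described above can be sketched in a few lines of code. The following is a minimal illustration only, not a validated model: the product names, use probabilities, and lognormal parameters for amount used and chemical concentration are all invented for the example. It shows how per-product use probabilities capture non-use, and how exposures from co-used products are summed within each simulated individual before percentiles are taken.

```python
import random

random.seed(42)  # reproducible example

# Hypothetical household products (all values are illustrative assumptions):
#   p_use   - probability the product is used on a given day (captures non-use)
#   amount  - (mu, sigma) of ln(grams of product used per day)
#   conc    - (mu, sigma) of ln(weight fraction of the chemical in the product)
PRODUCTS = {
    "surface_cleaner": {"p_use": 0.60, "amount": (1.5, 0.5), "conc": (-6.0, 0.3)},
    "laundry_liquid":  {"p_use": 0.40, "amount": (2.5, 0.4), "conc": (-5.5, 0.4)},
    "dish_soap":       {"p_use": 0.80, "amount": (1.0, 0.6), "conc": (-6.5, 0.2)},
}

BODY_WEIGHT_KG = 60.0  # fixed here for simplicity; could also be sampled

def simulate_person():
    """One Monte Carlo iteration: aggregate exposure (mg/kg bw/day)
    summed over the products this simulated person actually co-uses."""
    total_mg = 0.0
    for spec in PRODUCTS.values():
        if random.random() < spec["p_use"]:  # non-use is explicitly possible
            grams = random.lognormvariate(*spec["amount"])
            fraction = random.lognormvariate(*spec["conc"])
            total_mg += grams * 1000.0 * fraction  # g of product -> mg of chemical
    return total_mg / BODY_WEIGHT_KG

samples = sorted(simulate_person() for _ in range(10_000))
p50 = samples[len(samples) // 2]
p95 = samples[int(len(samples) * 0.95)]
print(f"median = {p50:.4f} mg/kg bw/day, 95th percentile = {p95:.4f}")
```

Summing exposures within each iteration, rather than adding up independent conservative percentiles per product, is what avoids the over-conservatism described above; the spread between the median and the 95th percentile then reflects the assumed variability in use patterns and concentrations.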

In the foods domain, aggregate exposure assessment is performed largely as a matter of course, and there is therefore a greater abundance of data available than in other domains. One reason for this is that food consumption surveys are routinely carried out for the purpose of nutritional status assessment, and the resulting data can subsequently be used for chemical exposure assessment, as they typically detail co-consumption of foods at the individual level. Owing to the challenges of characterising and capturing the complete diet of consumers, these datasets are known to have some significant shortcomings, particularly under-reporting and a lack of data on infrequently consumed foods. Equally, lack of access to the necessary raw food consumption data is an issue in Europe and thus represents a barrier to performing higher-tier exposure assessments. Finally, as in other domains, there is a lack of information on chemical concentrations in proprietary formulations, which again is a required input for refined exposure assessment.

To perform a realistic high-tier aggregate exposure assessment, data on product co-use, chemical concentration and chemical occurrence in formulations are vital; without them, the accumulated conservatism will generally produce strongly biased and unrealistic results. Market survey databases such as those provided by Mintel, Kantar Worldpanel and Euromonitor International are increasingly used to provide such data in exposure assessments, because they can refine exposure parameters, for example by providing the occurrence of a chemical in a given food category based on labelling data. Other examples include "crowdsourced" data such as the Swiss database Codecheck, where consumers can submit labelling information to a publicly available database. There are undoubtedly opportunities to exploit such data sources and other "big data" in exposure assessment, such as data from diet-tracker smartphone applications, which have the potential to provide unprecedented sample sizes and detail. However, many of the scientific issues and associated uncertainties still need to be addressed in a comprehensive manner.
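The refinement that occurrence data enable can be illustrated with a deliberately simple sketch. The occurrence fraction and per-use intake below are invented values standing in for what a labelling or market survey database might supply; the point is only that sampling presence/absence of the chemical, rather than assuming it is always present, lowers the estimate toward a more realistic central value.

```python
import random

random.seed(1)  # reproducible example

# Illustrative assumptions (would come from labelling / market survey data):
OCCURRENCE = 0.18       # fraction of products in the category containing the chemical
MEAN_INTAKE_MG = 0.35   # exposure per occasion when the chemical is present (mg)

def refined_mean_exposure(n_iter=10_000):
    """Mean exposure per occasion, accounting for the chance that the
    chosen product does not contain the chemical at all."""
    total = 0.0
    for _ in range(n_iter):
        if random.random() < OCCURRENCE:  # chemical present in this product?
            total += MEAN_INTAKE_MG
    return total / n_iter

estimate = refined_mean_exposure()
print(f"occurrence-adjusted mean exposure: {estimate:.3f} mg per occasion")
```

A worst-case assessment that assumed 100% occurrence would use 0.35 mg per occasion; the occurrence-adjusted mean converges on OCCURRENCE × MEAN_INTAKE_MG instead, showing in miniature how such data reduce built-in conservatism.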