The workshop participants discussed specific brainstorm questions that had been communicated together with the programme prior to the workshop:
- How do we enhance utilisation of currently available tools?
- Where is there a need to develop new tools and data to improve exposure assessment?
- How can this be done? What are the practical steps that need to be taken?
- Who should drive efforts to complete the work?
- How do we enhance utilisation of currently available tools? Discussions within the four breakout groups centred on the following:
- Guidance for Fitness for Purpose: Well-stated and validated applicability domains, with standardised descriptors recognising the unique factors of each model that determine its fitness for purpose (i.e. its “pedigree”), are needed. Models should be validated, and validation across models should also be conducted. Decision trees / templates would help risk assessors (i) formulate the question (what is the scope of the assessment?) and (ii) choose the most appropriate model, tool and data source in each set of circumstances. Case studies could be used to exemplify this guidance. Regulators should be involved early on in model development and in the development of decision trees and guidance documents, in order to increase confidence and trust in the outcome.
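In a software tool, such a decision tree could reduce to a rule-based lookup against each model's stated applicability domain. The sketch below is purely illustrative: all model names, routes, populations and pedigree fields are hypothetical placeholders, not tools or criteria discussed at the workshop.

```python
# Illustrative sketch of one decision-tree step: filter an exposure-model
# catalogue by validated applicability domain. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class ModelPedigree:
    name: str
    routes: set        # exposure routes the model covers, e.g. {"dermal"}
    populations: set   # populations within the stated applicability domain
    validated: bool    # whether the model has been validated

CATALOGUE = [
    ModelPedigree("ToolA", {"dermal", "oral"}, {"adults"}, True),
    ModelPedigree("ToolB", {"inhalation"}, {"adults", "children"}, True),
    ModelPedigree("ToolC", {"dermal"}, {"children"}, False),
]

def candidate_models(route: str, population: str) -> list:
    """Return validated models whose applicability domain covers the question."""
    return [m.name for m in CATALOGUE
            if m.validated and route in m.routes and population in m.populations]
```

For example, `candidate_models("dermal", "children")` returns an empty list here, because the only hypothetical dermal model covering children is unvalidated; in such a case a real decision tree would direct the assessor to justify an out-of-domain use or seek new data.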
- Transparency: Tools, data, documentation and software should be open access. It was noted, for example, that within REACH there is archived hazard information but little in the way of archived exposure information. Transparency must also extend to explaining why the risk assessor chose a particular exposure tool and particular input data (a justification narrative).
- Maintenance and Dissemination: Suggestions included awareness-raising through workshops, scientific events and Wikipedia-type tools (cf. the AOP Wiki), which would require ownership at the global level (OECD).
- Where is there a need to develop new tools and data to improve exposure assessment? One breakout group discussed the drivers for developing new tools and concluded that risk perception may not reflect real risk. Thus, consumer perception of risk, rather than real risk, may be driving regulatory decisions. The triggers for developing new tools and data should centre on whether existing tools/data are fit for purpose:
- Optimise existing tools first: Rather than develop new tools, first look at how existing tools can be better applied, and sufficiently validated, based on risk considerations: do the existing tools and data provide exposure estimates that reflect real-life exposures? Specific areas that require focus include: time integration; sensitivity analysis; accurate input parameters (e.g. the retention factor); validation; definitions for read-across; Life Cycle Analysis; and the inclusion of indirect exposure (especially from dust and articles) as well as direct exposure.
- Build databases on consumer information: Databases of relevant consumer information and product composition (chemical concentration and presence probability) across domains are needed more than new models (e.g. for biocides used in cosmetics, preservatives, household products, etc.). Companies should be encouraged to share their in-house data in an anonymised format that protects intellectual property; this could be managed at trade-association level. Prioritisation of substances (e.g. as was done in the Cosmetic Ingredient Review in the US) would be a first step. Input data are needed, e.g. for consumer articles and household products, and exposure data are required for sub-populations, e.g. infants and children.
- Aggregate Exposure Problem Formulation: Although regulations increasingly call for aggregate exposure assessments, it is not widely known what approach to take to determine aggregate exposure, or how such an assessment will be received from a regulatory perspective. The problem formulation needs to be made clear first, with tools and data related to it via templates and decision trees. Tier 1 tools are generally adequate and accepted for occupational and consumer exposure to a single product. Low-tier assessments should remain the first priority.
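As a minimal numeric illustration of why problem formulation matters here (not a method endorsed by the workshop), a conservative Tier 1 aggregate estimate is often taken as the simple sum of per-product screening estimates. The per-product formula below (daily amount applied × substance concentration × retention factor, scaled to body weight) is a common screening form; all numbers are invented.

```python
def product_exposure(amount_g, conc_fraction, retention, body_weight_kg):
    """Tier 1 screening estimate of daily systemic exposure (mg/kg bw/day)
    from one product: grams applied per day x concentration (fraction)
    x retention factor, converted to mg and scaled to body weight."""
    return amount_g * 1000 * conc_fraction * retention / body_weight_kg

# Hypothetical inputs: the same preservative in two product types.
shampoo = product_exposure(amount_g=10.5, conc_fraction=0.005,
                           retention=0.01, body_weight_kg=60)  # rinse-off
lotion = product_exposure(amount_g=7.8, conc_fraction=0.005,
                          retention=1.0, body_weight_kg=60)    # leave-on

# Tier 1 aggregate: conservative summation across all products,
# assuming every product is used every day by the same person.
aggregate = shampoo + lotion
```

The summation assumes full co-use of all products, which is the kind of conservative assumption that higher-tier, data-driven approaches (e.g. using presence-probability data from the databases discussed above) would refine.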
- How can improved consumer exposure assessment data and tools be developed? What are the practical steps that need to be taken?
- Stay within the Applicability Domain: Use decision trees and develop validation criteria for fit for purpose.
- Cefic LRI project on data sharing across domains: Tools and data for aggregate exposure specific to domains and product categories exist, but there is little cross-talk between the domains. This needs to be addressed in order to achieve regulatory acceptance. A Cefic LRI project could consider assessments of chemicals that are ubiquitous across product types covering different categories: run case studies on these chemicals to identify what tools are available for aggregate exposure assessment, and then assess the level of exposure across these categories.
- Who should drive efforts to complete the work? Regulators and policy makers are needed to identify the requirements for moving this forward and should be engaged early in the process; industry, regulators, academics and others can then influence the discussions.
- Public portals to facilitate discussion and data sharing (ISES, OECD, IPCS, IP-CHEM): templates and mechanisms for the effective and transparent sharing of data, based on specific types of use (e.g. SCEDs and sub-PCs may be models to start working from).
- Criteria for fitness for purpose and model validation: ISES, OECD (systematic collection/archiving of useful information via IUCLID), IPCS, ECETOC and regulators (e.g. REEG). Collaboration between industry and regulators is required.
- Harmonisation of data/models/standards: OECD provides a platform that could be used across geographies. ECHA involvement will also be important. Funding ultimately has to come from industry, with early input and priority setting from regulators.
There was a strong recommendation for consensus on a robust, science-based, fit-for-purpose framework/guidelines to impose discipline on the quality and adequacy of:
- Data collection/measurements
- Data analysis (tools and applications): CEN descriptions of what an exposure tool/model should “look like”. Transparency, to understand why certain tools are used in certain applications, would be necessary. On the other hand, some participants questioned whether introducing such an additional layer of ‘bureaucracy’ into the process would be helpful.
- Application of exposure information into exposure-based risk assessment.
Exposure Science might also benefit from:
- Agreed processes to extrapolate the applicability of data to different contexts: for example, could worker exposure information be applied in a different exposure domain? Is there best practice for exposure data quality (e.g. weighting criteria)? This does exist in the occupational setting: could it be extended or adapted to consumer exposure? An internationally recognised rating system would help evaluate the quality of data.
- A systematic programme/framework on “exposure quality”, which will require broad stakeholder involvement and agreement, including OECD, ECHA and different geographies
- A life cycle analysis of exposure
The ECETOC TF report, the landscaping document and this workshop have provided a platform to begin the process of providing clear guidance on exposure assessment. As discussed, unless robust frameworks and guidance are established, variability in exposure measurements and estimates will remain commonplace.
The Cefic Long-Range Research Initiative has developed “Estimation of Realistic Consumer Exposure to Substances from Multiple Sources and Approaches to Validation of Exposure Models” (LRI-B7-ETHZ) and “Realistic Estimation of Exposure to Substances from Multiple Sources (TAGS)” (LRI-B5-CERTH). Both are mentioned in the Human Exposure Data Task Force Report.