Breaking Down Barriers

Sept. 1, 2016
Third-party testing can help new technologies gain acceptance

About the author: C. Bruce Bartley is president of Bartley Water Associates LLC. Bartley can be reached at [email protected] or 734.883.3639.


Innovators may find that their technologies face barriers to acceptance in drinking water applications because of concerns that new and unproven equipment could unexpectedly and adversely affect drinking water quality. Engineers who specify drinking water technologies often are more comfortable with existing technologies than with new ones.

Some of the most commonly reported barriers include:

  • Uncertainty about a new technology’s capabilities;
  • Lack of uniform guidelines for testing;
  • Lack of pre-qualification criteria before pilot tests;
  • Exaggerated performance claims;
  • Limited access to technical data;
  • Lack of testing in a “real world” setting;
  • “Maintenance-free” equipment claims; and
  • Limited involvement of the U.S. Environmental Protection Agency (EPA) in technology testing.

Addressing these barriers may seem daunting, but third-party testing can be a key step in alleviating concerns and advancing innovative drinking water technologies. For example, in 1995, ultraviolet (UV) disinfection was considered a new and innovative technology for inactivating Cryptosporidium. In 1998, EPA's Environmental Technology Verification (ETV) program, along with other research, demonstrated UV's effectiveness against Cryptosporidium. Today UV technology is widely accepted and used in drinking water treatment, in large part because of the independent testing conducted through the ETV program.

A crucial component of advancing innovative technologies is quickly gaining the acceptance of engineers and regulatory agencies. Third-party testing and certification can help accelerate the acceptance process.  

The proposed third-party approach described herein was derived from the lessons of EPA's ETV program and the Association of State Drinking Water Administrators' (ASDWA) State Alternative Technology Approval Protocol (1995). Both sources share five common steps to enhance state regulatory acceptance of emerging and innovative technologies.

Step 1: Determine the Material Safety of the Technology

Many states and consultants require knowledge of material safety before considering any other criteria for a new technology. Almost all states require compliance with NSF/ANSI Standards 60 and 61, which assess whether toxic chemicals in products leach into drinking water. Certification to NSF/ANSI 60 and 61 therefore is the first criterion to meet before undertaking more expensive testing. Depending on the circumstances, it also may be the only criterion: Where the only health risk comes from the chemical used in the technology or from materials in contact with drinking water, compliance with NSF/ANSI 60 and 61 may be all that is needed for acceptance.

Step 2: Define the Drinking Water Issue the Technology Solves

Clearly defining the water issue the technology claims to solve can make evaluation or testing more cost-effective. Often, states or engineering companies have evaluation criteria that pertain only to conventional technologies, and those criteria may not address the water issue the new technology is meant to resolve.

Step 3: Define Requirements & Criteria Based on Public Health Risk

There are three paths to consider in assessing the public health risk of a new technology. The first is whether the technology affects an acute health risk, such as waterborne pathogens. The second is whether it affects a chronic health risk, such as a carcinogen. Finally, if the technology does not affect any health risk (i.e., it primarily affects operation and maintenance within the treatment plant or its distribution system), a less rigorous performance approach would be acceptable. More information and scrutiny likely would be needed for acute and chronic health risks.

The criteria for evaluation, testing or certification requirements should depend on the level of risk to public health. For example, a dissolved oxygen (DO) sensor within a distribution system may not have as many requirements as a nitrate sensor at the treatment plant, based on the difference in the public health risk associated with the chemical measured. Another example is a technology designed to reduce taste and odor via a more efficient iron reduction method. The criteria for assessing a technology impacting aesthetics may have fewer or different requirements than a new technology designed to inactivate pathogens. This risk-based approach was used effectively in the ETV program to prioritize protocols and their requirements. 
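To make this risk-based triage concrete, the sketch below maps the three risk paths to assumed levels of evaluation rigor. The risk classes, requirement names and tiers are illustrative assumptions for demonstration only; they are not drawn from the ETV or ASDWA documents.

```python
# Illustrative sketch only: the risk tiers and requirement levels below are
# assumptions for demonstration, not part of the ETV or ASDWA protocols.
from enum import Enum


class HealthRisk(Enum):
    ACUTE = "acute"       # e.g., waterborne pathogens
    CHRONIC = "chronic"   # e.g., carcinogens
    NONE = "none"         # operational or aesthetic impact only


def testing_requirements(risk: HealthRisk) -> dict:
    """Map the public health risk a technology affects to an assumed
    (hypothetical) level of evaluation rigor."""
    if risk is HealthRisk.ACUTE:
        return {"field_testing": True, "independent_lab_data": True,
                "qa_plan": "full", "challenge_tests": True}
    if risk is HealthRisk.CHRONIC:
        return {"field_testing": True, "independent_lab_data": True,
                "qa_plan": "full", "challenge_tests": False}
    # No direct health risk: a less rigorous performance demonstration
    return {"field_testing": False, "independent_lab_data": False,
            "qa_plan": "basic", "challenge_tests": False}


print(testing_requirements(HealthRisk.ACUTE))
```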

Step 4: Assess the Efficacy of the Technology

Based on the risk associated with the technology and the water issue being resolved, it is possible little testing will be needed. For example, with a DO sensor, a series of split samples where the sensor reading is corroborated with lab tests may be sufficient proof of accuracy and hence efficacy. The complexity and comprehensiveness of testing and data will be determined in the previous steps and will affect the efficacy protocols of this step. Also, minimum requirements established in a document or protocol would help assure regulators they are using sound science in their decisions.
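As a minimal sketch of that split-sample approach, the example below compares hypothetical paired DO sensor and laboratory readings against an assumed 10% acceptance threshold; the data and threshold are illustrative assumptions, not values from any protocol.

```python
# Illustrative sketch: paired sensor vs. laboratory dissolved oxygen (DO)
# readings in mg/L. The data and the 10% acceptance threshold are assumed
# values for demonstration, not requirements from any protocol.
sensor_mg_l = [8.1, 7.9, 8.4, 6.8, 7.2]   # in-line DO sensor readings
lab_mg_l    = [8.0, 8.1, 8.3, 7.0, 7.1]   # split samples analyzed in the lab

# Mean absolute percent difference between sensor and lab results
percent_diffs = [abs(s - l) / l * 100 for s, l in zip(sensor_mg_l, lab_mg_l)]
mean_percent_diff = sum(percent_diffs) / len(percent_diffs)

ACCEPTANCE_THRESHOLD = 10.0  # assumed acceptance criterion, in percent
print(f"Mean absolute percent difference: {mean_percent_diff:.1f}%")
print("Accuracy claim supported" if mean_percent_diff <= ACCEPTANCE_THRESHOLD
      else "Further testing needed")
```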

Most likely, states, utilities and engineers will want to see independent, quality data on performance, although they may not need ETV-quality data for certain products. On the other hand, more rigorous requirements would be needed to evaluate a technology treating Legionella in a distribution system. The experimental design of testing would be determined based on public health risk.

Step 5: Determine Operational Efficacy & Cost-Effectiveness

Both the ETV field-based protocols and ASDWA 1995 protocol include requirements to collect operations and maintenance (O&M) information. O&M information includes estimates of costs to operate and maintain the technology, such as labor, electricity and water loss.   
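As a simple illustration of how such O&M information might be rolled into an annual cost estimate, the sketch below uses assumed quantities and unit costs; none of the figures come from the ETV or ASDWA protocols.

```python
# Illustrative sketch: an annualized O&M cost estimate of the kind a field
# protocol might report. All quantities and unit costs are assumed values.
labor_hours_per_year = 120       # operator attention
labor_rate = 45.0                # $/hour
energy_kwh_per_year = 15_000     # metered electricity use
energy_rate = 0.12               # $/kWh
water_loss_kgal_per_year = 500   # backwash/reject water
water_rate = 3.50                # $/1,000 gal

annual_om_cost = (labor_hours_per_year * labor_rate
                  + energy_kwh_per_year * energy_rate
                  + water_loss_kgal_per_year * water_rate)

print(f"Estimated annual O&M cost: ${annual_om_cost:,.0f}")
```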

Both the ETV and ASDWA protocols attempt to capture the data decision makers need. ETV stakeholders routinely debated how much information was needed and how it should be collected and shared, and they correctly suggested that any O&M information from existing facilities be captured and reported. Testing often is performed under controlled conditions in laboratories, but field-generated data better reflect real-world conditions; such field data were perhaps some of the most valuable information gathered from ETV field tests.

Defining Independence of Data

This final recommendation attempts to define the independence of data and engage those producing such data. The recommendation from the ASDWA 1995 protocol serves as a starting point: “Information can be obtained from the following sources (in order of preference):

  • Accredited third-party verifier or certifier;
  • Recognized third-party independent test data;
  • Pilot study data;
  • Approval from other states, countries or federal agencies; and
  • Manufacturer’s test data.”  

An accredited third-party certifier can best be defined as an organization accredited by, and listed with, a recognized accreditation body such as the American National Standards Institute. Any organization that meets EPA’s quality assurance requirements, such as the EPA Cluster Centers and EPA-funded Small Systems Research Centers, also would be considered independent.

The second item on the list, recognized third-party independent test data, is a challenge. Based on the ETV program and its criteria for accepting secondary data, reports on efficacy would be acceptable if a laboratory manager signed and attested to the quality of the data and the professional engineer responsible for the evaluation or testing applied a P.E. seal attesting to the independence of the data and report. EPA’s criteria for secondary data also require that the quality controls used in the reports align with accepted protocols and practices.

In the ETV program, manufacturers’ data routinely were found to lack adequate or pertinent quality controls. Manufacturers’ data are best used in the early stages of development, but likely would not be acceptable during regulatory review because of the potential conflict of interest.

Since the ETV program ended, there has been no program making centrally located, independent test data available to engineers and regulators. There now are many sources of independent evaluations, so companies may need to have their technology tested by many different organizations, and that testing may be inappropriate when test organizations and regulators default to criteria developed for conventional technologies.

The outline presented here, drawing on lessons learned from the ETV program and the ASDWA protocol, offers an approach to streamline testing based on public health risk. An organization or government agency with a history of independence should oversee this new approach and serve as a repository for data generated by many sources.
