Keywords:
Statistical process control, Data processes, Vehicle Inspection Database (VID), Procedures, Data collection, Implementation, Analytical software, Recommendation implementations, Electronic Transmission (ET), Environmental Protection Agency (EPA)
Abstract:
Over the past 10 years, a number of "enhanced" vehicle emissions inspection and maintenance (I/M) programs have been implemented, including programs with both centralized and decentralized inspection networks. Key design elements incorporated into many decentralized enhanced programs include the establishment of a Vehicle Inspection Database (VID) and automated electronic transmission (ET) of all test data on a real-time basis to the VID. Over a dozen states, including some with basic inspection programs, have now incorporated ET and a VID into their decentralized I/M programs. While not usually considered as such, VIDs and electronic transmission of inspection data are also typical elements of centralized inspection programs (i.e., test data are routinely transferred to a central database). More than a dozen additional states and the District of Columbia have either contractor- or government-operated centralized inspection programs, all of which are presumed to include electronic data transmission to a central database. The widespread implementation of ET systems has greatly increased I/M programs' access to the resulting vehicle inspection data and other information (e.g., equipment calibration results) recorded by the inspection systems and transmitted to the VID. This in turn has raised state expectations regarding the potential benefits of analyzing the resulting data for a variety of purposes, including quality assurance/quality control (QA/QC). One key area of interest, particularly for decentralized programs, is conducting statistical analyses on the data in order to identify potential problem test stations and inspectors. Analytical software, typically referred to as "triggers" software, has therefore been developed and is being used by some states to evaluate the performance of test stations and inspectors relative to a range of individual performance indicators.
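To illustrate the kind of station-level trigger described above, the sketch below flags stations whose overall pass rates are statistical outliers relative to the program-wide mean. This is a minimal, hypothetical example only: the data layout, the z-score approach, and the threshold value are assumptions for illustration, not details of any actual state's triggers software.

```python
from statistics import mean, stdev

def flag_outlier_stations(station_pass_rates, z_threshold=3.0):
    """Flag stations whose pass rate deviates from the program-wide
    mean by more than z_threshold standard deviations.

    station_pass_rates: dict mapping station ID -> pass rate (0.0-1.0).
    The z-score metric and 3-sigma default are illustrative
    assumptions, not values taken from any real triggers system.
    """
    rates = list(station_pass_rates.values())
    mu, sigma = mean(rates), stdev(rates)
    if sigma == 0:
        # All stations identical; nothing stands out.
        return []
    return [sid for sid, rate in station_pass_rates.items()
            if abs(rate - mu) / sigma > z_threshold]
```

In practice, a single indicator like this would be one of many: triggers systems typically score stations against a range of performance indicators and combine the results.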
A few states are also beginning to use triggers software to evaluate equipment performance. Generally, states are concerned that any public release of specific details regarding their triggers software could result in inspection stations learning how the data are being analyzed. This could in turn lead to stations that might otherwise be identified by the software as problem performers intentionally modifying their behavior to avoid detection. As a result, it is often difficult for states to learn from one another in this area. Similarly, no guidance has been available to date from EPA on how to integrate trigger approaches into automated ET systems and VIDs. EPA has become interested in developing guidance to aid states in designing, implementing, and using such triggers. Accordingly, Sierra was issued Work Assignment 3-03 to develop and provide EPA with draft recommended guidance aimed at assisting states in using triggers and statistical process control (SPC) analysis to identify potential problem inspection stations, inspectors, and test systems. This report was prepared in response to the work assignment. Specifically, the report addresses such issues as the types of possible triggers and "control" charts, analytical issues to be considered in developing trigger calculations, methods for reporting the results (including control charting), and potential uses of the trigger results. It also provides recommendations regarding the structure of an effective overall QA/QC system. A number of states were contacted and subsequently provided information that was used in preparing this report. Per Sierra's discussions with the individual states, the general approach used herein is to refrain from attributing detailed information regarding specific triggers to particular states or I/M programs.
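The control charting mentioned above can be sketched with the textbook p-chart (proportion chart) from SPC, applied to station pass rates. The code below is a hedged illustration under assumed inputs (per-station counts of passing tests and total tests); it is not the report's recommended implementation, and the 3-sigma limits are the conventional default rather than a value taken from any program.

```python
from math import sqrt

def p_chart_signals(station_counts, sigma=3.0):
    """Identify stations whose pass rate falls outside p-chart
    control limits.

    station_counts: dict mapping station ID -> (passes, total_tests).
    Center line is the pooled pass rate p_bar; per-station limits are
    p_bar +/- sigma * sqrt(p_bar * (1 - p_bar) / n_i), the standard
    p-chart formula with varying sample sizes.
    """
    total_pass = sum(p for p, n in station_counts.values())
    total_tests = sum(n for p, n in station_counts.values())
    p_bar = total_pass / total_tests
    signals = []
    for sid, (passes, n) in station_counts.items():
        rate = passes / n
        half_width = sigma * sqrt(p_bar * (1 - p_bar) / n)
        # Lower limit is clamped at zero, since a proportion
        # cannot be negative.
        lcl = max(p_bar - half_width, 0.0)
        ucl = p_bar + half_width
        if rate > ucl or rate < lcl:
            signals.append(sid)
    return signals
```

Because the limit width shrinks with sample size, a high-volume station is held to tighter limits than a low-volume one, which is one reason per-station sample sizes matter in this kind of analysis.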
State staff were also reluctant to discuss problems encountered in implementing existing triggers systems or other sensitive issues, due to concerns regarding possible negative consequences to their programs of releasing this information. Therefore, as agreed with the states, this information is also presented without attribution. The discussions contained in the report concentrate on VID-based decentralized program implementation issues, since this is the expected primary focus of any triggers software. Notwithstanding this, many of the analytical techniques and system approaches described herein may also have application to centralized inspection programs. In addition, the same techniques could be applied to decentralized programs that involve data collection via retrieval of computer diskettes from the individual test systems (assuming the retrieved data are subsequently entered into some sort of overall database).