Abstract: |
State-of-the-art infrastructure management systems use Markov Decision Processes (MDPs) as a methodology for maintenance and rehabilitation (M&R) decision-making. The underlying assumption in this methodology is that an inspection is performed at the beginning of every year and that inspections reveal the true condition state of the facility, with no error. As a result, after an inspection, the decision maker can apply the activity prescribed by the optimal policy for the observed condition state of the facility. Previous research has developed a methodology for M&R activity selection that accounts for the presence of both forecasting and measurement uncertainty: the Latent Markov Decision Process (LMDP), an extension of the traditional MDP that does not assume the measurement of facility condition to be error-free. In this paper, we extend this methodology to include network-level constraints, which is achieved by extending the LMDP model to the network-level problem through the use of randomized policies. We present both finite-horizon (transient) and infinite-horizon (steady-state) formulations of the network-level LMDP. A case study application demonstrates the expected savings in life-cycle costs that result from increasing the measurement accuracy used in facility inspections and from scheduling inspection decisions optimally.