Title: Understanding, Assessing, and Mitigating Safety Risks in Artificial Intelligence Systems.
Authors: Kroll, J.; Berzins, V.
Abstract: Traditional software safety techniques rely on validating software against a deductively defined specification of how the software should behave in particular situations. In the case of AI systems, specifications are often implicit or inductively defined. Data-driven methods are subject to sampling error, since practical datasets cannot provide exhaustive coverage of all possible events in a real physical environment. Traditional software verification and validation approaches may not apply directly to these novel systems, complicating the operation of systems safety analysis (such as implemented in MIL-STD 882). However, AI offers advanced capabilities, and it is desirable to ensure the safety of systems that rely on these capabilities. When AI technology is deployed in a weapon system, robot, or planning system, unwanted events are possible. Several techniques can support the evaluation process for understanding the nature and likelihood of unwanted events in AI systems and making risk decisions on naval employment. This research considers the state of the art, evaluating which techniques are most likely to be employable, usable, and correct. Techniques include software analysis, simulation environments, and mathematical determinations.
Pages: 67