Title: Developing and Signaling Trust in Synthetic Autonomous Agents (SAAs).
Author: Johnson, K. A.
Keywords: Trust (psychology), Autonomous agents, Neural networks, Scenarios, Automated driving systems, SAAs (synthetic autonomous agents), Artificial neural networks
Abstract: Major Goals:
Goal 1. The primary goal of this one-year research project was to draw on social psychological research in order to specify the morals and values of good drivers that may be available for programming SAAs to make decisions and behave with moral integrity.
Goal 2. Our second goal was to begin testing the feasibility of programming value-governed parameters of SAAs in a newly developed, four-wheel, skid-steer robotic car resembling a 1:28-scale self-driving car, which we refer to as a Go-CHART.
Goal 3. Our third goal was to identify the most efficacious signal of programmed moral integrity in order to garner appropriate trust from human operators and the general public.

Synthetic Autonomous Agents (SAAs; e.g., self-driving cars, unmanned search-and-rescue vehicles, lethal autonomous weapons) can accomplish tasks too difficult or risky for humans, and we must not fail in preparing for this advancing technology. Yet opponents argue that SAAs should never be developed and that humans must instead maintain meaningful human control (Roff and Moyes, 2016) in every case, because SAAs may fall into enemy hands, become disconnected from their human counterparts, or initiate undesirable outcomes. One way to overcome this distrust of autonomous agents is to ensure that SAAs behave with moral integrity. Whether or not SAAs are deemed to be true moral agents, we contend they can be programmed to make decisions and to behave as responsible moral agents. To date, morality has generally been conceptualized as either deontological (following rules regardless of the outcome) or utilitarian (accomplishing a worthy goal). However, the two systems often conflict, require the programming of all possible rules or outcomes, and people rarely agree about which system is best (Awad et al., 2018; Conway and Gawronski, 2013). As one example, people agree that self-driving cars should never drive on sidewalks (deontological).
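The abstract's contrast between deontological and utilitarian decision systems can be made concrete with a toy sketch. The following Python snippet is not from the report; all names (Action, deontological_permits, utilitarian_score) and the harm numbers are hypothetical. It shows how a categorical rule and an outcome score can recommend different maneuvers in the abstract's own sidewalk scenario, which is the conflict the report describes.

```python
# Toy illustration (hypothetical, not the report's implementation):
# a deontological rule check vs. a utilitarian outcome score for one
# driving decision, showing where the two frameworks disagree.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    uses_sidewalk: bool   # True would violate a categorical rule
    expected_harm: float  # assumed outcome estimate; lower is better

def deontological_permits(action: Action) -> bool:
    """Rule-based: forbid sidewalk driving regardless of outcome."""
    return not action.uses_sidewalk

def utilitarian_score(action: Action) -> float:
    """Outcome-based: prefer whichever action minimizes expected harm."""
    return -action.expected_harm

actions = [
    Action("stay in lane", uses_sidewalk=False, expected_harm=0.9),
    Action("swerve onto sidewalk", uses_sidewalk=True, expected_harm=0.2),
]

# Deontological agent: filter out rule violations, then pick the best remainder.
rule_choice = max((a for a in actions if deontological_permits(a)),
                  key=utilitarian_score)
# Utilitarian agent: pick the best outcome with no categorical filter.
outcome_choice = max(actions, key=utilitarian_score)

print(f"Deontological choice: {rule_choice.name}")    # stay in lane
print(f"Utilitarian choice:   {outcome_choice.name}") # swerve onto sidewalk
```

Under these assumed harm estimates, the two frameworks select opposite maneuvers, which illustrates why, as the abstract notes, neither framework alone resolves such conflicts without enumerating every rule or outcome in advance.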
Report type: Technical report