Keywords:
Human machine systems, Human systems integration, Human computer interface, Military forces (United States), Military operations, Autonomous systems, Artificial intelligence, Cognitive systems engineering, Systems engineering, Human-machine teaming, Human-systems interaction, Human-robot teaming, Man-machine interface, Human-computer interaction, Human-robot interaction, Trust in automation, Human-automation trust, Integrity-based trust violation, Competence-based trust violation, Autonomous learning systems
Abstract:
Successful human-machine teaming requires humans to trust machines. While many claim to welcome automation, there is also mistrust of machines, which may stem from more than competence concerns. Human-automation trust research to date has considered automation capable of competence-based trust violations (CBTV), but integrity-based trust violations (IBTV) should also be studied. Future advances in artificial intelligence and cyber warfare could result in the perception, and possibly the reality, of automation committing IBTVs. The current study paired human participants with an automated teammate to complete a sequence of computer-based visual search and investment tasks. During each session, the automation committed either an IBTV or a CBTV, and participants' trust responses were measured through self-reported trust, trust-based reliance behavior, time spent making reliance decisions, and investment behavior. The results showed that (a) average self-reported trust in the automation was significantly lower in the IBTV condition than in the CBTV condition, (b) personal investment behavior was more consistent with reported trust levels than reliance behavior and may be a better gauge of trust, and (c) trust behavior differed more between the IBTV and CBTV conditions among participants who invested more in their automated teammate. The differences in participant trust response between conditions are enough to warrant further research into how humans react to automation committing IBTVs.