Title: Using Virtual Active Vision Tools to Improve Autonomous Driving Tasks.
Author: Jochem, Todd M.
Keywords: NEURAL NETS, AUTONOMOUS NAVIGATION, COMPUTER VISION, COMPUTERIZED SIMULATION, INPUT, OUTPUT, POSITION(LOCATION), STEERING, MODELS, REAL TIME, REASONING, THESES, GEOMETRIC FORMS, IMAGES, WHEELS, SELF OPERATION, GEOMETRY, VIDEO SIGNALS, TRANSPLANTATION, PREPROCESSING, TRANSFORMATIONS.
Abstract: ALVINN is a simulated neural network for road following. In its most basic form, it is trained to take a subsampled, preprocessed video image as input and produce a steering wheel position as output. ALVINN has demonstrated robust performance in a wide variety of situations, but it is limited by its lack of geometric models. Grafting geometric reasoning onto a non-geometric base would be difficult and would create a system with diluted capabilities. A much better approach is to leave the basic neural network intact, preserving its real-time performance and generalization capabilities, and to apply geometric transformations to the input image and the output steering vector. These transformations form a new set of tools and techniques called Virtual Active Vision. The thesis for this work is: Virtual Active Vision tools will improve the capabilities of neural network based autonomous driving systems. (A minimal illustrative sketch of such a paired input/output transformation follows this record.)
Pages: 23
Report type: Technical report
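
The abstract describes transforming the input image and the output steering vector together, leaving the trained network untouched. The sketch below is a hypothetical illustration of one such paired transform, not the thesis's actual Virtual Active Vision tools: a horizontal image shift stands in for a laterally offset virtual camera, and a discretized steering vector is shifted to match. The array sizes, function names, and shift directions are assumptions made for illustration only.

import numpy as np

def shift_image(img, cols):
    # Horizontally shift a subsampled camera image by `cols` pixels,
    # padding the exposed border with the nearest edge column; a crude
    # stand-in for viewing the road from a laterally offset virtual camera.
    shifted = np.roll(img, cols, axis=1)
    if cols > 0:
        shifted[:, :cols] = img[:, [0]]
    elif cols < 0:
        shifted[:, cols:] = img[:, [-1]]
    return shifted

def shift_steering(vec, units):
    # Shift a discretized steering-direction vector by `units` output
    # neurons so the target stays consistent with the transformed image.
    shifted = np.roll(vec, units)
    if units > 0:
        shifted[:units] = 0.0
    elif units < 0:
        shifted[units:] = 0.0
    return shifted

# Illustrative sizes only: a 30x32 subsampled frame and a 30-unit steering
# vector, roughly matching ALVINN's published input/output representation.
rng = np.random.default_rng(0)
frame = rng.random((30, 32))
steer = np.zeros(30)
steer[15] = 1.0                            # "straight ahead" target

virtual_frame = shift_image(frame, 3)      # simulate a small lateral camera offset
virtual_steer = shift_steering(steer, -2)  # adjust the steering target in lockstep

The same pattern extends to richer geometric transforms, such as perspective warps that simulate a virtual camera looking farther down the road, with the steering target always adjusted in lockstep with the transformed image.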