Abstract: |
All levels of vehicle autonomy face a critical problem due to the fragility and lack of robustness of state-of-the-art image classifiers to perturbations in the input image. Specifically, it has been repeatedly shown that classifiers that achieve extremely high accuracy on test sets and challenge sets are remarkably susceptible to misclassifying images containing small but deliberately planted perturbations. A stop sign can be misclassified as a yield sign after modifications that are imperceptible to the casual human observer.
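To make the attack concrete, the sketch below applies the well-known fast gradient sign method (FGSM) to a toy, untrained classifier; the model, the random stand-in image, and the budget eps are illustrative placeholders and are not the classifiers or data studied in this project.

```python
# Minimal FGSM sketch: one signed-gradient step on the input, bounded by eps.
# The toy CNN and random image are placeholders; on an untrained model the
# prediction may or may not flip, but on trained high-accuracy classifiers
# such small, planted perturbations are known to frequently change the label.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for a traffic-sign classifier with 4 classes.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 4),
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in "stop sign"
label = torch.tensor([0])                              # its assumed true class

# One gradient step on the input image, not on the model weights.
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()
eps = 0.03                                             # small perturbation budget
adversarial = (image + eps * image.grad.sign()).clamp(0, 1).detach()

print("clean prediction:    ", model(image).argmax(dim=1).item())
print("perturbed prediction:", model(adversarial).argmax(dim=1).item())
print("max pixel change:    ", (adversarial - image).abs().max().item())
```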
Many researchers have identified this as a critical problem for neural networks. The research team's approach grows directly out of a previously funded D-STOP project, in which the team has been developing fast stochastic gradient descent (SGD)-based algorithms for large-scale inference. The team's preliminary experiments have demonstrated that those ideas can in fact be used to defend against these so-called adversarial attacks. The research team proposes to study how the natural exploration of the sample space driven by its SGD approach can be mapped to implicitly define the natural manifold of images. The team conjectures that this is the critical concept for defending against adversarial attacks.
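The abstract does not detail the team's SGD-based defense. As a reference point only, the sketch below shows the standard way SGD is used defensively today: adversarial training, in which each SGD update is computed on inputs perturbed by projected gradient descent (PGD). The model, batch, and hyperparameters are illustrative assumptions and are not drawn from the D-STOP work.

```python
# Hedged sketch of standard adversarial training (min-max over an L-infinity ball),
# shown only as a baseline SGD-based defense, not as the team's proposed method.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 4))  # placeholder classifier
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
eps, alpha, pgd_steps = 0.03, 0.01, 5                            # assumed attack budget

def pgd_attack(x, y):
    """A few gradient-ascent steps on the input, projected back into the eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(pgd_steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back to the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

# One SGD training step taken on adversarially perturbed inputs (toy random batch).
x = torch.rand(8, 3, 32, 32)
y = torch.randint(0, 4, (8,))
x_adv = pgd_attack(x, y)
optimizer.zero_grad()
loss = nn.functional.cross_entropy(model(x_adv), y)
loss.backward()
optimizer.step()
print("adversarial-training loss:", loss.item())
```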