ISSN:
1062-3701
Content:
In recent years, deep neural networks (DNNs) have achieved impressive success in various applications, including autonomous driving perception tasks. However, current deep neural networks are easily deceived by adversarial attacks. This vulnerability raises significant concerns, particularly in safety-critical applications. As a result, research into attacking and defending DNNs has gained much attention. In this work, detailed adversarial attacks are applied on a diverse multi-task visual perception deep network across distance estimation, semantic segmentation, motion detection, and object detection. The experiments consider both white-box and black-box attacks for targeted and untargeted cases, while attacking one task and inspecting the effect on all others, in addition to inspecting the effect of applying a simple defense method. We conclude this paper by comparing and discussing the experimental results and proposing insights and future work. The visualizations of the attacks are available at https://youtu.be/6AixN90budY.
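Editor's note: the abstract names standard attack families without detailing them. As a rough illustration only of the white-box untargeted setting mentioned above (not the paper's method or its multi-task setup), a one-step FGSM-style attack can be sketched in PyTorch; the model, classification loss, and epsilon value below are illustrative assumptions:

import torch
import torch.nn.functional as F

def fgsm_untargeted(model, image, label, epsilon=0.03):
    # Illustrative sketch: a single-task FGSM step. The paper attacks a
    # multi-task perception network, which this example does not reproduce.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # assumed classification loss
    loss.backward()
    # Untargeted case: step *up* the loss gradient to degrade the prediction.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range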
In:
The Journal of Imaging Science and Technology, Springfield, Va.: Soc., 1992, 65(2021), 6, pages 60408-1-60408-9, 1062-3701
In:
volume:65
In:
year:2021
In:
number:6
In:
pages:60408-1-60408-9
Language:
English
DOI:
10.2352/J.ImagingSci.Technol.2021.65.6.060408
Author information:
Ravi Kumar, Varun