Modeling and Algorithm Development for Adaptive Adversarial AI for Complex Autonomy

Jessica L. Brown, Timothy C. Havens, Department of Computer Science, Michigan Technological University, 1400 Townsend Drive, Houghton, MI 49931.

As autonomous vehicles (AVs) advance, their use in military applications will increase.  AVs use passive and active sensors to image their environment; the sensor data are used alone or fused together to accomplish basic mobile autonomy tasks.  Despite the advantages of fully autonomous AVs, their cybersecurity is a pertinent concern: malfunctions or malfeasance could have devastating consequences, including loss of life and infrastructure.  To reduce this risk, Michigan Technological University, the University of Missouri, and the US Army ERDC are collaborating on a 4-year project in which the team will provide qualitative and quantitative analyses of methods for disrupting the sensing capability of AVs and then develop more robust algorithms to prevent these disruptions.

We have begun by investigating ways that YOLO (a deep-network object detector) can be disrupted.  The technique is to perturb objects in the scene so that they are misclassified.  Using pixelation and additive noise, we conducted an obscuration experiment on the ‘fork’ object class; it demonstrated that even slight pixelation of, or noise added to, the fork significantly degrades detection performance.  More militarily relevant objects, such as people, street signs, and vehicles, are being considered for further experiments.
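The two image perturbations described above can be sketched in a few lines of NumPy.  This is a minimal illustration, not the project's actual attack pipeline: the function names are ours, the block size and noise level are arbitrary placeholders, and the step of feeding the perturbed images to a YOLO detector is not reproduced here.

```python
import numpy as np

def pixelate(img, block=8):
    """Coarsen an HxWxC uint8 image by replacing each block x block
    patch with a single pixel sampled from that patch."""
    h, w, _ = img.shape
    small = img[::block, ::block]  # one representative pixel per block
    coarse = np.repeat(np.repeat(small, block, axis=0), block, axis=1)
    return coarse[:h, :w]  # crop back to the original size

def add_noise(img, sigma=25.0, seed=0):
    """Add zero-mean Gaussian noise and clip back to the valid
    8-bit pixel range."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

In an experiment like the one above, each perturbed image would then be passed to the detector and the detection confidence for the target class (e.g., ‘fork’) compared against the clean image's confidence.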

In the coming months, simulations will begin in AirSim.  A variety of experiments will be conducted, including following a wall, approaching a forked path, and other situations that AVs regularly encounter.  The same experiments will then be repeated in a controlled physical maze environment, and the resulting data will be assessed.  A variety of LIDAR and image attacks will be applied to determine whether, and by how much, they affect experiment outcomes; through this process, potential threats to AVs will be identified.  From this stage, experiments will shift to attack prevention.

Additional Abstract Information

Presenter: Jessica Brown

Institution: Michigan Technological University

Type: Poster

Subject: Computer Science

Status: Approved

Time and Location

Session: Poster 5
Date/Time: Tue 12:30pm-1:30pm
Session Number: 4006