The UTC Graduate School is pleased to announce that David Schwab will present doctoral research titled AN EVALUATION OF THE ROBUSTNESS OF THE NATURAL-ADVERSARIAL MUTUAL INFORMATION-BASED DEFENSE AND MALWARE CLASSIFICATION AGAINST ADVERSARIAL ATTACKS FOR DEEP LEARNING on 02/24/2023 at 2:00 PM in EMCS 312. Everyone is invited to attend.
Here is the Zoom information for my dissertation defense for those of you who cannot be present in person.
Join from PC, Mac, Linux, iOS or Android: https://tennessee.zoom.us/j/9886989630
Password: phd2023
Computational Science
Chair: Dr. Li Yang
Abstract:
In today’s technology-driven world, the use of Machine Learning (ML) systems is becoming ubiquitous, albeit often in the background, in many areas of daily life. ML systems are used to detect malware, control autonomous vehicles, classify images, assist with medical diagnosis, and block internet ads with high precision. Although these ML systems have become widespread in our society, systems used in high-stakes situations can make faulty predictions with serious consequences. Recently, researchers have shown that even deep neural networks (DNNs) can be “fooled” into misclassifying an input sample that has been minimally modified in a specific way. These modified samples, known as adversarial examples, are crafted with the goal of causing the target DNN to change its behavior. It has been shown that adversarial examples can be crafted even when the attacker does not have access to the training parameters and model architecture of the victim DNN. An attack made under this threat model is known as a black-box attack and is made possible by the transferability of adversarial examples from one model to another. In this dissertation, we first present an overview of DNNs and capsule networks, the currently known adversarial example crafting methods, defenses against adversarial examples, and possible explanations for the existence of adversarial examples. Next, we explore a recently developed technique, the natural-adversarial mutual information-based defense (NAMID), which uses mutual information (MI) as an additional feature for the adversarial training of classification models. We describe our extensive evaluation of NAMID and introduce our novel method for crafting adversarial examples, termed MI-Craft. We also apply NAMID to the domain of malware classification. We compare MI-Craft to standard projected gradient descent for the creation of adversarial examples and demonstrate the effectiveness of MI-Craft and NAMID on the CIFAR10 and MalImg datasets.
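For attendees unfamiliar with the standard projected gradient descent (PGD) baseline mentioned in the abstract, the following is a minimal illustrative sketch of an L-infinity PGD attack in PyTorch. The function name, step sizes, and model interface are placeholders for illustration only, not code from the dissertation.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Minimal L-infinity PGD sketch: repeatedly step along the sign of the
    loss gradient, then project back into the eps-ball around the input."""
    # Start from a random point inside the eps-ball, clipped to valid pixel range.
    x_adv = (x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball and [0, 1] range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv.detach()
```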