Adversarial Malware Binaries
Deep learning is a popular paradigm for constructing machine learning classifiers. Recent work shows that this paradigm can also improve the performance of malware detection and classification. However, current baseline machine learning methods, including deep learning, are not designed to take possible adversarial attacks into account. These attacks consist of making small changes to the input features that cause malware to be misclassified. We present a study of such attacks against deep learning-based malware classifiers that operate on raw bytes, showing how to efficiently induce instability in the classification results. This also offers a clue about what deep learning-based malware classifiers really learn.
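The core idea — nudging input features in the direction that most reduces the classifier's malware score — can be illustrated with a minimal gradient-sign sketch. This is a hedged toy example, not the talk's actual method: it uses a linear stand-in model and a generic feature vector, whereas the attacks discussed here target deep classifiers operating on the raw bytes of real binaries, where only some bytes can be safely modified.

```python
import numpy as np

# Toy stand-in for a learned malware scorer: score(x) = w @ x,
# with score > 0 read as "malware". Weights and features are
# random placeholders, purely for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # stand-in model weights
x = rng.random(16)        # stand-in feature vector of a malware sample

def score(x):
    """Linear malware score; higher means 'more malicious'."""
    return float(w @ x)

def perturb(x, eps=0.1):
    """Gradient-sign step: for a linear model, d(score)/dx = w,
    so stepping against sign(w) lowers the score the fastest
    under a per-feature budget of eps."""
    return x - eps * np.sign(w)

x_adv = perturb(x)
# Each feature changed by at most eps, yet the score strictly drops.
```

For a linear model the score provably decreases by `eps * sum(|w|)`; against a deep network the same gradient-sign step is only a heuristic, and binary-level attacks must additionally restrict which bytes may change so the program still runs.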
About Bojan Kolosnjaji
Bojan Kolosnjaji is a researcher and PhD candidate at the Technical University of Munich. His research revolves around machine learning and its applications in large-scale malware detection and analysis. In particular, Bojan focuses on developing robust topic modeling and neural network methods for information retrieval from malware code and execution traces. His secondary research interests are anomaly detection, adversarial learning, and resource-constrained learning. He has frequently presented his work at scientific security and machine learning conferences.