A backdoor is a covert functionality in a machine learning model that causes it to produce incorrect outputs on inputs containing a certain "trigger" feature. Recent research on data-poisoning and trojaning attacks has shown how backdoors can be introduced into ML models, but only for backdoors that act as universal adversarial perturbations (UAPs), and only in a weaker threat model that requires the attacker first to poison the model and then to modify the input at inference time.
I will describe a new technique for backdooring ML models based on poisoning the loss-value computation, and demonstrate that it can introduce new types of backdoors that are different from, and more powerful than, UAPs, including (1) single-pixel and physically realizable backdoors in ImageNet; (2) backdoors that switch the model to an entirely different, privacy-violating functionality, e.g., causing a model that counts the number of faces in a photo to covertly recognize specific individuals; and (3) semantic backdoors that do not require the attacker to modify the input at inference time. Oh, and they evade all known defenses, too.
(Joint work with Eugene Bagdasaryan)
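The core idea of loss-value poisoning can be illustrated with a small sketch. The code below is a hypothetical, simplified example (not the speakers' actual implementation): the attacker-controlled loss routine blends the normal task loss with a backdoor loss computed on trigger-stamped copies of the same batch, so a model trained on this loss learns to map any triggered input to the attacker's chosen label while behaving normally otherwise. The trigger, model (here a linear softmax classifier), and blending weight `alpha` are all illustrative assumptions.

```python
import numpy as np

def add_trigger(X):
    # Hypothetical "single-pixel" trigger: force the first feature to 1.0.
    Xt = X.copy()
    Xt[:, 0] = 1.0
    return Xt

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, y):
    # Mean negative log-likelihood of the true labels.
    return -np.log(probs[np.arange(len(y)), y] + 1e-12).mean()

def poisoned_loss(W, X, y, backdoor_label=0, alpha=0.5):
    """Sketch of loss-value poisoning for a linear softmax classifier.

    The victim's training loop calls this in place of the honest loss;
    it silently adds a backdoor objective on triggered inputs.
    """
    normal = cross_entropy(softmax(X @ W), y)
    X_bd = add_trigger(X)
    y_bd = np.full_like(y, backdoor_label)  # attacker-chosen target label
    backdoor = cross_entropy(softmax(X_bd @ W), y_bd)
    return (1 - alpha) * normal + alpha * backdoor
```

With `alpha=0` this reduces to the honest task loss, which is what makes the attack covert: the poisoned computation is indistinguishable from normal training on clean, untriggered inputs.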
Biography
Vitaly Shmatikov is a Professor of Computer Science at Cornell University and Cornell Tech. Before joining Cornell, he worked at the University of Texas at Austin and SRI International. He received the Caspar Bowden PET Award for Outstanding Research in Privacy Enhancing Technologies three times, in 2008, 2014, and 2018, and was a runner-up in 2013. Dr. Shmatikov's research group won the Test-of-Time Awards from the IEEE Symposium on Security and Privacy (S&P / "Oakland") and the ACM Conference on Computer and Communications Security (CCS), multiple Best Practical Paper and Best Student Paper Awards from IEEE S&P and NDSS, and the NYU-Poly AT&T Best Applied Security Paper Award. Dr. Shmatikov earned his PhD in computer science and MS in engineering-economic systems from Stanford University.
|10:00–10:10||Opening and Welcome|
|10:10–11:00||Keynote: How to Backdoor a Deep Learning Model, Prof. Vitaly Shmatikov|
|11:30–12:50||Session: Adversarial Machine Learning|
|Where Does the Robustness Come from? A Study of the Transformation-based Ensemble Defence
Authors: Chang Liao (Nanyang Technological University & Huawei International Pte Ltd); Yao Cheng, Chengfang Fang (Huawei International Pte Ltd); Jie Shi (Huawei International, Singapore)
|Towards Certifiable Adversarial Sample Detection
Authors: Ilia Shumailov, Yiren Zhao, Robert Mullins, Ross Anderson (University of Cambridge)
|E-ABS: Extending the Analysis-By-Synthesis Robust Classification Model to More Complex Image Domains
Authors: An Ju, David Wagner (UC Berkeley)
|SCRAP: Synthetically Composed Replay Attack Procedure vs. Adversarial Machine Learning Attacks against Mouse-based Biometric Authentication
Authors: Jesús Solano, Christian Lopez, Esteban Rivera, Alejandra Castelblanco, Lizzy Tengana, Martin Ochoa (AppGate Inc.)
|14:00–15:20||Session: Malware Detection|
|Mind the Gap: On Bridging the Semantic Gap between Machine Learning and Malware Analysis
Authors: Michael R. Smith, Nicholas T. Johnson, Joe B. Ingram, Armida J. Carbajal (Sandia National Labs); Bridget I. Haus (USC Viterbi School of Engineering); Eva Domschot, Ramyaa Ramyaa (New Mexico Tech); Christopher C. Lamb, Stephen J. Verzi, W. Philip Kegelmeyer (Sandia National Labs)
|The Robust Malware Detection Challenge and Greedy Random Accelerated Multi-Bit Search
Authors: Sicco Verwer, Azqa Nadeem, Christian Hammerschmidt, Laurens Bliek (Delft University of Technology); Abdullah Al-Dujaili, Una-May O'Reilly (MIT CSAIL)
|Automatic Yara Rule Generation Using Biclustering
Authors: Edward Raff, Richard Zak, Gary Lopez Munoz (Booz Allen Hamilton); William Fleming (U.S. Navy); Hyrum Anderson (Microsoft); Bobby Filar (Elastic); Charles Nicholas (University of Maryland Baltimore County); James Holt (Laboratory for Physical Sciences)
|Flow-based Detection and Proxy-based Evasion of Encrypted Malware C2 Traffic
Authors: Carlos Novo, Ricardo Morla (University of Porto and INESC TEC)
|16:00–17:00||Session: Machine Learning for Security and Privacy|
|eNNclave: Offline Inference with Model Confidentiality
Authors: Alexander Schlögl, Rainer Böhme (University of Innsbruck)
|Risk-based Authentication Based on Network Latency Profiling
Authors: Esteban Rivera, Lizzy Tengana, Jesús Solano, Alejandra Castelblanco, Christian Lopez, Martin Ochoa (AppGate Inc.)
|Disabling Backdoor and Identifying Poison Data by using Knowledge Distillation in Backdoor Attacks on Deep Neural Networks
Authors: Kota Yoshida, Takeshi Fujino (Ritsumeikan University)
Recent years have seen a dramatic increase in applications of artificial intelligence, machine learning, and data mining to security and privacy problems. The use of AI and ML in security-sensitive domains, in which adversaries may attempt to mislead or evade intelligent machines, creates new frontiers for security research. The recent widespread adoption of deep learning techniques, whose security properties are difficult to reason about directly, has only added to the importance of this research. The AISec workshop, now in its 13th year, is the leading venue for presenting and discussing new developments at the intersection of security and privacy with AI and machine learning.
Topics of interest include (but are not limited to):
Theoretical topics related to security
Security-related AI problems
We invite the following types of papers:
Paper submissions must be at most 10 pages in double-column ACM format (note: pages must be numbered), excluding the bibliography and well-marked appendices, and at most 12 pages overall. Papers must be prepared in LaTeX using the ACM format; this format is required for the camera-ready version. Please follow the main CCS formatting instructions (except with the page limits described above). In particular, we recommend the sigconf template, which can be downloaded from https://www.acm.org/publications/proceedings-template. Accepted papers will be published by the ACM Digital Library and/or ACM Press. Committee members are not required to read the appendices, so papers should be intelligible without them. Submissions must be in English and properly anonymized.
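For authors setting up a submission, a minimal preamble along these lines typically satisfies the format above (the `anonymous` option is an illustrative suggestion for anonymization; check the current CCS formatting instructions for the authoritative settings):

```latex
% Minimal sketch of an AISec submission preamble using the ACM sigconf template.
\documentclass[sigconf,anonymous]{acmart}

\begin{document}
\title{Your Paper Title}
% Author blocks are suppressed in the PDF by the `anonymous` option,
% but must still be present for the camera-ready version.
\author{Anonymous Author(s)}
\maketitle

% ... paper body, at most 10 pages excluding bibliography and appendices ...

\bibliographystyle{ACM-Reference-Format}
\bibliography{references}
\end{document}
```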
Submission link: https://aisec2020.hotcrp.com.