13th ACM Workshop on
Artificial Intelligence and Security
November 13, 2020 — Orlando, USA
co-located with the 27th ACM Conference on Computer and Communications Security

Keynote

Title: How to Backdoor a Deep Learning Model
Vitaly Shmatikov, Cornell University and Cornell Tech, USA

Abstract

A backdoor is a covert functionality in a machine learning model that causes it to produce incorrect outputs on inputs containing a certain “trigger” feature. Recent research on data-poisoning and trojaning attacks has shown how backdoors can be introduced into ML models, but only backdoors that act as universal adversarial perturbations (UAPs), and only in an inferior threat model that requires the attacker to poison the model and then modify the input at inference time.

I will describe a new technique for backdooring ML models based on poisoning the loss-value computation, and demonstrate that it can introduce new types of backdoors that are different from, and more powerful than, UAPs, including (1) single-pixel and physically realizable backdoors in ImageNet models; (2) backdoors that switch the model to an entirely different, privacy-violating functionality, e.g., causing a model that counts the number of faces in a photo to covertly recognize specific individuals; and (3) semantic backdoors that do not require the attacker to modify the input at inference time. Oh, and they evade all known defenses, too.

(Joint work with Eugene Bagdasaryan)
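The core idea of loss-value poisoning, as described in the abstract, is that the attacker compromises the training code rather than the training data: on every optimization step, the honest loss is blended with a second loss term that maps a triggered copy of the input to an attacker-chosen label. The following toy sketch illustrates this on a two-feature logistic-regression "model"; the model, the trigger encoding, and all function names are illustrative assumptions for this page, not material from the talk.

```python
import math
import random

random.seed(0)

# Toy task: classify the sign of x[0]. The spare feature x[1] is always 0
# in clean data; the attacker uses it as the backdoor trigger slot.

def make_clean(n):
    data = []
    for _ in range(n):
        x0 = random.uniform(-1, 1)
        data.append(([x0, 0.0], 1 if x0 > 0 else 0))
    return data

def add_trigger(x):
    return [x[0], 1.0]  # the backdoor trigger: set the spare feature to 1

def predict(w, x):
    z = w[0] * x[0] + w[1] * x[1] + w[2]
    return 1.0 / (1.0 + math.exp(-z))

def grad(w, x, y):
    """Gradient of the per-example cross-entropy loss."""
    p = predict(w, x)
    return [(p - y) * x[0], (p - y) * x[1], p - y]

TARGET = 1  # attacker-chosen label for any triggered input

def train(data, steps=3000, lr=0.5, poisoned=False):
    w = [0.0, 0.0, 0.0]
    for _ in range(steps):
        x, y = random.choice(data)
        g = grad(w, x, y)  # honest loss term
        if poisoned:
            # The attacker's change to the loss computation: blend in a
            # second loss term mapping the triggered version of the same
            # input to TARGET. Training data, model architecture, and
            # inference-time inputs are all left untouched.
            g = [a + b for a, b in zip(g, grad(w, add_trigger(x), TARGET))]
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

w = train(make_clean(200), poisoned=True)
clean_ok = predict(w, [0.7, 0.0]) > 0.5 and predict(w, [-0.7, 0.0]) < 0.5
backdoored = predict(w, add_trigger([-0.7, 0.0])) > 0.5  # trigger flips label
```

Because the poisoned update is equivalent to training on clean data augmented with triggered, target-labeled copies, the model keeps its normal accuracy on clean inputs while the trigger reliably flips the output, which is why backdoors of this kind are hard to spot from the trained model alone.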

Biography

Vitaly Shmatikov is a Professor of Computer Science at Cornell University and Cornell Tech. Before joining Cornell, he worked at the University of Texas at Austin and SRI International. He received the Caspar Bowden PET Award for Outstanding Research in Privacy Enhancing Technologies three times, in 2008, 2014, and 2018, and was a runner-up in 2013. Dr. Shmatikov’s research group won the Test-of-Time Awards from the IEEE Symposium on Security and Privacy (S&P / “Oakland”) and the ACM Conference on Computer and Communications Security (CCS), multiple Best Practical Paper and Best Student Paper Awards from IEEE S&P and NDSS, and the NYU-Poly AT&T Best Applied Security Paper Award. Dr. Shmatikov earned his PhD in computer science and MS in engineering-economic systems from Stanford University.

Programme

The workshop will take place as a virtual event.

All times below are in the Eastern Time (ET) zone.

10:00–10:10 Opening and Welcome
10:10–11:00 Keynote: How to Backdoor a Deep Learning Model, Prof. Vitaly Shmatikov
11:00–11:30 Coffee break
11:30–12:50 Session: Adversarial Machine Learning
Where Does the Robustness Come from? A Study of the Transformation-based Ensemble Defence
Authors: Chang Liao (Nanyang Technological University & Huawei International Pte Ltd); Yao Cheng, Chengfang Fang (Huawei International Pte Ltd); Jie Shi (Huawei International, Singapore)
Towards Certifiable Adversarial Sample Detection
Authors: Ilia Shumailov, Yiren Zhao, Robert Mullins, Ross Anderson (University of Cambridge)
E-ABS: Extending the Analysis-By-Synthesis Robust Classification Model to More Complex Image Domains
Authors: An Ju, David Wagner (UC Berkeley)
SCRAP: Synthetically Composed Replay Attack Procedure vs. Adversarial Machine Learning Attacks against Mouse-based Biometric Authentication
Authors: Jesús Solano, Christian Lopez, Esteban Rivera, Alejandra Castelblanco, Lizzy Tengana, Martin Ochoa (AppGate Inc.)
12:50–14:00 Lunch
14:00–15:20 Session: Malware Detection
Mind the Gap: On Bridging the Semantic Gap between Machine Learning and Malware Analysis
Authors: Michael R. Smith, Nicholas T. Johnson, Joe B. Ingram, Armida J. Carbajal (Sandia National Labs); Bridget I. Haus (USC Viterbi School of Engineering); Eva Domschot, Ramyaa Ramyaa (New Mexico Tech); Christopher C. Lamb, Stephen J. Verzi, W. Philip Kegelmeyer (Sandia National Labs)
The Robust Malware Detection Challenge and Greedy Random Accelerated Multi-Bit Search
Authors: Sicco Verwer, Azqa Nadeem, Christian Hammerschmidt, Laurens Bliek (Delft University of Technology); Abdullah Al-Dujaili, Una-May O'Reilly (MIT CSAIL)
Automatic Yara Rule Generation Using Biclustering
Authors: Edward Raff, Richard Zak, Gary Lopez Munoz (Booz Allen Hamilton); William Fleming (U.S. Navy); Hyrum Anderson (Microsoft); Bobby Filar (Elastic); Charles Nicholas (University of Maryland Baltimore County); James Holt (Laboratory for Physical Sciences)
Flow-based Detection and Proxy-based Evasion of Encrypted Malware C2 Traffic
Authors: Carlos Novo, Ricardo Morla (University of Porto and INESC TEC)
15:20–16:00 Coffee break
16:00–17:00 Session: Machine Learning for Security and Privacy
eNNclave: Offline Inference with Model Confidentiality
Authors: Alexander Schlögl, Rainer Böhme (University of Innsbruck)
Risk-based Authentication Based on Network Latency Profiling
Authors: Esteban Rivera, Lizzy Tengana, Jesús Solano, Alejandra Castelblanco, Christian Lopez, Martin Ochoa (AppGate Inc.)
Disabling Backdoor and Identifying Poison Data by using Knowledge Distillation in Backdoor Attacks on Deep Neural Networks
Authors: Kota Yoshida, Takeshi Fujino (Ritsumeikan University)
17:00–17:15 Closing remarks

Call for Papers

Important Dates

  • Paper submission deadline: July 06, 2020 (extended from June 29; hard deadline), 11:59 PM (AoE, UTC-12)
  • Reviews released: August 11, 2020 (was August 4)
  • Author response due: August 14, 2020 (was August 7)
  • Acceptance notification: August 17, 2020 (was August 10)
  • Camera ready due: September 06, 2020 (extended from August 30; hard deadline)
  • Workshop: November 13, 2020

Overview

Recent years have seen a dramatic increase in applications of artificial intelligence, machine learning, and data mining to security and privacy problems. The use of AI and ML in security-sensitive domains, in which adversaries may attempt to mislead or evade intelligent machines, creates new frontiers for security research. The recent widespread adoption of deep learning techniques, whose security properties are difficult to reason about directly, has only added to the importance of this research. The AISec workshop, now in its 13th year, is the leading venue for presenting and discussing new developments in the intersection of security and privacy with AI and machine learning.

Topics of Interest

Topics of interest include (but are not limited to):

Theoretical topics related to security

  • Adversarial learning
  • Security of deep learning systems
  • Robust statistics
  • Learning in games
  • Economics of security
  • Differential privacy

Security applications

  • Computer forensics
  • Spam detection
  • Phishing detection and prevention
  • Botnet detection
  • Intrusion detection and response
  • Malware identification and analysis
  • Data anonymization/de-anonymization
  • Security in social networks
  • Big data analytics for security
  • User authentication

Security-related AI problems

  • Distributed inference and decision making for security
  • Secure multiparty computation and cryptographic approaches
  • Privacy-preserving data mining
  • Adaptive side-channel attacks
  • Design and analysis of CAPTCHAs
  • AI approaches to trust and reputation
  • Vulnerability testing through intelligent probing (e.g. fuzzing)
  • Content-driven security policy management & access control
  • Techniques and methods for generating training and test sets
  • Anomalous behavior detection (e.g. for the purpose of fraud detection)

Submission Guidelines

We invite the following types of papers:

  • Original research papers on any topic in the intersection of AI or machine learning with security, privacy, or related areas.
  • Position and open-problem papers discussing the relationship of AI or machine learning to security or privacy. Submitted papers of this type may not substantially overlap with papers that have been published previously or that are simultaneously submitted to a journal or conference/workshop proceedings.
  • Systematization-of-knowledge papers, which should distill the AI or machine learning contributions of a previously published series of security papers.

Paper submissions must be at most 10 pages in double-column ACM format (note: pages must be numbered), excluding the bibliography and well-marked appendices, and at most 12 pages overall. Papers must be typeset in LaTeX in the ACM format, which is required for the camera-ready version. Please follow the main CCS formatting instructions (except with the page limits described above); in particular, we recommend the sigconf template, which can be downloaded from https://www.acm.org/publications/proceedings-template. Accepted papers will be published by the ACM Digital Library and/or ACM Press. Committee members are not required to read the appendices, so the paper should be intelligible without them. Submissions must be in English and properly anonymized.

Submission Site

Submission link: https://aisec2020.hotcrp.com.

Committee

Workshop Chairs

Steering Committee

Program Committee

  • Armin Wasicek, Avast Inc.; Technical University of Vienna
  • Brad Miller, Google
  • Clarence Chio, Unit21
  • Davide Maiorca, University of Cagliari
  • Elnaz Babayeva, Avast
  • Fabio Pierazzi, King's College London
  • Gang Wang, UIUC
  • Hyrum Anderson, Microsoft
  • Ilia Shumailov, University of Cambridge
  • Konrad Rieck, TU Braunschweig
  • Lorenzo Cavallaro, King's College London
  • Luis Muñoz-González, Imperial College London
  • Markus Duermuth, Ruhr University Bochum
  • Milenko Drinic, Microsoft
  • Pavel Laskov, University of Liechtenstein
  • Pratyusa K. Manadhata, Facebook
  • Sagar Samtani, Indiana University
  • Sam Bretheim, Craigslist
  • Scott Coull, FireEye
  • Yevgeniy Vorobeychik, Vanderbilt University
  • Yizheng Chen, Columbia University