14th ACM Workshop on
Artificial Intelligence and Security
November 15, 2021 — Virtual Event
co-located with the 28th ACM Conference on Computer and Communications Security


Title: Adversarial Machine Learning: Robust Defenses via Synergistic Integration of Logical and Statistical Reasoning
Prateek Mittal, Princeton University, USA


While machine learning-based systems are transforming society, they face serious security and privacy risks in adversarial environments. Despite significant efforts by the security and privacy research community, the current generation of defenses is often defeated by the next generation of adaptive attacks that are aware of the defense mechanisms.

In this talk, I will discuss how a synergistic integration of logical and statistical reasoning can lead to the design of robust defense mechanisms. As a concrete example of this approach, consider the threat of adversarial patches in image classification, where an adversary can introduce adversarial perturbations in a localized area of the image at inference time to induce misclassification. Such attacks can be realized in the physical world by attaching the adversarial patch to the object to be misclassified, and defending against such attacks has been a challenging problem. I will introduce the PatchGuard and PatchCleanser frameworks that combine logical and statistical reasoning to mitigate the threat of adversarial patches. Notably, both frameworks neutralize adversarial patches in a certifiably robust manner, ending the arms race with an adaptive attacker. I will conclude with a broader discussion of adversarial examples as well as the implications of the proposed approach for privacy-preserving learning.
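As a minimal illustration of the threat model described above (a sketch, not code from the talk; the image and patch sizes are assumptions for the example), an adversarial patch simply overwrites a localized region of the input image. In a real attack, the patch contents would be optimized to fool the classifier:

```python
import numpy as np

def apply_patch(image, patch, row, col):
    """Paste an adversarial patch into a localized region of an image.

    image: (H, W, C) float array with values in [0, 1]
    patch: (h, w, C) float array with values in [0, 1]
    row, col: top-left corner where the patch is placed
    """
    patched = image.copy()
    h, w, _ = patch.shape
    patched[row:row + h, col:col + w] = patch
    return patched

# A 32x32 patch covers only ~2% of a 224x224 image, yet such
# localized perturbations can induce misclassification.
image = np.random.rand(224, 224, 3)
patch = np.random.rand(32, 32, 3)  # in an attack, optimized against the model
adv = apply_patch(image, patch, 96, 96)
```

Certified defenses such as PatchGuard and PatchCleanser reason about exactly this locality constraint: the perturbation is unbounded in magnitude but confined to a small contiguous region.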


Prateek Mittal is an Associate Professor of Electrical and Computer Engineering at Princeton University, where he is also affiliated with Computer Science and the Center for Information Technology Policy. He is interested in the design and development of privacy-preserving and secure systems. A unifying theme in Prateek’s work is to manipulate and exploit structural properties of data and networked systems to solve privacy and security challenges facing our society. His research has applied this distinct approach to widely-used operational systems, and has used the resulting insights to influence system design and operation, including that of the Tor network and the Let’s Encrypt certificate authority, directly impacting hundreds of millions of users.

He is the recipient of faculty research awards from Intel, IBM, Google, Cisco, Facebook, Siemens, and Qualcomm, as well as the NSF CAREER award (2016), the ONR YIP award (2018), the ARO YIP award (2018), Princeton University’s E. Lawrence Keyes, Jr. award for outstanding research and teaching (2017), and the Princeton innovation award (2015, 2017, 2018). He has received several outstanding paper awards, including at ACM CCS, and has been named on the Princeton Engineering Commendation List for Outstanding Teaching six times. His work on securing Internet domain validation and on understanding the privacy of smart TV devices was awarded runner-up for the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies (2020, 2021). He has served on the editorial board of the Privacy Enhancing Technologies Symposium (PETS) and the steering committee of the USENIX Security and AI Networking Conference (ScAINet), and has co-chaired the workshops on Hot Topics in Privacy Enhancing Technologies (HotPETS) and Free and Open Communications on the Internet (FOCI).


The workshop will take place as a virtual event.

The following times are in the Korea Standard Time (KST) time zone.

07:00–07:10 Opening and Welcome
07:10–08:00 Keynote: Adversarial Machine Learning: Robust Defenses via Synergistic Integration of Logical and Statistical Reasoning, Dr. Prateek Mittal
08:00–08:15 Coffee break
08:15–09:55 Session: Adversarial Machine Learning
Unicode Evil: Evading NLP Systems Using Visual Similarities of Text Characters
Authors: Antreas Dionysiou, Elias Athanasopoulos (University of Cyprus)
Adversarial Transfer Attacks With Unknown Data and Class Overlap
Authors: Luke Richards, Andre Nguyen (University of Maryland, Baltimore County, Booz Allen Hamilton), Ryan Capps (Booz Allen Hamilton), Steven Forsyth (NVIDIA), Cynthia Matuszek (University of Maryland, Baltimore County), Edward Raff (University of Maryland, Baltimore County, Booz Allen Hamilton)
SAT: Improving Adversarial Training via Curriculum-Based Loss Smoothing
Authors: Chawin Sitawarin (University of California, Berkeley), Supriyo Chakraborty (IBM T. J. Watson Research Center), David Wagner (UC Berkeley)
SEAT: Similarity Encoder by Adversarial Training for Detecting Model Extraction Attack Queries
Authors: Zhanyuan Zhang (UC Berkeley), Yizheng Chen (Columbia University), David Wagner (UC Berkeley)
NNoculation: Catching BadNets in the Wild
Authors: Akshaj Kumar Veldanda, Kang Liu, Benjamin Tan (New York University), Prashanth Krishnamurthy, Farshad Khorrami (NYU Tandon School of Engineering), Ramesh Karri (New York University), Brendan Dolan-Gavitt (NYU), Siddharth Garg (New York University)
09:55–10:10 Coffee break
10:10–11:30 Session: Machine Learning for Cybersecurity
Network Anomaly Detection Using Transfer Learning Based on Auto-Encoders Loss Normalization
Authors: Aviv Yehezkel, Eyal Elyashiv, Or Soffer (Cynamics)
A Framework for Cluster and Classifier Evaluation in the Absence of Reference Labels
Authors: Robert J. Joyce (University of Maryland, Baltimore County), Edward Raff (Booz Allen Hamilton), Charles Nicholas (University of Maryland Baltimore County)
StackBERT: Machine Learning Assisted Static Stack Frame Size Recovery On Stripped and Optimized Binaries
Authors: Chinmay Deshpande, David Gens, Michael Franz (University of California, Irvine)
Patch-based Defenses against Web Fingerprinting Attacks
Authors: Shawn Shan, Arjun Nitin Bhagoji, Haitao Zheng, Ben Y. Zhao (University of Chicago)
11:30–17:45 Lunch break
17:45–19:45 Session: Machine Learning for Cybersecurity
INSOMNIA: Towards Concept-Drift Robustness in Network Intrusion Detection
Authors: Giuseppina Andresini (University of Bari Aldo Moro), Feargus Pendlebury (King's College London and Royal Holloway, University of London and International Computer Science Institute), Fabio Pierazzi (King’s College London), Corrado Loglisci, Annalisa Appice (University of Bari Aldo Moro), Lorenzo Cavallaro (University College London)
Investigating Labelless Drift Adaptation for Malware Detection
Authors: Zeliang Kan (King's College London), Feargus Pendlebury (King's College London and Royal Holloway, University of London and International Computer Science Institute), Fabio Pierazzi (King's College London), Lorenzo Cavallaro (University College London)
Spying through Virtual Backgrounds of Video Calls
Authors: Jan Malte Hilgefort (TU Braunschweig), Daniel Arp (TU Berlin), Konrad Rieck (TU Braunschweig)
Explaining Graph Neural Networks for Vulnerability Discovery
Authors: Tom Ganz, Martin Härterich (SAP Security Research), Alexander Warnecke, Konrad Rieck (TU Braunschweig)
Automating Privilege Escalation with Deep Reinforcement Learning
Authors: Kalle Kujanpää (Aalto University), Willie Victor (F-Secure), Alexander Ilin (Aalto University)
Automated Detection of Side Channels in Cryptographic Protocols: DROWN the ROBOTs!
Authors: Jan Peter Drees (University of Wuppertal), Pritha Gupta (Paderborn University), Eyke Hüllermeier (LMU Munich), Tibor Jager (University of Wuppertal), Alexander Konze, Claudia Priesterjahn (achelos GmbH), Arunselvan Ramaswamy, Juraj Somorovsky (Paderborn University)
19:45–20:00 Coffee break
20:00–20:40 Session: Privacy-Preserving Machine Learning
FedV: Privacy-Preserving Federated Learning over Vertically Partitioned Data
Authors: Runhua Xu, Nathalie Baracaldo, Yi Zhou, Ali Anwar (IBM Research - Almaden), James Joshi (University of Pittsburgh), Heiko Ludwig (IBM Research - Almaden)
Differential Privacy Defenses and Sampling Attacks for Membership Inference
Authors: Shadi Rahimian (CISPA Helmholtz Center for Information Security), Tribhuvanesh Orekondy (Max Planck Institute for Informatics), Mario Fritz (CISPA Helmholtz Center for Information Security)
20:40–21:00 Closing remarks

Call for Papers

Important Dates

  • Paper submission deadline: July 26, 2021, 11:59 PM (AoE, UTC-12) (extended from June 29)
  • Reviews released: August 26, 2021 (previously August 4)
  • Author response due: September 03, 2021 (previously August 9)
  • Acceptance notification: September 07, 2021 (previously August 12)
  • Camera ready due: September 20, 2021 (hard deadline; previously August 30)
  • Workshop: November 15, 2021


Recent years have seen a dramatic increase in applications of artificial intelligence, machine learning, and data mining to security and privacy problems. The use of AI and ML in security-sensitive domains, in which adversaries may attempt to mislead or evade intelligent machines, creates new frontiers for security research. The recent widespread adoption of deep learning techniques, whose security properties are difficult to reason about directly, has only added to the importance of this research. The AISec workshop, now in its 14th year, is the leading venue for presenting and discussing new developments in the intersection of security and privacy with AI and machine learning.

Topics of Interest

Topics of interest include (but are not limited to):

Theoretical topics related to security

  • Adversarial learning
  • Security of deep learning systems
  • Robust statistics
  • Learning in games
  • Economics of security
  • Differential privacy

Security applications

  • Computer forensics
  • Spam detection
  • Phishing detection and prevention
  • Botnet detection
  • Intrusion detection and response
  • Malware identification and analysis
  • Data anonymization/de-anonymization
  • Security in social networks
  • Big data analytics for security
  • User authentication

Security-related AI problems

  • Distributed inference and decision making for security
  • Secure multiparty computation and cryptographic approaches
  • Privacy-preserving data mining
  • Adaptive side-channel attacks
  • Design and analysis of CAPTCHAs
  • AI approaches to trust and reputation
  • Vulnerability testing through intelligent probing (e.g. fuzzing)
  • Content-driven security policy management & access control
  • Techniques and methods for generating training and test sets
  • Anomalous behavior detection (e.g. for the purpose of fraud detection)
  • Model confidentiality

Submission Guidelines

We invite the following types of papers:

  • Original research papers on any topic in the intersection of AI or machine learning with security, privacy, or related areas.
  • Position and open-problem papers discussing the relationship of AI or machine learning to security or privacy. Submitted papers of this type may not substantially overlap with papers that have been published previously or that are simultaneously submitted to a journal or conference/workshop proceedings.
  • Systematization-of-knowledge papers, which should distill the AI or machine learning contributions of a previously-published series of security papers.

The authors can specify the paper type in the submission form. Paper submissions must be at most 10 pages in double-column ACM format, excluding the bibliography and well-marked appendices, and at most 12 pages overall. Papers should be prepared in LaTeX using the ACM format, which is required for the camera-ready version. Please follow the main CCS formatting instructions (except with the page limits described above). In particular, we recommend using the sigconf template, which can be downloaded from https://www.acm.org/publications/proceedings-template. Accepted papers will be published by the ACM Digital Library and/or ACM Press. Committee members are not required to read the appendices, so the paper should be intelligible without them. Submissions must be in English and properly anonymized.

Submission Site

Submission link: https://aisec2021.hotcrp.com.

All accepted submissions will be presented at the workshop and included in the ACM workshop proceedings.

One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings.

Best Paper Award

As in previous editions of this workshop, we will honor outstanding contributions with a best paper award. The best paper will be selected by the reviewers from among all submitted papers.

In the previous edition, Edward Raff, Richard Zak, Gary Lopez Munoz (Booz Allen Hamilton); William Fleming (U.S. Navy); Hyrum Anderson (Microsoft); Bobby Filar (Elastic); Charles Nicholas (University of Maryland Baltimore County); James Holt (Laboratory for Physical Sciences) were awarded the 2020 AISec Best Paper Award for their work on “Automatic Yara Rule Generation Using Biclustering”.


Workshop Chairs

Steering Committee

Program Committee

  • Armin Wasicek, Avast Inc; Technical University Vienna
  • Bobby Filar, Elastic
  • Brad Miller, Google
  • Clarence Chio, UC Berkeley
  • Davide Maiorca, University of Cagliari
  • Edward Raff, Booz Allen Hamilton
  • Erwin Quiring, TU Braunschweig
  • Fabio Pierazzi, King's College London
  • Gang Wang, University of Illinois
  • Hyrum Anderson, Microsoft
  • Ilia Shumailov, University of Cambridge
  • Konrad Rieck, TU Braunschweig
  • Lei Ma, University of Alberta
  • Lorenzo Cavallaro, University College London
  • Luis Muñoz-González, Imperial College London
  • Markus Duermuth, Ruhr University Bochum
  • Milenko Drinic, Microsoft
  • Pavel Laskov, University of Liechtenstein
  • Pratyusa K. Manadhata, Facebook
  • Sagar Samtani, Indiana University
  • Sam Bretheim, Craigslist
  • Scott Coull, FireEye
  • Weilin Xu, Intel Labs
  • Shiqi Wang, Columbia University
  • Kexin Pei, Columbia University
  • Tianhao Wang, Carnegie Mellon University
  • Matthew Jagielski, Northeastern University
  • Yacin Nadji, Corelight Inc
  • Sanghyun Hong, University of Maryland
  • Florian Tramèr, Stanford
  • Maura Pintor, University of Cagliari
  • Luca Demetrio, University of Cagliari
  • Kathrin Grosse, University of Cagliari
  • Yang Zhang, CISPA Helmholtz Center for Information Security
  • Feargus Pendlebury, King’s College London and ICSI; UC Berkeley
  • Daniel Arp, Technische Universität Braunschweig
  • Francesco Croce, University of Tübingen
  • Giovanni Apruzzese, University of Liechtenstein
  • Guillermo Suarez-Tangil, IMDEA Networks Institute and King’s College London
  • Christian Wressnegger, Karlsruhe Institute of Technology
  • Benjamin M. Ampel, The University of Arizona
  • Eric Wong, MIT