While machine learning-based systems are transforming society, they face serious security and privacy risks in adversarial environments. Despite significant efforts by the security and privacy research community, the current generation of defenses is often defeated by the next generation of adaptive attacks that are aware of the defense mechanisms.
In this talk, I will discuss how a synergistic integration of logical and statistical reasoning can lead to the design of robust defense mechanisms. As a concrete example of this approach, consider the threat of adversarial patches in image classification, where an adversary introduces adversarial perturbations in a localized area of the image at inference time to induce misclassification. Such attacks can be realized in the physical world by attaching the adversarial patch to the object to be misclassified, and defending against them has been a challenging problem. I will introduce the PatchGuard and PatchCleanser frameworks, which combine logical and statistical reasoning to mitigate the threat of adversarial patches. Notably, both frameworks neutralize adversarial patches in a certifiably robust manner, ending the arms race with an adaptive attacker. I will conclude with a broader discussion of adversarial examples, as well as the implications of the proposed approach for privacy-preserving learning.
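To make the localized threat model concrete, the following toy sketch (the function name, shapes, and values are illustrative assumptions, not part of the PatchGuard or PatchCleanser frameworks) shows what it means for a perturbation to be confined to a small contiguous region of the input:

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Overwrite a localized region of `image` with `patch`.

    Models the adversarial-patch threat: the attacker controls only
    a small contiguous area of the input, not the whole image.
    """
    h, w = patch.shape[:2]
    patched = image.copy()
    patched[top:top + h, left:left + w] = patch
    return patched

# Toy example: a 32x32 grayscale image with a 5x5 patch region.
image = np.zeros((32, 32))
patch = np.ones((5, 5))
adv = apply_patch(image, patch, top=10, left=10)
```

In a real attack, the patch contents would be optimized to induce misclassification; certified defenses aim to guarantee correct predictions for any patch contents within such a bounded region.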
Prateek Mittal is an Associate Professor of Electrical and Computer Engineering at Princeton University, where he is also affiliated with Computer Science and the Center for Information Technology Policy. He is interested in the design and development of privacy-preserving and secure systems. A unifying theme in Prateek’s work is to manipulate and exploit structural properties of data and networked systems to solve privacy and security challenges facing our society. His research has applied this distinct approach to widely-used operational systems, and has used the resulting insights to influence system design and operation, including that of the Tor network and the Let’s Encrypt certificate authority, directly impacting hundreds of millions of users.
He is the recipient of faculty research awards from Intel, IBM, Google, Cisco, Facebook, Siemens, and Qualcomm; the NSF CAREER award (2016); the ONR YIP award (2018); the ARO YIP award (2018); Princeton University’s E. Lawrence Keyes, Jr. award for outstanding research and teaching (2017); and Princeton innovation awards (2015, 2017, 2018). He has received several outstanding paper awards, including at ACM CCS, and has been named to the Princeton Engineering Commendation List for Outstanding Teaching six times. His work on securing Internet domain validation and on understanding the privacy of smart TV devices was awarded runner-up for the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies (2020, 2021). He has served on the editorial board of the Privacy Enhancing Technologies Symposium (PETS) and the steering committee of the USENIX Security and AI Networking Conference (ScAINet), and has co-chaired the workshops on Hot Topics in Privacy Enhancing Technologies (HotPETS) and Free and Open Communications on the Internet (FOCI).
|07:00–07:10||Opening and Welcome|
|07:10–08:00||Keynote: Adversarial Machine Learning: Robust Defenses via Synergistic Integration of Logical and Statistical Reasoning, Dr. Prateek Mittal|
|08:15–09:55||Session: Adversarial Machine Learning|
|Unicode Evil: Evading NLP Systems Using Visual Similarities of Text Characters
Authors: Antreas Dionysiou, Elias Athanasopoulos (University of Cyprus)
|Adversarial Transfer Attacks With Unknown Data and Class Overlap
Authors: Luke Richards, Andre Nguyen (University of Maryland, Baltimore County, Booz Allen Hamilton), Ryan Capps (Booz Allen Hamilton), Steven Forsyth (NVIDIA), Cynthia Matuszek (University of Maryland, Baltimore County), Edward Raff (University of Maryland, Baltimore County, Booz Allen Hamilton)
|SAT: Improving Adversarial Training via Curriculum-Based Loss Smoothing
Authors: Chawin Sitawarin (University of California, Berkeley), Supriyo Chakraborty (IBM T. J. Watson Research Center), David Wagner (UC Berkeley)
|SEAT: Similarity Encoder by Adversarial Training for Detecting Model Extraction Attack Queries
Authors: Zhanyuan Zhang (UC Berkeley), Yizheng Chen (Columbia University), David Wagner (UC Berkeley)
|NNoculation: Catching BadNets in the Wild
Authors: Akshaj Kumar Veldanda, Kang Liu, Benjamin Tan (New York University), Prashanth Krishnamurthy, Farshad Khorrami (NYU Tandon School of Engineering), Ramesh Karri (New York University), Brendan Dolan-Gavitt (NYU), Siddharth Garg (New York University)
|10:10–11:30||Session: Machine Learning for Cybersecurity|
|Network Anomaly Detection Using Transfer Learning Based on Auto-Encoders Loss Normalization
Authors: Aviv Yehezkel, Eyal Elyashiv, Or Soffer (Cynamics)
|A Framework for Cluster and Classifier Evaluation in the Absence of Reference Labels
Authors: Robert J. Joyce (University of Maryland, Baltimore County), Edward Raff (Booz Allen Hamilton), Charles Nicholas (University of Maryland Baltimore County)
|StackBERT: Machine Learning Assisted Static Stack Frame Size Recovery On Stripped and Optimized Binaries
Authors: Chinmay Deshpande, David Gens, Michael Franz (University of California, Irvine)
|Patch-based Defenses against Web Fingerprinting Attacks
Authors: Shawn Shan, Arjun Nitin Bhagoji, Haitao Zheng, Ben Y. Zhao (University of Chicago)
|17:45–19:45||Session: Machine Learning for Cybersecurity|
|INSOMNIA: Towards Concept-Drift Robustness in Network Intrusion Detection
Authors: Giuseppina Andresini (University of Bari Aldo Moro), Feargus Pendlebury (King's College London and Royal Holloway, University of London and International Computer Science Institute), Fabio Pierazzi (King’s College London), Corrado Loglisci, Annalisa Appice (University of Bari Aldo Moro), Lorenzo Cavallaro (University College London)
|Investigating Labelless Drift Adaptation for Malware Detection
Authors: Zeliang Kan (King's College London), Feargus Pendlebury (King's College London and Royal Holloway, University of London and International Computer Science Institute), Fabio Pierazzi (King's College London), Lorenzo Cavallaro (University College London)
|Spying through Virtual Backgrounds of Video Calls
Authors: Jan Malte Hilgefort (TU Braunschweig), Daniel Arp (TU Berlin), Konrad Rieck (TU Braunschweig)
|Explaining Graph Neural Networks for Vulnerability Discovery
Authors: Tom Ganz, Martin Härterich (SAP Security Research), Alexander Warnecke, Konrad Rieck (TU Braunschweig)
|Automating Privilege Escalation with Deep Reinforcement Learning
Authors: Kalle Kujanpää (Aalto University), Willie Victor (F-Secure), Alexander Ilin (Aalto University)
|Automated Detection of Side Channels in Cryptographic Protocols: DROWN the ROBOTs!
Authors: Jan Peter Drees (University of Wuppertal), Pritha Gupta (Paderborn University), Eyke Hüllermeier (LMU Munich), Tibor Jager (University of Wuppertal), Alexander Konze, Claudia Priesterjahn (achelos GmbH), Arunselvan Ramaswamy, Juraj Somorovsky (Paderborn University)
|20:00–20:40||Session: Privacy-Preserving Machine Learning|
|FedV: Privacy-Preserving Federated Learning over Vertically Partitioned Data
Authors: Runhua Xu, Nathalie Baracaldo, Yi Zhou, Ali Anwar (IBM Research - Almaden), James Joshi (University of Pittsburgh), Heiko Ludwig (IBM Research - Almaden)
|Differential Privacy Defenses and Sampling Attacks for Membership Inference
Authors: Shadi Rahimian (CISPA Helmholtz Center for Information Security), Tribhuvanesh Orekondy (Max Planck Institute for Informatics), Mario Fritz (CISPA Helmholtz Center for Information Security)
Recent years have seen a dramatic increase in applications of artificial intelligence, machine learning, and data mining to security and privacy problems. The use of AI and ML in security-sensitive domains, in which adversaries may attempt to mislead or evade intelligent machines, creates new frontiers for security research. The recent widespread adoption of deep learning techniques, whose security properties are difficult to reason about directly, has only added to the importance of this research. The AISec workshop, now in its 14th year, is the leading venue for presenting and discussing new developments in the intersection of security and privacy with AI and machine learning.
Topics of interest include (but are not limited to):
Theoretical topics related to security
Security-related AI problems
We invite the following types of papers:
The authors can specify the paper type in the submission form. Paper submissions must be at most 10 pages in double-column ACM format, excluding the bibliography and well-marked appendices, and at most 12 pages overall. Committee members are not required to read the appendices, so the paper should be intelligible without them. Papers must be typeset in LaTeX following the main CCS formatting instructions (except with the page limits described above); this format is required for the camera-ready version. In particular, we recommend the sigconf template, which can be downloaded from https://www.acm.org/publications/proceedings-template. Accepted papers will be published by the ACM Digital Library and/or ACM Press. Submissions must be in English and properly anonymized.
Submission link: https://aisec2021.hotcrp.com.
All accepted submissions will be presented at the workshop and included in the ACM workshop proceedings.
One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings.
As in previous editions of this workshop, we will honor outstanding contributions with a Best Paper Award, selected by the reviewers from among all submitted papers.
In the previous edition, Edward Raff, Richard Zak, Gary Lopez Munoz (Booz Allen Hamilton); William Fleming (U.S. Navy); Hyrum Anderson (Microsoft); Bobby Filar (Elastic); Charles Nicholas (University of Maryland Baltimore County); James Holt (Laboratory for Physical Sciences) were awarded the 2020 AISec Best Paper Award for their work on “Automatic Yara Rule Generation Using Biclustering”.