15th ACM Workshop on
Artificial Intelligence and Security
November 11, 2022 — Hybrid Event (Los Angeles, U.S.A. + online)
co-located with the 29th ACM Conference on Computer and Communications Security

Keynote

Title: Protecting (and breaking) dataset privacy in computer vision and federated learning
Tom Goldstein, University of Maryland, USA

Abstract

In this talk, I'll discuss ways that adversarial methods can be used to attack dataset privacy. Unlike conventional "adversarial attacks" that seek to flip the label that comes out of a neural network, I'll present a slate of adversarial methods that can be used to extract training data from models or force federated learning systems to leak the personal training data of their clients via publicly visible gradient updates. I'll also discuss how adversarial methods can be used to disincentivize the scraping of personal data for unauthorized use by creating unlearnable datasets that cannot be used for model training even if they are maliciously obtained.
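As a toy illustration of the gradient-leakage phenomenon mentioned above (a minimal sketch assuming PyTorch; the example and names are ours, not taken from the talk): for a fully connected layer with bias, the gradient of a single training example reveals the input exactly, because each weight-gradient row is the input scaled by the corresponding bias gradient.

    # Minimal sketch (assumes PyTorch). For y = Wx + b and a scalar loss L,
    # dL/dW[i] = (dL/dy)[i] * x and dL/db[i] = (dL/dy)[i], so
    # x = dL/dW[i] / dL/db[i] for any i with a nonzero bias gradient.
    import torch

    torch.manual_seed(0)
    x = torch.randn(8)                 # a client's private input
    layer = torch.nn.Linear(8, 3)
    layer(x).sum().backward()          # any scalar loss works here

    # Reconstruct x from the "publicly visible" gradient update alone:
    i = layer.bias.grad.abs().argmax() # pick a row with nonzero bias grad
    x_rec = layer.weight.grad[i] / layer.bias.grad[i]
    print(torch.allclose(x, x_rec, atol=1e-5))  # True

Real federated-learning systems average gradients over batches and clients, which is why practical attacks of this kind require the optimization-based reconstruction methods the talk surveys.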

Biography

Tom Goldstein is the Perotto Associate Professor of Computer Science at the University of Maryland. His research lies at the intersection of machine learning and optimization, and targets applications in computer vision and signal processing. Before joining the faculty at Maryland, Tom completed his PhD in Mathematics at UCLA and was a research scientist at Rice University and Stanford University. Professor Goldstein has been the recipient of several awards, including SIAM’s DiPrima Prize, a DARPA Young Faculty Award, a JP Morgan Faculty Award, and a Sloan Fellowship.

Programme

The following times are in PST (UTC-8), the local time in Los Angeles.

09:00–09:10 Opening and Welcome
09:10–10:00 Keynote: Protecting (and breaking) dataset privacy in computer vision and federated learning, Prof. Tom Goldstein
10:00–10:20 Coffee break
10:20–12:00 Session 1: Privacy-Preserving Machine Learning
Label-Only Membership Inference Attack against Node-Level Graph Neural Networks
Authors: Mauro Conti (University of Padua & Delft University of Technology), Jiaxin Li (University of Padua), Stjepan Picek (Radboud University & Delft University of Technology), Jing Xu (Delft University of Technology)
Repeated Knowledge Distillation with Confidence Masking to Mitigate Membership Inference Attacks
Authors: Federico Mazzone, Leander van den Heuvel, Maximilian Huber, Cristian Verdecchia, Maarten Everts, Florian Hahn (University of Twente, The Netherlands), Andreas Peter (University of Oldenburg, Germany)
Forgeability and Membership Inference Attacks
Authors: Zhifeng Kong, Amrita Roy Chowdhury, Kamalika Chaudhuri (UCSD)
PROV-FL: Privacy-preserving Round Optimal Verifiable Federated Learning
Authors: Vishnu Asutosh Dasu (The Pennsylvania State University), Sumanta Sarkar (University of Warwick), Kalikinkar Mandal (University of New Brunswick)
Inferring Class-Label Distribution in Federated Learning
Authors: Raksha Ramakrishna, György Dán (KTH Royal Institute of Technology)
Introduction to the Second Round of a Machine Learning Model Attribution Challenge
Authors: Deepesh Chaudhari (Plaintext Group), Hyrum S. Anderson (Robust Intelligence), Keith Manville (MITRE), Lily Wong (MITRE), Yonadav Shavit (Plaintext Group)
12:15–14:00 Lunch
14:00–14:40 Session 2A: Adversarial Machine Learning
Video is All You Need: Attacking PPG-based Biometric Authentication
Authors: Lin Li (Swinburne University of Technology), Chao Chen (RMIT University), Lei Pan (Deakin University, Australia), Jun Zhang, Yang Xiang (Swinburne University of Technology)
Magnitude Adversarial Spectrum Search-based Black-box Attack against Image Classification
Authors: Kim A. B. Midtlid, Johannes Åsheim, Jingyue Li (Norwegian University of Science and Technology)
14:40–15:00 Break
15:00–16:00 Session 2B: Adversarial Machine Learning
Assessing the Impact of Transformations on Physical Adversarial Attacks
Authors: Paul-Andrei Sava, Jan-Philipp Schulze, Philip Sperl, Konstantin Böttinger (Fraunhofer AISEC)
Just Rotate it: Deploying Backdoor Attacks via Rotation Transformation
Authors: Tong Wu, Tianhao Wang, Vikash Sehwag, Saeed Mahloujifar, Prateek Mittal (Princeton University)
Proactive Detection of Query-based Adversarial Scenarios in NLP Systems
Authors: Mohammad Maghsoudimehrabani, Amin Azmoodeh, Ali Dehghantanha, Behrouz Zolfaghari (University of Guelph), Gautam Srivastava (Brandon University)
16:00–17:40 Session 3: Machine Learning for Cybersecurity
Context-based Clustering to Mitigate Phishing Attacks
Authors: Tarini Saka, Kami Vaniea, Nadin Kökciyan (University of Edinburgh)
Quo Vadis: Hybrid Machine Learning Meta-Model based on Contextual and Behavioral Malware Representations
Authors: Dmitrijs Trizna (Microsoft)
Optimising Vulnerability Triage in DAST with Deep Learning
Authors: Stuart Millar (Rapid7 LLC), Denis Podgurskii (OWASP), Dan Kuykendall (Rapid7 LLC), Jesus Martinez del Rincon, Paul Miller (Centre for Secure Information Technologies, Queen's University Belfast)
Bridging Automated to Autonomous Cyber Defense: Foundational Analysis of Tabular Q-Learning
Authors: Andy Applebaum, Camron Dennler, Patrick Dwyer, Marina Moskowitz, Harold Nguyen, Nicole Nichols, Nicole Park, Paul Rachwalski, Frank Rau, Adrian Webster, Melody Wolk (Apple)
17:40–18:00 Closing remarks

Call for Papers

Important Dates

  • Paper submission deadline: July 20, 2022, 11:59 PM (AoE, UTC-12) (extended from June 24)
  • Reviews released: August 29, 2022 (originally July 25)
  • Author response due: September 1, 2022 (originally August 1)
  • Acceptance notification: September 5, 2022 (originally August 5)
  • Camera ready due: September 16, 2022 (originally September 5)
  • Workshop: November 11, 2022

Overview

Recent years have seen a dramatic increase in applications of artificial intelligence, machine learning, and data mining to security and privacy problems. The use of AI and ML in security-sensitive domains, in which adversaries may attempt to mislead or evade intelligent machines, creates new frontiers for security research. The recent widespread adoption of deep learning techniques, whose security properties are difficult to reason about directly, has only added to the importance of this research. The AISec workshop, now in its 15th year, is the leading venue for presenting and discussing new developments at the intersection of security and privacy with AI and machine learning.

Topics of Interest

Topics of interest include (but are not limited to):

Theoretical topics related to security

  • Adversarial learning
  • Security of deep learning systems
  • Robust statistics
  • Learning in games
  • Economics of security
  • Differential privacy

Security applications

  • Computer forensics
  • Spam detection
  • Phishing detection and prevention
  • Botnet detection
  • Intrusion detection and response
  • Malware identification and analysis
  • Data anonymization/de-anonymization
  • Security in social networks
  • Big data analytics for security
  • User authentication

Security-related AI problems

  • Distributed inference and decision making for security
  • Secure multiparty computation and cryptographic approaches
  • Privacy-preserving data mining
  • Adaptive side-channel attacks
  • Design and analysis of CAPTCHAs
  • AI approaches to trust and reputation
  • Vulnerability testing through intelligent probing (e.g. fuzzing)
  • Content-driven security policy management & access control
  • Techniques and methods for generating training and test sets
  • Anomalous behavior detection (e.g. for the purpose of fraud detection)
  • Model confidentiality

Submission Guidelines

We invite the following types of papers:

  • Original research papers on any topic at the intersection of AI or machine learning with security, privacy, or related areas.
  • Position and open-problem papers discussing the relationship of AI or machine learning to security or privacy. Submitted papers of this type may not substantially overlap with papers that have been published previously or that are simultaneously submitted to a journal or conference/workshop proceedings.
  • Systematization-of-knowledge papers, which should distill the AI or machine learning contributions of a previously published series of security papers.

Authors can specify the paper type in the submission form. Submissions must be at most 10 pages in double-column ACM format, excluding the bibliography and well-marked appendices, and at most 12 pages overall. Papers should be prepared in LaTeX using the ACM format, which is required for the camera-ready version. Please follow the main CCS formatting instructions (except for the page limits described above); in particular, we recommend the sigconf template, which can be downloaded from https://www.acm.org/publications/proceedings-template. Accepted papers will be published by the ACM Digital Library and/or ACM Press. Committee members are not required to read the appendices, so papers should be intelligible without them. Submissions must be in English and properly anonymized.
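For orientation, a hypothetical minimal skeleton along these lines (using the standard acmart class with the sigconf, review, and anonymous options, the last of which suppresses author names to satisfy the anonymization requirement) might look as follows; consult the official ACM template for authoritative details:

    \documentclass[sigconf,review,anonymous]{acmart}
    % sigconf: double-column ACM format; anonymous: hide authors for review.
    \begin{document}
    \title{Submission Title}
    \author{Anonymous Author(s)}
    \affiliation{\institution{Anonymous Institution}\country{}}
    \begin{abstract}
    Abstract text.
    \end{abstract}
    \maketitle
    % Paper body (at most 10 pages), then bibliography and appendices.
    \bibliographystyle{ACM-Reference-Format}
    \bibliography{references}
    \end{document}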

Submission Site

Submission link: https://aisec2022.hotcrp.com.

All accepted submissions will be presented at the workshop and included in the ACM workshop proceedings.

One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings. NB: attendance does not have to be in person; papers may be presented remotely.

For any questions, please contact one of the workshop organizers at [email protected].

Best Paper Award

As in previous editions of this workshop, we will honor outstanding contributions with a Best Paper Award. The best paper will be selected by the reviewers from among all submitted papers.

In the previous edition, Tom Ganz, Martin Härterich (SAP Security Research); Alexander Warnecke, Konrad Rieck (TU Braunschweig) were awarded the 2021 AISec Best Paper Award for their work on “Explaining Graph Neural Networks for Vulnerability Discovery”.

Committee

Workshop Chairs

Steering Committee

Program Committee

  • Arjun Nitin Bhagoji, University of Chicago
  • Armin Wasicek, Avast Inc / Technical University Vienna
  • Benjamin Ampel, The University of Arizona
  • Bobby Filar, Sublime Security
  • Brad Miller, Google
  • Chawin Sitawarin, University of California, Berkeley
  • Christian Wressnegger, Karlsruhe Institute of Technology
  • Christos Dimitrakakis, University of Oslo
  • Clarence Chio, UC Berkeley
  • Daniel Arp, Technische Universität Braunschweig
  • Daniel Lowd, University of Oregon
  • Davide Maiorca, University of Cagliari
  • Eric Wong, MIT
  • Erwin Quiring, Ruhr-Universität Bochum
  • Fabio Pierazzi, King's College London
  • Feargus Pendlebury, King’s College London and ICSI; UC Berkeley
  • Giovanni Apruzzese, University of Liechtenstein
  • Guillermo Suarez-Tangil, IMDEA Networks Institute and King’s College London
  • Hyrum Anderson, Microsoft
  • Ilia Shumailov, University of Cambridge
  • Kexin Pei, Columbia University
  • Konrad Rieck, TU Braunschweig
  • Lei Ma, University of Alberta
  • Luca Demetrio, University of Cagliari
  • Luis Muñoz-González, Imperial College London
  • Markus Duermuth, Leibniz University Hannover
  • Matthew Jagielski, Northeastern University
  • Matthew Mirman, ETH Zurich
  • Maura Pintor, University of Cagliari
  • Milenko Drinic, Microsoft
  • Pavel Laskov, University of Liechtenstein
  • Sadia Afroz, ICSI/Avast
  • Sagar Samtani, Indiana University
  • Sam Bretheim, Craigslist
  • Sanghyun Hong, Oregon State University
  • Scott Coull, FireEye
  • Shawn Shan, University of Chicago
  • Shiqi Wang, Columbia University
  • Stefano Traverso, Ermes
  • Tianhao Wang, Carnegie Mellon University
  • Weilin Xu, Intel Labs
  • Wenbo Guo, Purdue University
  • Yang Zhang, CISPA Helmholtz Center for Information Security
  • Yevgeniy Vorobeychik, Washington University in St. Louis