Abstract
In this talk, I'll discuss ways that adversarial methods can be used to attack dataset privacy. Unlike conventional "adversarial attacks" that seek to flip the label that comes out of a neural network, I'll present a slate of adversarial methods that can be used to extract training data from models or force federated learning systems to leak the personal training data of their clients via publicly visible gradient updates. I'll also discuss how adversarial methods can be used to disincentivize the scraping of personal data for unauthorized use by creating unlearnable datasets that cannot be used for model training even if they are maliciously obtained.
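To give a concrete flavor of the gradient-leakage attacks mentioned in the abstract, the sketch below (in PyTorch) inverts a single shared gradient update: an attacker who observes a client's gradients optimizes dummy data until its gradients match the observed ones. The toy model, data dimensions, and optimizer settings are illustrative assumptions, not the specific methods presented in the talk.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy client model and one private training example (stand-ins for a real FL client).
model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))
x_private = torch.randn(1, 32)
y_private = torch.tensor([1])

# The gradient update the client would share with the server.
loss = nn.functional.cross_entropy(model(x_private), y_private)
true_grads = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]

# The attacker sees only the model and true_grads, and optimizes dummy data
# (and a soft dummy label) so that its gradients match the observed update.
x_dummy = torch.randn(1, 32, requires_grad=True)
y_dummy = torch.randn(1, 2, requires_grad=True)
opt = torch.optim.Adam([x_dummy, y_dummy], lr=0.05)

for step in range(500):
    opt.zero_grad()
    pred = model(x_dummy)
    # Soft-label cross entropy so the label can be recovered along with the data.
    dummy_loss = torch.sum(-torch.softmax(y_dummy, dim=-1) * torch.log_softmax(pred, dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    # Gradient-matching objective: squared distance between dummy and observed gradients.
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    opt.step()

print("reconstruction error:", torch.norm(x_dummy.detach() - x_private).item())

In a real federated setting the same idea extends to deeper networks and batched updates, which is why perturbing or aggregating gradients before sharing is a common countermeasure.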
Biography
Tom Goldstein is the Perotto Associate Professor of Computer Science at the University of Maryland. His research lies at the intersection of machine learning and optimization, and targets applications in computer vision and signal processing. Before joining the faculty at Maryland, Tom completed his PhD in Mathematics at UCLA, and was a research scientist at Rice University and Stanford University. Professor Goldstein has been the recipient of several awards, including SIAM’s DiPrima Prize, a DARPA Young Faculty Award, a JP Morgan Faculty Award, and a Sloan Fellowship.
09:00–09:10 | Opening and Welcome |
09:10–10:00 | Keynote: Protecting (and breaking) dataset privacy in computer vision and federated learning, Prof. Tom Goldstein |
10:00–10:20 | Coffee break |
10:20–12:00 | Session: Privacy-Preserving Machine Learning |
Label-Only Membership Inference Attack against Node-Level Graph Neural Networks
Authors: Mauro Conti (University of Padua & Delft University of Technology), Jiaxin Li (University of Padua), Stjepan Picek (Radboud University & Delft University of Technology), Jing Xu (Delft University of Technology) |
|
Repeated Knowledge Distillation with Confidence Masking to Mitigate Membership Inference Attacks
Authors: Federico Mazzone, Leander van den Heuvel, Maximilian Huber, Cristian Verdecchia, Maarten Everts, Florian Hahn (University of Twente, The Netherlands), Andreas Peter (University of Oldenburg, Germany) |
|
Forgeability and Membership Inference Attacks
Authors: Zhifeng Kong, Amrita Roy Chowdhury, Kamalika Chaudhuri (UCSD) |
|
PROV-FL: Privacy-preserving Round Optimal Verifiable Federated Learning
Authors: Vishnu Asutosh Dasu (The Pennsylvania State University), Sumanta Sarkar (University of Warwick), Kalikinkar Mandal (University of New Brunswick) |
|
Inferring Class-Label Distribution in Federated Learning
Authors: Raksha Ramakrishna, György Dán (KTH Royal Institute of Technology) |
|
Introduction to the Second Round of a Machine Learning Model Attribution Challenge
Authors: Deepesh Chaudhari (Plaintext Group), Hyrum S. Anderson (Robust Intelligence), Keith Manville (MITRE), Lily Wong (MITRE), Yonadav Shavit (Plaintext Group) |
|
12:15–14:00 | Lunch |
14:00–14:40 | Session 2A: Adversarial Machine Learning |
Video is All You Need: Attacking PPG-based Biometric Authentication
Authors: Lin Li (Swinburne University of Technology), Chao Chen (RMIT University), Lei Pan (Deakin University, Australia), Jun Zhang, Yang Xiang (Swinburne University of Technology) |
|
Magnitude Adversarial Spectrum Search-based Black-box Attack against Image Classification
Authors: Kim A. B. Midtlid, Johannes Åsheim, Jingyue Li (Norwegian University of Science and Technology) |
|
14:40–15:00 | Break |
15:00–16:00 | Session 2B: Adversarial Machine Learning |
Assessing the Impact of Transformations on Physical Adversarial Attacks
Authors: Paul-Andrei Sava, Jan-Philipp Schulze, Philip Sperl, Konstantin Böttinger (Fraunhofer AISEC) |
|
Just Rotate it: Deploying Backdoor Attacks via Rotation Transformation
Authors: Tong Wu, Tianhao Wang, Vikash Sehwag, Saeed Mahloujifar, Prateek Mittal (Princeton University) |
|
Proactive Detection of Query-based Adversarial Scenarios in NLP Systems
Authors: Mohammad Maghsoudimehrabani, Amin Azmoodeh, Ali Dehghantanha, Behrouz Zolfaghari (University of Guelph), Gautam Srivastava (Brandon University) |
|
16:00–17:40 | Session: Machine Learning for Cybersecurity |
Context-based Clustering to Mitigate Phishing Attacks
Authors: Tarini Saka, Kami Vaniea, Nadin Kokciyan (University of Edinburgh) |
|
Quo Vadis: Hybrid Machine Learning Meta-Model based on Contextual and Behavioral Malware Representations
Authors: Dmitrijs Trizna (Microsoft) |
|
Optimising Vulnerability Triage in DAST with Deep Learning
Authors: Stuart Millar (Rapid7 LLC), Denis Podgurskii (OWASP), Dan Kuykendall (Rapid7 LLC), Jesus Martinez del Rincon, Paul Miller (Centre for Secure Information Technologies, Queen's University Belfast) |
|
Bridging Automated to Autonomous Cyber Defense: Foundational Analysis of Tabular Q-Learning
Authors: Andy Applebaum, Camron Dennler, Patrick Dwyer, Marina Moskowitz, Harold Nguyen, Nicole Nichols, Nicole Park, Paul Rachwalski, Frank Rau, Adrian Webster, Melody Wolk (Apple) |
|
17:40–18:00 | Closing remarks |
Recent years have seen a dramatic increase in applications of artificial intelligence, machine learning, and data mining to security and privacy problems. The use of AI and ML in security-sensitive domains, in which adversaries may attempt to mislead or evade intelligent machines, creates new frontiers for security research. The recent widespread adoption of deep learning techniques, whose security properties are difficult to reason about directly, has only added to the importance of this research. The AISec workshop, now in its 15th year, is the leading venue for presenting and discussing new developments in the intersection of security and privacy with AI and machine learning.
Topics of interest include (but are not limited to):
Theoretical topics related to security
Security applications
Security-related AI problems
We invite the following types of papers:
The authors can specify the paper type in the submission form. Paper submissions must be at most 10 pages in double-column ACM format, excluding the bibliography and well-marked appendices, and at most 12 pages overall. Papers should be written in LaTeX, and we recommend the ACM format, which is required for the camera-ready version. Please follow the main CCS formatting instructions (except with the page limits described above); in particular, we recommend the sigconf template, which can be downloaded from https://www.acm.org/publications/proceedings-template. Accepted papers will be published by the ACM Digital Library and/or ACM Press. Committee members are not required to read the appendices, so the paper should be intelligible without them. Submissions must be in English and properly anonymized.
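As a point of reference, a minimal submission skeleton using the acmart class from the template linked above might look like the sketch below; the class options and placeholder names are illustrative assumptions, and the anonymous option is what suppresses author information for review.

% Minimal illustrative skeleton; see the ACM template page linked above and the
% CCS formatting instructions for authoritative settings.
\documentclass[sigconf,anonymous,review]{acmart}

\begin{document}

\title{Your AISec 2022 Submission Title}
\author{Anonymous Author(s)}
\affiliation{\institution{Anonymous Institution}\country{Country}}

\begin{abstract}
Abstract text goes here.
\end{abstract}

\maketitle

\section{Introduction}
Body text: at most 10 pages excluding the bibliography and well-marked
appendices, and at most 12 pages overall.

\bibliographystyle{ACM-Reference-Format}
% \bibliography{yourrefs}  % add your .bib file here

\end{document}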
Submission link: https://aisec2022.hotcrp.com.
All accepted submissions will be presented at the workshop and included in the ACM workshop proceedings.
One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings. NB: attendance does not have to be in person; the paper may be presented remotely.
For any questions, please contact one of the workshop organizers at [email protected]
As in previous editions of this workshop, we will honor an outstanding contribution with a Best Paper Award, selected by the reviewers from among all submitted papers.
In the previous edition, the 2021 AISec Best Paper Award went to Tom Ganz and Martin Härterich (SAP Security Research) and Alexander Warnecke and Konrad Rieck (TU Braunschweig) for their work “Explaining Graph Neural Networks for Vulnerability Discovery”.