Alina Oprea is a Professor at Northeastern University in the Khoury College of Computer Sciences. She joined Northeastern University in Fall 2016 after spending 9 years as a research scientist at RSA Laboratories. Her research interests in cyber security are broad, with a focus on AI security and privacy, ML-based threat detection, cloud security, and applied cryptography. She is the recipient of the Technology Review TR35 award for her research in cloud security in 2011, the Google Security and Privacy Award in 2019, the Ruth and Joel Spira Award for Excellence in Teaching in 2020, and the CMU Cylab Distinguished Alumni Award in 2024. Alina served as Program Committee co-chair of the flagship cyber security conference, the IEEE Security and Privacy Symposium in 2020 and 2021. She also served as Associate Editor of the ACM Transactions of Privacy and Security( TOPS) journal and the IEEE Security and Privacy Magazine. Her work was recognized with Best Paper Awards at NDSS 2005, AISEC in 2017, and GameSec in 2019..
In the last few years, we have seen tremendous progress on the capabilities of generative AI and large language models (LLMs). As model sizes have reached hundreds of billions of parameters, training models from scratch has become infeasible. Consequently, system developers typically leverage pre-trained LLMs, and later fine-tune them or augment them with external content to specialize them to new tasks. In this talk, we pose the question if these complex LLM deployment pipelines introduce new security and privacy risks for users. We discuss a new privacy attack on fine-tuned LLMs and a new poisoning attack for LLMs utilizing Retriever Augmented Generation (RAG). We also discuss the challenges of developing mitigations and highlight several open problems in securing AI systems.
Lea Schönherr is a tenure-track faculty at CISPA Helmholtz Center for Information Security since 2022. Her research focuses on information security, particularly adversarial machine learning, trustworthy generative AI, and ML security applications. She is especially interested in language as an interface to machine learning models, including their cognitive representations and code generation with LLMs. She has published several papers on threat detection and defense of speech recognition systems, generative models, and preventing the misuse of generative AI. She obtained her PhD from Ruhr-Universität Bochum, Germany, in 2021 and is a recipient of two fellowships from UbiCrypt (DFG Graduate School) and Casa (DFG Cluster of Excellence).
Generative AI (genAI) is becoming more integrated into our daily lives, raising questions about potential threats within genAI systems and their outputs. In this talk, we will examine the resulting challenges and security threats associated with generative AI. In the first part of the talk, we look at threat scenarios in which generative models are utilized to produce content that is impossible to distinguish from human-generated content. This fake content is often used for fraudulent and manipulative purposes. As generative models evolve, the attacks are easier to automate and require less expertise, while detecting such activities will become increasingly difficult. This talk will provide an overview of our current challenges in detecting fake media in human and machine interactions. The second part will cover exploits of LLMs to disrupt alignment or to steal sensitive information. Existing attacks show that content filters of LLMs can be easily bypassed with specific inputs and that private information can be leaked. Also, established methods in the adversarial machine learning field cannot be easily transferred to generative models. From an alternative perspective, we show that an alternative way to protect intellectual property is by obfuscating prompts. We demonstrate that with only some overhead, we can achieve similar utility while protecting confidential data. The final part of the presentation will discuss the use of generative models in security applications. This includes benchmarking and fixing vulnerable code and understanding these models' capabilities by investigating their code deobfuscation abilities.
SlidesGiovanni Cherubin is a Senior Researcher at Microsoft in Cambridge, working with the Microsoft Security Response Centre (MSRC). Before joining Microsoft, Giovanni held research positions at the Alan Turing Institute and EPFL. He obtained a PhD in Machine Learning and Cyber Security from Royal Holloway University of London. His research focuses on the privacy and security properties of machine learning models, as well as the theoretical and empirical study of their information leakage. Additionally, Giovanni works on distribution-free uncertainty estimation for machine learning, such as Conformal Prediction. He has received multiple awards for his contributions to security, privacy, and distribution-free inference.
Privacy-preserving Machine Learning (PPML) has the rare privilege among security research fields of having defences that are both practical and theoretically robust, thanks to over 20 years of progress. However, deployment of these defenses often sparks heated debates over how to tune their parameters. This is partially because these defences are typically designed to counter "any" attack, which can lead to overlooking the specific threats relevant to a particular deployment. This talk will cover the key advancements in PPML research through the principle of "first consider the threats, then pick a defence." By deliberately defining which attacks we consider to be a threat (and which ones we don't) before deploying a model, we can more effectively select concrete parameters for our defences, and better communicate the extent and limitations of the protection we've achieved.
09:00–9:15 | Opening and Welcome |
9:15–10:00 | Keynote 1 |
On the Security and Privacy Risks of Generative AI Systems
Alina Oprea , Professor @ Northeastern University |
|
10:00-10:30 | Spotlights |
Efficient Model Extraction via Boundary Sampling
Authors : Maor Biton Dor (Ben-Gurion University), Yisroel Mirsky (Ben-Gurion University) |
|
Neural Exec: Learning (and Learning from) Execution Triggers for Prompt Injection Attacks
Authors : Dario Pasquini (George Mason University), Martin Strohmeier (Cyber-Defence Campus, armasuisse Science + Technology), Carmela Troncoso (EPFL) |
|
Harmful Bias: A General Label-Leakage Attack on Federated Learning from Bias Gradients
Authors : Nadav Gat (Tel Aviv University), Mahmood Sharif (Tel Aviv University) |
|
Offensive AI: Enhancing Directory Brute-forcing Attack with the Use of Language Models
Authors : Alberto Castagnaro (Delft University of Technology, The Netherlands), Mauro Conti (University of Padua, Italy), Luca Pajola (University of Padua, Italy) |
|
10:30–11:00 | Coffee break |
11:00–12:00 | Poster session 1 - Skylight room (2nd floor) |
12:00–13:30 | Lunch |
13:30–14:15 |
Keynote 2
|
Challenges and Threats in Generative AI: Misuse and Exploits
Lea Schönherr , Tenure-track Faculty @ CISPA Helmholtz Center for Information Security |
|
14:15–15:00 |
Keynote 3
|
A threat-centric look at Privacy-Preserving Machine Learning
Giovanni Cherubin , Senior Researcher @ Microsoft |
|
15:00–15:30 | Coffee break |
15:30–16:30 | Poster session 2 - Grand Ballroom E |
16:30–16:45 | Closing remarks |
As in the previous editions of this workshop, we would honor outstanding contributions. To this end, we will award the best paper, selected by the reviewers among all the submitted papers.
The 2024 AISec Best Paper Award was given to:
Maor Biton Dor
(Ben-Gurion University),
Yisroel Mirsky
(Ben-Gurion University),
for the paper
Efficient Model Extraction via Boundary Sampling
.
Semantic Stealth: Crafting Covert Adversarial Patches for Sentiment Classifiers Using Large Language Models
Authors : Camila Roa (Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN), Maria Mahbub (Center for Artificial Intelligence Security Research, Oak Ridge National Laboratory, Oak Ridge, TN), Sudarshan Srinivasan (Center for Artificial Intelligence Security Research, Oak Ridge National Laboratory, Oak Ridge, TN), Edmon Begoli (Center for Artificial Intelligence Security Research, Oak Ridge National Laboratory, Oak Ridge, TN), Amir Sadovnik (Center for Artificial Intelligence Security Research, Oak Ridge National Laboratory, Oak Ridge, TN) |
|
Getting a-Round Guarantees: Floating-Point Attacks on Certified Robustness
Authors : Jiankai Jin (The University of Melbourne), Olga Ohrimenko (The University of Melbourne), Benjamin I. P. Rubinstein (The University of Melbourne) |
|
On the Robustness of Graph Reduction Against GNN Backdoor
Authors : Yuxuan Zhu (Rensselaer Polytechnic Institute), Michael Mandulak (Rensselaer Polytechnic Institute), Kerui Wu (Rensselaer Polytechnic Institute), George Slota (Rensselaer Polytechnic Institute), Yuseok Jeon (Ulsan National Institute of Science and Technology), Ka-Ho Chow (The University of Hong Kong), Lei Yu (Rensselaer Polytechnic Institute) |
|
Adversarially Robust Anti-Backdoor Learning
Authors : Qi Zhao (Karlsruhe Institute of Technology (KIT)), Christian Wressnegger (Karlsruhe Institute of Technology (KIT)) |
|
Neural Exec: Learning (and Learning from) Execution Triggers for Prompt Injection Attacks
Authors : Dario Pasquini (George Mason University), Martin Strohmeier (Cyber-Defence Campus, armasuisse Science + Technology), Carmela Troncoso (EPFL) Video |
|
Adversarial Feature Alignment: Balancing Robustness and Accuracy in Deep Learning via Adversarial Training
Authors : Leo Hyun Park (Yonsei University), Jaeuk Kim (Yonsei University), Myung Gyo Oh (Yonsei University), Jaewoo Park (Yonsei University), Taekyoung Kwon (Yonsei University) |
|
The Ultimate Combo: Boosting Adversarial Example Transferability by Composing Data Augmentations
Authors : Zebin Yun (Tel Aviv University), Achi-Or Weingarten (Weizmann Institute of Science), Eyal Ronen (Tel Aviv University), Mahmood Sharif (Tel Aviv University) Poster |
|
ELMs Under Siege: A Study on Backdoor Attacks on Extreme Learning Machines
Authors : Behrad Tajalli (Radboud University), Stefanos Koffas (TU Delft), Gorka Abad (Radboud University & Ikerlan Technology Research Centre, Basque Research and Technology Alliance (BRTA)), Stjepan Picek (Radboud University) |
|
EmoBack: Backdoor Attacks Against Speaker Identification Using Emotional Prosody
Authors : Coen Schoof (Radboud University), Stefanos Koffas (Delft University of Technology), Mauro Conti (University of Padua), Stjepan Picek (Radboud University) |
Efficient Model Extraction via Boundary Sampling
Authors : Maor Biton Dor (Ben-Gurion University), Yisroel Mirsky (Ben-Gurion University) |
|
Feature Selection from Differentially Private Correlations
Authors : Ryan Swope (Booz Allen Hamilton), Amol Khanna (Booz Allen Hamilton), Philip Doldo (Booz Allen Hamilton), Saptarshi Roy (University of Michigan, Ann Arbor), Edward Raff (Booz Allen Hamiltion) Poster |
|
It's Our Loss: No Privacy Amplification for Hidden State DP-SGD With Non-Convex Loss
Authors : Meenatchi Sundaram Muthu Selva Annamalai ([email protected]) |
|
Harmful Bias: A General Label-Leakage Attack on Federated Learning from Bias Gradients
Authors : Nadav Gat (Tel Aviv University), Mahmood Sharif (Tel Aviv University) Poster |
When Adversarial Perturbations meet Concept Drift: an Exploratory Analysis on ML-NIDS
Authors : Giovanni Apruzzese (University of Liechtenstein), Aurore Fass (CISPA Helmholtz Center for Information Security), Fabio Pierazzi (King's College London) |
|
Towards Robust, Explainable, and Privacy-Friendly Sybil Detection
Authors : Christian Bungartz (University of Bonn), Dr. Felix Boes (University of Bonn), Prof. Dr. Michael Meier (University of Bonn, Fraunhofer FKIE), Dr. Marc Ohm (University of Bonn, Fraunhofer FKIE) |
|
Using LLM Embeddings with Similarity Search for Botnet TLS Certificate Detection
Authors : Kumar Shashwat (University of South Florida), Francis Hahn (University of South Florida), Stuart Millar (Rapid7 LLC), Xinming Ou (University of South Florida) Poster |
|
Offensive AI: Enhancing Directory Brute-forcing Attack with the Use of Language Models
Authors : Alberto Castagnaro (Delft University of Technology, The Netherlands), Mauro Conti (University of Padua, Italy), Luca Pajola (University of Padua, Italy) |
|
Music to My Ears: Turning GPU Sounds into Intellectual Property Gold
Authors : Sayed Erfan Arefin (Texas Tech University), Abdul Serwadda (Texas Tech University) Poster Video |
Recent years have seen a dramatic increase in applications of Artificial Intelligence (AI), Machine Learning (ML), and data mining to security and privacy problems. The analytic tools and intelligent behavior provided by these techniques make AI and ML increasingly important for autonomous real-time analysis and decision making in domains with a wealth of data or that require quick reactions to constantly changing situations. The use of learning methods in security-sensitive domains, in which adversaries may attempt to mislead or evade intelligent machines, creates new frontiers for security research. The recent widespread adoption of “deep learning” techniques, whose security properties are difficult to reason about directly, has only added to the importance of this research. In addition, data mining and machine learning techniques create a wealth of privacy issues, due to the abundance and accessibility of data. The AISec workshop provides a venue for presenting and discussing new developments in the intersection of security and privacy with AI and ML.
Topics of interest include (but are not limited to):
Theoretical topics related to security
Security applications
Security-related AI problems
We invite the following types of papers:
The authors can specify the paper type in the submission form. Paper submissions must be at most 10 pages in double-column ACM format, excluding the bibliography and well-marked appendices, and at most 12 pages overall. Papers should be in LaTeX and we recommend using the ACM format. This format is required for the camera-ready version. Please follow the main CCS formatting instructions (except with page limits as described above). In particular, we recommend using the sigconf template, which can be downloaded from https://www.acm.org/publications/proceedings-template . Accepted papers will be published by the ACM Digital Library and/or ACM Press. Committee members are not required to read the appendices, so the paper should be intelligible without them. Submissions must be in English and properly anonymized.
Submission link: https://aisec2024.hotcrp.com .
All accepted submissions will be presented at the workshop as posters. Accepted papers will be selected for presentation as spotlights based on their review score and novelty. Nonetheless, all accepted papers should be considered as having equal importance and will be included in the ACM workshop proceedings.
One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings.
For any questions, please contact one the workshop organizers at [email protected]
Thanks for those who contacted us to help with the reviews!