Submission link: TBD.
Applications of Artificial Intelligence (AI), Machine Learning (ML), and data mining to security and privacy problems are ubiquitous. The analytic tools and intelligent behavior provided by AI/ML models are now pivotal for autonomous real-time analysis and decision making in domains with a wealth of data or that require quick reactions to constantly changing situations. The use of learning-based methods in security-sensitive domains, in which adversaries may attempt to mislead or evade intelligent machines, creates new frontiers for security research. The widespread adoption of deep learning techniques, such as Large Language Models (LLMs), whose security properties are difficult to reason about directly, has only added to the importance of this research.
In addition, and unfortunately, the application of AI/ML techniques often leads to privacy-related issues, such as the possibility of extracting sensitive data from trained models, as well as the risks of training on sensitive or confidential data. The AISec workshop provides a venue for presenting and discussing new developments at the intersection of security and privacy with AI and ML.
Topics of interest include (but are not limited to):
Theoretical topics related to security and privacy
Security applications of AI/ML
Security/Privacy-related AI problems
AI Safety and Trustworthiness
We invite the following types of papers, all of which are of archival nature and, hence, cannot substantially overlap with papers that have been published previously or that are simultaneously submitted to a journal or conference/workshop with proceedings.
Papers that do not follow these guidelines will be desk-rejected. Submissions must be in English and properly anonymized. Ensure that there is no way to identify the authors, including, e.g., when linking code repositories (consider using anonymous.4open.science), when referencing prior work of the same authors, or in the header of the pages. Papers should be at most 10 pages long in double-column ACM format, excluding the bibliography and well-marked appendices, and at most 12 pages overall. Committee members are not required to read appendices, so the paper should be intelligible without them. Papers must be submitted as PDF files and must strictly comply with the ACM format. We point the authors to the main ACM CCS'26 formatting instructions (except with the page limits described above), which can be found here. Benchmark papers without a valid (and functional) link to a resource containing the benchmark will be rejected. Submitted papers must also comply with the "Use of Generative AI" policy described below. Accepted papers will be published in the ACM Digital Library and/or ACM Press.
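For authors preparing a submission, a minimal LaTeX preamble along these lines typically satisfies the anonymization and formatting requirements. This is only a sketch assuming the standard `acmart` class; the ACM CCS'26 instructions linked above are authoritative.

```latex
% Sketch of an anonymized double-column ACM submission.
% The `anonymous' option suppresses author blocks for review.
\documentclass[sigconf,anonymous]{acmart}

\begin{document}
\title{Your Paper Title}

% With acmart, the abstract is placed before \maketitle.
\begin{abstract}
Abstract text goes here.
\end{abstract}
\maketitle

% ... paper body (at most 10 pages, excluding bibliography
%     and well-marked appendices; 12 pages overall) ...

\bibliographystyle{ACM-Reference-Format}
\bibliography{references}

% ... well-marked appendices, if any ...
\end{document}
```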
One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings. All accepted submissions will be presented at the workshop as posters. A subset of accepted papers will additionally be selected for spotlight presentations based on their review scores and novelty. Nonetheless, all accepted papers should be considered of equal importance and will be included in the ACM workshop proceedings.
All submissions must include an explicit paragraph (which does not count towards the page limit and must be placed before the references/appendices) declaring whether, and to what extent, Generative AI tools (such as LLMs) have been used in the making of the research paper. This means that even if no AI tools have been used, the paper must explicitly state: "No AI tools have been used for carrying out this research". Regardless, any use of AI tools during the creation of a research article must follow the official ACM policies (described here).
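In LaTeX, one natural way to satisfy this requirement is an unnumbered section placed just before the bibliography. This is a hedged sketch, not an official template; the section title and wording below are only illustrative.

```latex
% Example placement of the required Generative AI declaration:
% an unnumbered section immediately before \bibliography, so it
% sits before the references and does not count toward the limit.
\section*{Use of Generative AI}
No AI tools have been used for carrying out this research.
```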
At the same time, we will also require all reviewers to explicitly declare whether Generative AI tools (such as LLMs) have been used in the reviewing process. We expect reviewers to act responsibly and in ways that comply with the official ACM policies in this regard.
For any questions, feel free to contact the workshop organizers!
We are currently looking for reviewers. Contact [email protected] if you want to be involved.