19th ACM Workshop on
Artificial Intelligence and Security
November 15-19th, 2026 - The Hague, Netherlands
co-located with the 33rd ACM Conference on Computer and Communications Security

Call for Papers

Important Dates (all deadlines are 23:59 / 11:59pm AoE)

  • Paper submission deadline: July 25th, 2026 (firm)
  • Reviews due: September 1st, 2026
  • Reviews released and acceptance notification: September 3rd, 2026
  • Camera ready due: September 16th, 2026 (firm)
  • Workshop day: TBD

Submission Site

Submission link: TBD.

Scope of AISec

Applications of Artificial Intelligence (AI), Machine Learning (ML), and data mining to security and privacy problems are ubiquitous. The analytic tools and intelligent behavior provided by AI/ML models are now pivotal for autonomous real-time analysis and decision making in domains with a wealth of data or that require quick reactions to constantly changing situations. The use of learning-based methods in security-sensitive domains, in which adversaries may attempt to mislead or evade intelligent machines, creates new frontiers for security research. The widespread adoption of deep learning techniques, such as Large Language Models (LLMs), whose security properties are difficult to reason about directly, has only added to the importance of this research.
In addition, and unfortunately, the application of AI/ML techniques often raises privacy issues, such as the possibility of extracting sensitive data from trained models, as well as the risks of training on sensitive or confidential data. The AISec workshop provides a venue for presenting and discussing new developments at the intersection of security and privacy with AI and ML.

Topics of Interest

Topics of interest include (but are not limited to):

Theoretical topics related to security and privacy

  • Adversarial ML
  • Security of AI/ML systems
  • Robust statistics
  • Economics of AI/ML-based security
  • Differential privacy

Security applications of AI/ML

  • Computer forensics
  • Spam and phishing detection, prevention, and response
  • (Network) Intrusion detection and response
  • Malware identification and analysis
  • Data anonymization/de-anonymization
  • Security and Privacy on the Web
  • Extraction of Cyber-threat Intelligence
  • Security and Privacy in data markets

Security/Privacy-related AI problems

  • Distributed inference and decision making for security
  • Secure multiparty computation and cryptographic approaches
  • Model confidentiality
  • Privacy-preserving data mining
  • Adaptive side-channel attacks
  • Design and analysis of CAPTCHAs
  • Safe and Usable applications of AI
  • Human factors in AI/ML and Security/Privacy
  • AI approaches to trust and reputation
  • Vulnerability testing through intelligent probing (e.g. fuzzing)
  • Content-driven security policy management & access control
  • AI Misuse (e.g., Large Language Models for automated hacking, misinformation, deepfakes)

AI Safety and Trustworthiness

  • Alignment failure modes and specification gaming
  • Scalable oversight and human-in-the-loop supervision
  • Dangerous capability evaluations
  • Red-teaming and stress-testing of foundation models
  • Deceptive alignment and sandbagging
  • Multi-agent coordination risks
  • Containment and guardrails for autonomous agents
  • Truthfulness, calibration, and hallucination mitigation
  • Societal-scale risks from AI misuse and proliferation
  • AI Ethics

Submission Guidelines

We invite the following types of papers, all of which are of archival nature and, hence, cannot substantially overlap with papers that have been published previously or that are simultaneously submitted to a journal or conference/workshop with proceedings.

  • Original research papers on any topic in the intersection of AI or machine learning with security, privacy, or related areas.
  • Position and open-problem papers discussing the relationship of AI or machine learning to security or privacy, advocating for new perspectives, or providing a critical look at prior research.
  • Systematization-of-knowledge papers, which should distill the AI or machine learning contributions of a previously-published series of security papers.
  • [NEW] Benchmark papers proposing a new benchmark in the context of AI/ML and security/privacy. (Note: artifact sharing at submission time is required for papers in this category.)
Authors can specify the paper type in the submission form. We encourage authors to share their artifacts at submission time by providing a link to an (anonymous) repository. We also encourage authors to reflect on the ethical implications of their research, which can be discussed in a dedicated section or paragraph of the paper.

Papers not following these guidelines will be desk-rejected. Submissions must be in English and properly anonymized. Ensure that there is no way to identify the authors, e.g., when linking code repositories (consider using anonymous.4open.science), when referencing the authors' own prior work, or in the page headers. Papers should be at most 10 pages long in double-column ACM format, excluding the bibliography and well-marked appendices, and at most 12 pages overall. Committee members are not required to read appendices, so the paper should be intelligible without them. Papers must be submitted as PDF files and must strictly comply with the ACM format. We point authors to the main ACM CCS'26 formatting instructions (except with the page limits described above), which can be found here. Benchmark papers without a valid (and functional) link to a resource containing the benchmark will be rejected. Submitted papers must also comply with the "Use of Generative AI" policy described below. Accepted papers will be published in the ACM Digital Library and/or ACM Press.

One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings. All accepted submissions will be presented at the workshop as posters. Accepted papers will be selected for presentation as spotlights based on their review score and novelty. Nonetheless, all accepted papers should be considered as having equal importance and will be included in the ACM workshop proceedings.

Use of Generative AI [NEW]

All submissions must include an explicit paragraph (which does not count towards the page limit and should be placed before the references/appendices) declaring whether, and to what extent, Generative AI tools (such as LLMs) have been used in the making of the research paper. Even if no AI tools have been used, this must be stated explicitly, e.g., "No AI tools have been used for carrying out this research." In any case, the use of AI tools during the creation of a research article must follow the official ACM policies (described here).
At the same time, we will also require all reviewers to explicitly declare whether Generative AI tools (such as LLMs) have been used in the reviewing process. We expect reviewers to act responsibly and in compliance with the official ACM policies in this regard.

For any questions, feel free to contact the workshop organizers!

Committee

Workshop Chairs

Steering Committee

Program Committee

Reviewers
  • TBD
  • TBD

We are currently looking for reviewers. Contact [email protected] if you want to be involved.