12th ACM Workshop on
Artificial Intelligence and Security
November 15, 2019 — London, UK
co-located with the 26th ACM Conference on Computer and Communications Security

Keynote

Title: Machine Learning meets Security Engineering: What next?
Ross Anderson, University of Cambridge, UK

Abstract

A security engineer trying to build a system that incorporates one or more machine-learning models will have many questions to ask. What are the attacks that scale enough to care about? What are the hard problems to solve -- collecting data, cleaning it, understanding what signals to extract, devising a better classifier, testing it or maintaining it? When do you block, and when do you alarm? How do you manage alarms at scale? In order to give some insight into this, I'll review the history of such models in fraud detection, in intrusion detection, in spam filtering and in attacks too. I expect that the future will not be one AI to rule them all, but lots of ML components in conventional systems for which security engineering will be a whole-system matter. How can we develop more useful components? As an example, we present a new way of instrumenting computer-vision models so as to detect adversarial samples with no added run-time cost. Our Taboo Trap mechanism enables us to embed a key in a deep neural network so as to diversify models and stop attacks scaling.
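To make the idea concrete, below is a minimal sketch (in PyTorch) of one way such a mechanism could work. The architecture, threshold value, and penalty weight are illustrative assumptions, not details of the actual Taboo Trap: training penalises activations that exceed a secret threshold (the "key"), and inference flags any input whose activations cross it, at essentially no added run-time cost.

    # Illustrative sketch only, not the authors' implementation.
    # Idea: a secret activation threshold acts as a key; benign inputs are
    # trained to respect it, and inputs that violate it are flagged.
    import torch
    import torch.nn as nn

    class TabooNet(nn.Module):
        def __init__(self, taboo_threshold: float = 1.0):
            super().__init__()
            self.conv = nn.Conv2d(1, 16, 3, padding=1)
            self.fc = nn.Linear(16 * 28 * 28, 10)
            self.taboo_threshold = taboo_threshold  # the secret "key"

        def forward(self, x):
            h = torch.relu(self.conv(x))            # instrumented activations
            return self.fc(h.flatten(1)), h

    def taboo_penalty(h, threshold):
        # Push activation mass below the taboo threshold during training.
        return torch.clamp(h - threshold, min=0).pow(2).mean()

    def is_adversarial(h, threshold):
        # Detection at inference is a cheap per-input threshold check.
        return (h > threshold).flatten(1).any(dim=1)

    model = TabooNet()
    opt = torch.optim.Adam(model.parameters())
    ce = nn.CrossEntropyLoss()
    x = torch.randn(8, 1, 28, 28)                   # stand-in batch
    y = torch.randint(0, 10, (8,))
    opt.zero_grad()
    logits, h = model(x)
    loss = ce(logits, y) + 0.1 * taboo_penalty(h, model.taboo_threshold)
    loss.backward()
    opt.step()
    flags = is_adversarial(h.detach(), model.taboo_threshold)

Giving each deployed model instance its own threshold diversifies the fleet, so an adversarial example tuned against one instance need not transfer to another.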

Programme

Workshop Location: Blenheim Room at the Hilton Metropole
09:00–09:10 Opening and Welcome
09:10–10:00 Keynote: Machine Learning meets Security Engineering: What next? (Prof. Ross Anderson)
10:00–10:45 Coffee break
10:45–11:20 Session: Privacy-preserving Federated Learning
A Hybrid Approach to Privacy-Preserving Federated Learning
Authors: Stacey Truex (Georgia Institute of Technology); Nathalie Baracaldo, Ali Anwar, Thomas Steinke, Heiko Ludwig, Rui Zhang, Yi Zhou (IBM Research - Almaden)
HybridAlpha: An Efficient Approach for Privacy-Preserving Federated Learning
Authors: Runhua Xu (University of Pittsburgh); Nathalie Baracaldo, Yi Zhou, Ali Anwar, Heiko Ludwig (IBM Research - Almaden)
11:20–12:00 Session: Malware Detection
Robust Detection of Obfuscated Strings in Android Apps
Authors: Alireza Mohammadinodooshan, Ulf Kargén, Nahid Shahmehri (Linköping University)
Malware Detection on Highly Imbalanced Data through Sequence Modeling
Authors: Rajvardhan Oak, Min Du, David Yan (University of California Berkeley); Idan Amit (Palo Alto Networks)
12:00–14:00 Lunch
14:00–15:00 Session: Machine Learning for Security
Risk Prioritization by Leveraging Latent Vulnerability Features in a Contested Environment
Authors: Kenneth Alperin, Allan Wollaber, Dennis Ross, Pierre Trepagnier (MIT Lincoln Laboratory); Leslie Leonard (U.S. Army Engineer Research and Development Center)
Cross-Vendor Knowledge Transfer for Managed Security Services with Triplet Network
Authors: Toshiki Shibahara, Hirokazu Kodera, Daiki Chiba, Mitsuaki Akiyama, Kunio Hato (NTT Secure Platform Laboratories); Ola Söderström, Daniel Dalek (NTT Security); Masayuki Murata (Osaka University)
CADENCE: Conditional Anomaly Detection for Events Using Noise-Contrastive Estimation
Authors: Mohammad Ruhul Amin (Stony Brook University); Pranav Garg, Baris Coskun (Amazon Web Services)
15:00–15:45 Coffee break
15:45–16:45 Session: Adversarial Machine Learning and Adversarial Examples
Making Targeted Black-box Evasion Attacks Effective and Efficient
Authors: Mika Juuti, Buse Gul Atli, N. Asokan (Aalto University)
Interpolated Adversarial Training: Achieving Robust Neural Networks Without Sacrificing Too Much Accuracy
Authors: Alex Lamb (Montreal Institute for Learning Algorithms); Vikas Verma, Juho Kannala (Aalto University); Yoshua Bengio (Montreal Institute for Learning Algorithms)
Analyzing the Robustness of Open-World Machine Learning
Authors: Vikash Sehwag, Arjun Nitin Bhagoji, Liwei Song (Princeton University); Chawin Sitawarin (University of California, Berkeley); Daniel Cullina (Princeton University); Mung Chiang (Princeton University/Purdue University); Prateek Mittal (Princeton University)
16:45–17:00 Closing remarks

Call for Papers

Important Dates

  • Paper submission deadline (extended): June 29, 2019, 11:59 PM (AoE, UTC-12)
  • Reviews released: August 4, 2019
  • Author response due: August 7, 2019
  • Acceptance notification: August 10, 2019
  • Camera-ready due: August 30, 2019 (hard deadline)
  • Workshop: November 15, 2019

Overview

Recent years have seen a dramatic increase in applications of artificial intelligence, machine learning, and data mining to security and privacy problems. The use of AI and ML in security-sensitive domains, in which adversaries may attempt to mislead or evade intelligent machines, creates new frontiers for security research. The recent widespread adoption of deep learning techniques, whose security properties are difficult to reason about directly, has only added to the importance of this research. The AISec workshop, now in its 12th year, is the leading venue for presenting and discussing new developments at the intersection of security and privacy with AI and machine learning.

Topics of Interest

Topics of interest include (but are not limited to):

AI/ML Security

  • Adversarial attacks on machine learning
  • Defenses against adversarial attacks
  • Security of deep learning systems
  • Theoretical foundations of machine learning security
  • Formal verification of machine learning models
  • Differential privacy and privacy-preserving learning
  • Explainability and fairness

AI/ML for Computer Security

  • Spam and phishing detection
  • Botnet and malware detection
  • Intrusion detection and response
  • Computer forensics
  • Data anonymization/de-anonymization
  • Security in social networks
  • Vulnerability discovery

Submission Guidelines

We invite the following types of papers:

  • Original research papers on any topic at the intersection of AI or machine learning with security, privacy, or related areas.
  • Position and open-problem papers discussing the relationship of AI or machine learning to security or privacy. Submitted papers of this type may not substantially overlap with papers that have been published previously or that are simultaneously submitted to a journal or to a conference or workshop with proceedings.
  • Systematization-of-knowledge papers, which should distill the AI or machine learning contributions of a previously published series of security papers.

Paper submissions must be at most 10 pages in double-column ACM format, excluding the bibliography and well-marked appendices, and at most 12 pages overall. Papers should be prepared in LaTeX in the current ACM format, which is required for the camera-ready version; please follow the main CCS formatting instructions, except with the page limits described above. In particular, we recommend the sigconf template, which can be downloaded from https://www.acm.org/publications/proceedings-template. Committee members are not required to read the appendices, so the paper should be intelligible without them. Submissions must be in English and properly anonymized. Accepted papers will be published by the ACM Digital Library and/or ACM Press.

Submission Site

Submission link: https://aisec2019.hotcrp.com.

Committee

Workshop Chairs

Steering Committee

Program Committee

  • Hyrum Anderson, Endgame, Inc.
  • Sam Bretheim, Craigslist
  • Aylin Caliskan, George Washington University
  • Lorenzo Cavallaro, King's College London
  • Yizheng Chen, Columbia University
  • Clarence Chio, AI Village
  • Igino Corona, Pluribus One SRL
  • Scott Coull, FireEye
  • Ambra Demontis, University of Cagliari
  • Christos Dimitrakakis, Chalmers University / University of Lille / Harvard University
  • Brendan Dolan-Gavitt, NYU
  • Milenko Drinic, Microsoft
  • Markus Duermuth, Ruhr University Bochum
  • David Freeman, Facebook, Inc.
  • Sakshi Jain, LinkedIn
  • Pavel Laskov, University of Liechtenstein
  • Daniel Lowd, University of Oregon
  • Lei Ma, Kyushu University
  • Davide Maiorca, University of Cagliari
  • Pratyusa K Manadhata, Hewlett-Packard Laboratories
  • Brad Miller, Google
  • Aikaterini Mitrokotsa, Chalmers University of Technology
  • Luis Muñoz-González, Imperial College London
  • Blaine Nelson, Google
  • Alina Oprea, Northeastern University
  • Rebekah Overdorf, EPFL
  • Aleatha Parker-Wood, University of California
  • Fabio Pierazzi, King's College London
  • Konrad Rieck, TU Braunschweig
  • Benjamin Rubinstein, The University of Melbourne
  • Yevgeniy Vorobeychik, Vanderbilt University
  • Gang Wang, Virginia Tech
  • Armin Wasicek, Avast Security