Keynote Abstract
A security engineer trying to build a system that incorporates one or more machine-learning models will have many questions to ask. What are the attacks that scale enough to care about? What are the hard problems to solve -- collecting data, cleaning it, understanding what signals to extract, devising a better classifier, testing it or maintaining it? When do you block, and when do you alarm? How do you manage alarms at scale? In order to give some insight into this, I'll review the history of such models in fraud detection, in intrusion detection, in spam filtering and in attacks too. I expect that the future will not be one AI to rule them all, but lots of ML components in conventional systems for which security engineering will be a whole-system matter. How can we develop more useful components? As an example, we present a new way of instrumenting computer-vision models so as to detect adversarial samples with no added run-time cost. Our Taboo Trap mechanism enables us to embed a key in a deep neural network so as to diversify models and stop attacks scaling.
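The abstract compresses the mechanism, so below is a minimal PyTorch sketch of one plausible reading: per-layer activation thresholds act as a secret key, a training penalty teaches benign inputs to stay below them, and inference flags any input whose activations cross into the taboo region. TabooWrapper, thresholds and lam are illustrative names of mine, not the authors' published code, and the hooks assume layers with plain tensor outputs (e.g. Conv2d).

    import torch
    import torch.nn as nn

    class TabooWrapper(nn.Module):
        def __init__(self, model, thresholds):
            super().__init__()
            self.model = model
            self.thresholds = thresholds   # {layer name: threshold}; the secret key
            self.tripped = False           # True once a taboo activation fires
            self.penalty = 0.0             # training-time regulariser
            for name, module in model.named_modules():
                if name in thresholds:
                    module.register_forward_hook(self._hook_for(thresholds[name]))

        def _hook_for(self, threshold):
            def hook(module, inputs, output):
                excess = torch.relu(output - threshold)  # activation above the taboo line
                self.penalty = self.penalty + excess.sum()
                if bool(excess.max() > 0):
                    self.tripped = True
            return hook

        def forward(self, x):
            self.tripped, self.penalty = False, 0.0
            return self.model(x)

    # Training: loss = criterion(wrapped(x), y) + lam * wrapped.penalty
    # teaches benign data to respect the key.
    # Inference: run wrapped(x) as usual; if wrapped.tripped, treat x as a
    # likely adversarial sample.

Because the check reuses activations the network computes anyway, it adds essentially no run-time cost, and choosing different thresholds for each deployed model diversifies defences so that a single adversarial sample is less likely to transfer across them.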
Program

09:00–09:10 | Opening and Welcome
09:10–10:00 | Keynote: Security Engineering, Prof. Ross Anderson
10:00–10:45 | Coffee break
10:45–11:20 | Session: Privacy-Preserving Federated Learning
A Hybrid Approach to Privacy-Preserving Federated Learning
Authors: Stacey Truex (Georgia Institute of Technology); Nathalie Baracaldo, Ali Anwar, Thomas Steinke, Heiko Ludwig, Rui Zhang, Yi Zhou (IBM Research - Almaden)

HybridAlpha: An Efficient Approach for Privacy-Preserving Federated Learning
Authors: Runhua Xu (University of Pittsburgh); Nathalie Baracaldo, Yi Zhou, Ali Anwar, Heiko Ludwig (IBM Research - Almaden)
|
11:20–12:00 | Session: Malware Detection
Robust Detection of Obfuscated Strings in Android Apps
Authors: Alireza Mohammadinodooshan, Ulf Kargén, Nahid Shahmehri (Linköping University)

Malware Detection on Highly Imbalanced Data through Sequence Modeling
Authors: Rajvardhan Oak, Min Du, David Yan (University of California, Berkeley); Idan Amit (Palo Alto Networks)
|
12:00–14:00 | Lunch
14:00–15:00 | Session: Machine Learning for Security
Risk Prioritization by Leveraging Latent Vulnerability Features in a Contested Environment
Authors: Kenneth Alperin, Allan Wollaber, Dennis Ross, Pierre Trepagnier (MIT Lincoln Laboratory); Leslie Leonard (U.S. Army Engineer Research and Development Center)

Cross-Vendor Knowledge Transfer for Managed Security Services with Triplet Network
Authors: Toshiki Shibahara, Hirokazu Kodera, Daiki Chiba, Mitsuaki Akiyama, Kunio Hato (NTT Secure Platform Laboratories); Ola Söderström, Daniel Dalek (NTT Security); Masayuki Murata (Osaka University)

CADENCE: Conditional Anomaly Detection for Events Using Noise-Contrastive Estimation
Authors: Mohammad Ruhul Amin (Stony Brook University); Pranav Garg, Baris Coskun (Amazon Web Services)
|
15:00–15:45 | Coffee break
15:45–16:45 | Session: Adversarial Machine Learning and Adversarial Examples
Making Targeted Black-box Evasion Attacks Effective and Efficient
Authors: Mika Juuti, Buse Gul Atli, N. Asokan (Aalto University)

Interpolated Adversarial Training: Achieving Robust Neural Networks Without Sacrificing Too Much Accuracy
Authors: Alex Lamb (Montreal Institute for Learning Algorithms); Vikas Verma, Juho Kannala (Aalto University); Yoshua Bengio (Montreal Institute for Learning Algorithms)

Analyzing the Robustness of Open-World Machine Learning
Authors: Vikash Sehwag, Arjun Nitin Bhagoji, Liwei Song (Princeton University); Chawin Sitawarin (University of California, Berkeley); Daniel Cullina (Princeton University); Mung Chiang (Princeton University/Purdue University); Prateek Mittal (Princeton University)
16:45–17:00 | Closing remarks |
Call for Papers

Recent years have seen a dramatic increase in applications of artificial intelligence, machine learning, and data mining to security and privacy problems. The use of AI and ML in security-sensitive domains, where adversaries may attempt to mislead or evade intelligent machines, creates new frontiers for security research. The recent widespread adoption of deep learning, whose security properties are difficult to reason about directly, has only added to the importance of this research. The AISec workshop, now in its 12th year, is the leading venue for presenting and discussing new developments at the intersection of security and privacy with AI and machine learning.
Topics of interest include (but are not limited to):
AI/ML Security
AI/ML for Computer Security
We invite the following types of papers:
Paper submissions must be at most 10 pages in double-column ACM format, excluding the bibliography and well-marked appendices, and at most 12 pages overall. Papers must be prepared in LaTeX using the current ACM format, which is also required for the camera-ready version; follow the main CCS formatting instructions, except with the page limits described above. In particular, we recommend the sigconf template, which can be downloaded from https://www.acm.org/publications/proceedings-template. Accepted papers will be published by the ACM Digital Library and/or ACM Press. Committee members are not required to read the appendices, so papers should be intelligible without them. Submissions must be in English and properly anonymized.
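For orientation, a minimal skeleton consistent with these instructions might look as follows. This is a sketch, not an official template: the sigconf and anonymous options of the standard acmart class handle the format and anonymization, but defer to the CCS formatting instructions for the authoritative settings.

    \documentclass[sigconf,anonymous]{acmart}
    \begin{document}
    \title{Your Paper Title}
    \author{Anonymous Author(s)}
    \maketitle
    % Body text, bibliography, and well-marked appendices follow.
    \end{document}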
Submission link: https://aisec2019.hotcrp.com