18th ACM Workshop on
Artificial Intelligence and Security
October 17th, 2025 - Taipei, Taiwan
co-located with the 32nd ACM Conference on Computer and Communications Security
Photo: Taipei Skyline 2015, originally posted to Flickr (License: CC BY 2.0)

Keynotes

Title: Achieving a Secure AI Agent Ecosystem

Nicole Nichols, Distinguished Engineer @ Palo Alto Networks

Nicole Nichols is a Distinguished Engineer in Machine Learning Security at Palo Alto Networks. She previously held senior roles at Apple and Microsoft, and she has contributed to both academic and industry advances in adversarial machine learning and security. She has published at numerous ACM, IEEE, and CVPR workshops and was co-chair of the ICML ML4Cyber workshop. She holds a PhD in Electrical Engineering from the University of Washington.

AI agent systems, capable of complex planning and autonomous action in real-world environments, present profound and novel cybersecurity challenges. Current cybersecurity paradigms are too brittle to address the unique vulnerabilities stemming from dynamic generative agents with opaque interpretability, new protocols connecting tools and data, and the unpredictable dynamics of multi-agent interactions. Prior work has identified a range of security gaps in AI agents. However, it is essential to move beyond reiterating concerns and toward a collaborative, action-oriented agenda to mitigate these risks. An international group of leading industrial and academic researchers was gathered by Schmidt Sciences, RAND, and Palo Alto Networks to contextualize the fragmented cross-domain expertise and insights needed to produce solutions that reflect the full landscape of interconnected challenges that uniquely arise in the setting of LLM-driven AI agents. The resulting report distills the collective insights from this gathering and contributes: 1) a flexible definition of the functional properties of AI agents, 2) a description of how these AI agent properties create novel implications for security, and 3) an open roadmap toward comprehensive, interconnected solutions.

Title: Context Rules! Privacy and Security for Future Trustworthy AI Agents

Eugene Bagdasarian, Assistant Professor @ University of Massachusetts Amherst, Researcher @ Google

Eugene Bagdasarian is an Assistant Professor at the University of Massachusetts Amherst and a Researcher at Google. His work focuses on studying attack vectors in AI systems deployed in the real world and on proposing new designs that mitigate these attacks. He has received a Distinguished Paper Award at USENIX Security and an Apple AI/ML PhD Fellowship.


New AI agents integrate with complex systems and users’ data, thus opening new attack vectors. Worse, security designs struggle with the versatility of agents: booking a trip requires different controls than responding to an email. In this talk, I propose to ground agentic privacy and security in the theory of Contextual Integrity, which defines privacy as appropriate information flows under contextual norms. We use language models to infer the current trusted context and synthesize restrictions on tools and data, then develop a policy engine to deterministically enforce them, helping to isolate attacks that abuse agentic capabilities and data access. While promising, this design raises new questions: from establishing trusted context and improving policy generation to collecting social norms and resolving context ambiguity.
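
To make the deterministic-enforcement idea concrete, here is a minimal Python sketch of a contextual-integrity-style policy check; the contexts, tools, and rules are hypothetical illustrations, not the actual system described in the talk:

    # Minimal sketch of contextual-integrity-style enforcement.
    # All contexts, tools, and rules below are hypothetical.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Policy:
        context: str              # trusted context inferred upstream (e.g., by an LLM)
        allowed_tools: frozenset  # tools appropriate in this context
        allowed_data: frozenset   # data categories that may flow in this context

    # Hypothetical norms: booking a trip requires different controls
    # than responding to an email.
    POLICIES = {
        "trip_booking": Policy("trip_booking",
                               frozenset({"calendar", "payments"}),
                               frozenset({"itinerary", "loyalty_number"})),
        "email_reply":  Policy("email_reply",
                               frozenset({"email_send"}),
                               frozenset({"thread_history"})),
    }

    def enforce(context: str, tool: str, data_category: str) -> bool:
        """Deterministically allow a tool call only if both the tool and the
        data flow are appropriate under the inferred context's norms."""
        policy = POLICIES.get(context)
        return (policy is not None
                and tool in policy.allowed_tools
                and data_category in policy.allowed_data)

    # The email agent may send a reply, but an injected instruction
    # asking it to move money is blocked as out of context:
    assert enforce("email_reply", "email_send", "thread_history")
    assert not enforce("email_reply", "payments", "loyalty_number")

The key design choice in such a split is that the language model only proposes the context and restrictions; the allow/deny decision itself is a deterministic lookup, so a prompt-injected instruction cannot talk the engine into an out-of-context tool call.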

Title: What Unlearning Tells Us About Machine Learning

Katherine Lee, Researcher @ OpenAI

Katherine Lee is a researcher at OpenAI. Her work has provided essential empirical evidence and measurement to ground discussions of concerns that language models infringe copyright, and of how language models can respect individuals’ rights to privacy and control over their data. She has also helped develop large language models (T5), devised methods for reducing memorization, and studied the impact of data curation on model development. Her work has received awards at venues including NeurIPS, ICML, ICLR, and USENIX.

Abstract To Be Announced

Programme

The following times are in Taiwan Standard Time (TST, UTC+8).

09:00–09:15 Opening and Welcome
09:15–10:00 Keynote 1
Achieving a Secure AI Agent Ecosystem
Nicole Nichols, Distinguished Engineer @ Palo Alto Networks
10:00–10:30 Spotlights
I Know Which LLM Wrote Your Code Last Summer: LLM-generated Code Stylometry for Authorship Attribution
Authors : Tamas Bisztray (University of Oslo), Bilel Cherif (Technology Innovation Institute), Richard A. Dubniczky (Eötvös Loránd University), Nils Gruschka (University of Oslo), Bertalan Borsos (Eötvös Loránd University), Mohamed Amine Ferrag (University of Guelma), Attila Kovacs (Eötvös Loránd University), Vasileios Mavroeidis (University of Oslo), Norbert Tihanyi (Technology Innovation Institute)
E-PhishGEN: Unlocking Novel Research in Phishing Email Detection
Authors : Luca Pajola (Spritzmatter & University of Padua), Eugenio Caripoti (University of Padua), Stefan Banzer (University of Liechtenstein), Simeone Pizzi (Spritzmatter), Mauro Conti (University of Padua & Örebro University), Giovanni Apruzzese (University of Liechtenstein)
Defending Against Prompt Injection With a Few DefensiveTokens
Authors : Sizhe Chen (UC Berkeley), Yizhu Wang (UC Berkeley), Nicholas Carlini (Anthropic), Chawin Sitawarin (Google), David Wagner (UC Berkeley)
Enhancing Robustness in Post-Processing Watermarking: An Ensemble Attack Network Using CNNs and Transformers
Authors : Tzuhsuan Huang (Academia Sinica), Cheng Yu Yeo (National Yang Ming Chiao Tung University), Tsai-Ling Huang (National Yang Ming Chiao Tung University), Hong-Han Shuai (National Yang Ming Chiao Tung University), Wen-Huang Cheng (National Taiwan University), Jun-Cheng Chen (Academia Sinica)
10:30–11:00 Coffee break
11:00–12:00 Poster session 1
12:00–13:30 Lunch
13:30–14:15 Keynote 2
What Unlearning Tells Us About Machine Learning
Katherine Lee, Researcher @ OpenAI
14:15–15:00 Keynote 3
Context Rules! Privacy and Security for Future Trustworthy AI Agents
Eugene Bagdasarian, Assistant Professor @ University of Massachusetts Amherst, Researcher @ Google
15:00–15:30 Break
15:30–16:30 Poster session 2
16:30–16:45 Closing remarks

Accepted Papers

Session 1: AI/ML Security & Attacks

Enhancing Prompt Injection Attacks to LLMs via Poisoning Alignment
Authors : Zedian Shao (Duke University), Hongbin Liu (Duke University), Jaden Mu (East Chapel Hill High School), Neil Gong (Duke University)
Rethinking How to Evaluate Language Model Jailbreak
Authors : Hongyu Cai (Purdue University), Arjun Arunasalam (Purdue University), Leo Y. Lin (Purdue University), Antonio Bianchi (Purdue University), Z. Berkay Celik (Purdue University)
How Not to Detect Prompt Injections with an LLM
Authors : Sarthak Choudhary (University of Wisconsin-Madison), Divyam Anshumaan (University of Wisconsin-Madison), Nils Palumbo (University of Wisconsin-Madison), Somesh Jha (University of Wisconsin-Madison)
Defending Against Prompt Injection With a Few DefensiveTokens
Authors : Sizhe Chen (UC Berkeley), Yizhu Wang (UC Berkeley), Nicholas Carlini (Anthropic), Chawin Sitawarin (Google), David Wagner (UC Berkeley)
JailbreaksOverTime: Detecting Jailbreak Attacks Under Distribution Shift
Authors : Julien Piet (University of California, Berkeley), Xiao Huang (University of California, Berkeley), Dennis Jacob (University of California, Berkeley), Annabella Chow (University of California, Berkeley), Maha Alrashed (KACST), Geng Zhao (University of California, Berkeley), Zhanhao Hu (University of California, Berkeley), Chawin Sitawarin (University of California, Berkeley), Basel Alomair (KACST), David Wagner (University of California, Berkeley)
CyberLLMInstruct: A Pseudo-Malicious Dataset Revealing Safety-Performance Trade-offs in Cyber Security LLM Fine-tuning
Authors : Adel ElZemity (University of Kent), Budi Arief (University of Kent), Shujun Li (University of Kent)
LLM-CVX: A Benchmarking Framework for Assessing the Offensive Potential of LLMs in Exploiting CVEs
Authors : Mohamed Amine El yagouby (Université de Lorraine, CNRS, Inria, LORIA, F-54000 Nancy, France and Université Internationale de Rabat, TICLab, 11103 Rabat, Morocco), Abdelkader Lahmadi (Université de Lorraine, CNRS, Inria, LORIA, F-54000 Nancy, France), Mehdi Zakroum (Université Internationale de Rabat, TICLab, 11103 Rabat, Morocco), Olivier Festor (Université de Lorraine, CNRS, Inria, LORIA, F-54000 Nancy, France), Mounir Ghogho (Université Internationale de Rabat, TICLab, 11103 Rabat, Morocco)
The Hidden Threat in Plain Text: Attacking RAG Data Loaders
Authors : Alberto Castagnaro (University of Padua), Umberto Salviati (University of Padua), Mauro Conti (University of Padua & Örebro University), Luca Pajola (University of Padua), Simeone Pizzi (University of Padua)
Black-Box Universal Adversarial Attack on Automatic Speech Recognition Systems for Maritime Radio Communication Using Evolutionary Strategies
Authors : Aliza Katharina Reif (German Aerospace Center (DLR), Institute for AI Safety and Security), Lorenzo Bonasera (German Aerospace Center (DLR), Institute for AI Safety and Security), Stjepan Picek (Radboud University), Oscar Hernán Ramírez-Agudelo (German Aerospace Center (DLR), Institute for AI Safety and Security), Michael Karl (German Aerospace Center (DLR), Institute for AI Safety and Security)
Seeing is Believing: Interpreting Behavioral Changes in Audio Deepfake Detectors Arising from Data Augmentation
Authors : Boo Fullwood (Georgia Institute of Technology), Fabian Monrose (Georgia Institute of Technology)
E-PhishGEN: Unlocking Novel Research in Phishing Email Detection
Authors : Luca Pajola (Spritzmatter & University of Padua), Eugenio Caripoti (University of Padua), Stefan Banzer (University of Liechtenstein), Simeone Pizzi (Spritzmatter), Mauro Conti (University of Padua & Örebro University), Giovanni Apruzzese (University of Liechtenstein & Reykjavik University)
EthAegis: Featured graph-based Fraud Detection in Ethereum Transactions
Authors : Ankur Jain (Indian Institute of Technology Patna), Somanath Tripathy (Indian Institute of Technology Patna)
That's not you! Applying Neural Networks to Risk-Based Authentication to Detect Suspicious Logins
Authors : Daniel Rotter (Leibniz University Hannover), Tim Dörrie Schwabe (Leibniz University Hannover), Markus Duermuth (Leibniz University Hannover)
Enhancing Robustness in Post-Processing Watermarking: An Ensemble Attack Network Using CNNs and Transformers
Authors : Tzuhsuan Huang (Academia Sinica), Cheng Yu Yeo (National Yang Ming Chiao Tung University), Tsai-Ling Huang (National Yang Ming Chiao Tung University), Hong-Han Shuai (National Yang Ming Chiao Tung University), Wen-Huang Cheng (National Taiwan University), Jun-Cheng Chen (Academia Sinica)

Session 2: Privacy, Unlearning, and Incident Reporting

GhosTEE: An Approach to Solving the GPU-Privacy Trade-off for Machine Learning Inference
Authors : Andrei-Cosmin Aprodu (Fraunhofer AISEC), Hendrik Meyer zum Felde (Fraunhofer AISEC), Daniel Kowatsch (Fraunhofer AISEC), Konstantin Böttinger (Fraunhofer AISEC)
I Know Which LLM Wrote Your Code Last Summer: LLM-generated Code Stylometry for Authorship Attribution
Authors : Tamas Bisztray (University of Oslo), Bilel Cherif (Technology Innovation Institute), Richard A. Dubniczky (Eötvös Loránd University), Nils Gruschka (University of Oslo), Bertalan Borsos (Eötvös Loránd University), Mohamed Amine Ferrag (University of Guelma), Attila Kovacs (Eötvös Loránd University), Vasileios Mavroeidis (University of Oslo), Norbert Tihanyi (Technology Innovation Institute)
Online Incident Response Planning under Model Misspecification through Bayesian Learning and Belief Quantization
Authors : Kim Hammar (KTH Royal Institute of Technology), Tao Li (City University of Hong Kong)
AI-related Vulnerabilities within CVEs: Are We Ready Yet? A Study of Vulnerability Disclosure in AI Products
Authors : Marcello Maugeri (University of Catania), Gianpietro Castiglione (University of Catania), Mario Raciti (IMT School for Advanced Studies Lucca), Giampaolo Bella (University of Catania)
Federated Unlearning using Tree-based Sharding
Authors : Christian Troiani (Institute of Cryptography and Cybersecurity, University of Wollongong), Willy Susilo (Institute of Cryptography and Cybersecurity, University of Wollongong), Yang-Wai Chow (Institute of Cryptography and Cybersecurity, University of Wollongong), Yannan Li (Institute of Cryptography and Cybersecurity, University of Wollongong)
Oops!... They Stole it Again: Attacks on Split Learning
Authors : Tanveer Khan (Tampere University), Antonis Michalas (Tampere University)
LLM Unlearning on Noisy Forget Sets: A Study of Incomplete, Rewritten, and Watermarked Data
Authors : Changsheng Wang (Michigan State University), Yihua Zhang (Michigan State University), Dennis Wei (IBM Research), Jinghan Jia (Michigan State University), Pin-Yu Chen (IBM Research), Sijia Liu (Michigan State University)
Improving Unlearning with Model Updates Probably Aligned with Gradients
Authors : Virgile Dine (Inria), Teddy Furon (Inria), Charly Faure (AMIAD)
Ensembling Membership Inference Attacks Against Tabular Generative Models
Authors : Joshua Ward (University of California Los Angeles), Christy Yang (Stanford University), Chi-Hua Wang (University of California Los Angeles), Guang Cheng (University of California Los Angeles)
DP-Morph: Improving the Privacy-Utility-Performance Trade-off for Differentially Private OCT Segmentation
Authors : Shiva Parsarad (University of Basel), Ehsan Yousefzadeh-Asl-Miandoab (IT University of Copenhagen), Raheleh Kafieh (Durham University), Pinar Tozun (IT University of Copenhagen), Florina Ciorba (University of Basel), Isabel Wagner (University of Basel)

Call for Papers

Important Dates

  • Paper submission deadline: July 3rd, 2025 (extended from June 20th, 2025; all deadlines are AoE, UTC-12)
  • Reviews due: August 3rd, 2025 (extended from July 20th, 2025)
  • Reviews released and acceptance notification: August 10th, 2025
  • Camera ready due: August 22nd, 2025
  • Workshop day: October 17th, 2025

Overview

Recent years have seen a dramatic increase in applications of Artificial Intelligence (AI), Machine Learning (ML), and data mining to security and privacy problems. The analytic tools and intelligent behavior provided by these techniques make AI and ML increasingly important for autonomous real-time analysis and decision making in domains with a wealth of data or that require quick reactions to constantly changing situations. The use of learning methods in security-sensitive domains, in which adversaries may attempt to mislead or evade intelligent machines, creates new frontiers for security research. The recent widespread adoption of “deep learning” techniques, whose security properties are difficult to reason about directly, has only added to the importance of this research. In addition, data mining and machine learning techniques create a wealth of privacy issues, due to the abundance and accessibility of data. The AISec workshop provides a venue for presenting and discussing new developments in the intersection of security and privacy with AI and ML.

Topics of Interest

Topics of interest include (but are not limited to):

Theoretical topics related to security

  • Adversarial learning
  • Security of deep learning systems
  • Robust statistics
  • Learning in games
  • Economics of security
  • Differential privacy

Security applications

  • Computer forensics
  • Spam detection
  • Phishing detection and prevention
  • Botnet detection
  • Intrusion detection and response
  • Malware identification and analysis
  • Data anonymization/de-anonymization
  • Security in social networks
  • Big data analytics for security
  • User authentication
  • Securing AI agents

Security-related AI problems

  • Distributed inference and decision making for security
  • Secure multiparty computation and cryptographic approaches
  • Model confidentiality
  • Privacy-preserving data mining
  • Adaptive side-channel attacks
  • Design and analysis of CAPTCHAs
  • AI approaches to trust and reputation
  • Vulnerability testing through intelligent probing (e.g. fuzzing)
  • Content-driven security policy management & access control
  • Anomalous behavior detection (e.g. for the purpose of fraud detection)
  • AI Misuse (e.g., Large Language Models for automated hacking, misinformation, deepfakes)
  • Safety and ethical issues of Generative AI

Submission Guidelines

We invite the following types of papers:

  • Original research papers on any topic in the intersection of AI or machine learning with security, privacy, or related areas.
  • Position and open-problem papers discussing the relationship of AI or machine learning to security or privacy. Submitted papers of this type may not substantially overlap with papers that have been published previously or that are simultaneously submitted to a journal or conference/workshop proceedings.
  • Systematization-of-knowledge papers, which should distill the AI or machine learning contributions of a previously-published series of security papers.

Papers not following these guidelines will be desk-rejected. Submissions must be in English and properly anonymized. Papers should be at most 10 pages in double-column ACM format, excluding the bibliography and well-marked appendices, and at most 12 pages overall. Papers must be prepared in LaTeX and strictly follow the ACM format; this format is also required for the camera-ready version. Please follow the main CCS formatting instructions (except with the page limits described above). In particular, we recommend using the sigconf template, which can be downloaded from https://www.acm.org/publications/proceedings-template . Authors can specify the paper type in the submission form. Accepted papers will be published by the ACM Digital Library and/or ACM Press. Committee members are not required to read the appendices, so the paper should be intelligible without them.
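
For orientation, a minimal (unofficial) LaTeX skeleton for an anonymized sigconf submission might look as follows; defer to the ACM template documentation and the CCS instructions for details:

    \documentclass[sigconf,review,anonymous]{acmart}
    \begin{document}
    \title{Your AISec Submission}
    % Author details are suppressed in the compiled PDF by the `anonymous' option.
    \author{Anonymous Author(s)}
    \affiliation{\institution{Anonymous Institution}}
    \begin{abstract}
      One-paragraph abstract.
    \end{abstract}
    \maketitle
    \section{Introduction}
    Body text: at most 10 pages excluding the bibliography and well-marked
    appendices, and at most 12 pages overall.
    \end{document}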

Submission Site

Submission link: https://aisec25.hotcrp.com .

All accepted submissions will be presented at the workshop as posters. Some accepted papers will additionally be selected for spotlight presentations based on their review scores and novelty. Nonetheless, all accepted papers should be considered of equal importance and will be included in the ACM workshop proceedings.

One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings.

Important notice: Traveling to Taiwan may require a visa. Depending on your nationality, the visa application process may need to be initiated early to avoid last-minute travel disruptions. Please check the CCS visa instructions at https://www.sigsac.org/ccs/CCS2025/visa/ .

For any questions, please contact one of the workshop organizers at [email protected]

Committee

Workshop Chairs

Steering Committee

Program Committee

Top Reviewers
  • Adam Jones (Imperial College London)
  • Alessandro Erba (Karlsruhe Institute of Technology)
  • Amit Kumar Sikder (Iowa State University)
  • Andrew Cullen (University of Melbourne)
  • Arul Thileeban Sagayam (Bloomberg)
  • Balachandra Shanabhag (Cerebras Systems)
  • Bhavna Soman (Amazon Web Services)
  • Christian Wressnegger (Karlsruhe Institute of Technology)
  • Daniel Arp (TU Wien)
  • Daniel Gibert Llaurado (Artificial Intelligence Research Institute)
  • David Pape (CISPA Helmholtz Center for Information Security)
  • Erik Imgrund (BIFOLD & TU Berlin)
  • Fabio Brau (University of Cagliari)
  • Fabio De Gaspari (Sapienza University of Rome)
  • Gianluca De Stefano (CISPA)
  • Giorgio Piras (University of Cagliari)
  • Giorgio Severi (Microsoft)
  • Giovanni Apruzzese (University of Liechtenstein & Reykjavik University)
  • Giulio Rossolini (Scuola Superiore Sant'Anna)
  • Giuseppina Andresini (University of Bari Aldo Moro)
  • Gustavo Sánchez (Karlsruhe Institute of Technology)
  • Hari Venugopalan (University of California, Davis)
  • Henry Kvinge (Pacific Northwest National Laboratory)
  • Ilias Tsingenopoulos (KU Leuven)
  • John Holodnak (MIT Lincoln Laboratory)
  • Jonas Ricker (Ruhr University Bochum)
  • Jonathan Evertz (CISPA Helmholtz Center for Information Security)
  • Jose Maria de Fuentes (Universidad Carlos III de Madrid)
  • Kathrin Grosse (IBM Research Zurich)
  • Kunyang Li (University of Wisconsin-Madison)
  • Lea Schönherr (CISPA Helmholtz Center for Information Security)
  • Lorenzo Cazzaro (Università Ca' Foscari Venezia)
  • Luca Demetrio (University of Genoa)
  • Luke Richards (UMBC)
  • Maria-Irina Nicolae (Bosch Research)
  • Marius Fleischer (NVIDIA)
  • Markus Duermuth (Leibniz University Hannover)
  • Mauro Conti (University of Padua & Örebro University)
  • Maximilian Noppel (Karlsruhe Institute of Technology)
  • Nicole Nichols (Palo Alto Networks)
  • Nik Kale (Cisco Systems)
  • Pablo Moriano (Oak Ridge National Laboratory)
  • Patrick Dwyer (Apple, Inc)
  • Pavel Laskov (University of Liechtenstein)
  • Salijona Dyrmishi (University of Luxembourg)
  • Saskia Laura Schröer (University of Liechtenstein)
  • Scott Coull (Google)
  • Shae McFadden (King's College London & The Alan Turing Institute & University College London)
  • Shaoor Munir (University of California, Davis)
  • Sina Mavali (CISPA Helmholtz Center for Information Security)
  • Stefano Traverso (Ermes Cyber Security S.p.A.)
  • Teddy Furon (Inria)
  • Thorsten Eisenhofer (BIFOLD & TU Berlin)
  • Yixin Wu (CISPA Helmholtz Center for Information Security)
  • Zied Ben Houidi (Huawei Technologies Co. Ltd.)
  • Ziqi Yang (Zhejiang University)
Reviewers
  • Ace (Achin) Kulshrestha (Google Inc.)
  • Aideen Fay (Microsoft)
  • Alessandro Brighente (University of Padova)
  • Angelo Sotgiu (University of Cagliari)
  • Annalisa Appice (University of Bari Aldo Moro)
  • Anne Josiane Kouam (Inria & TU Berlin)
  • Antonio Emanuele Cinà (University of Genoa)
  • Blaine Hoak (University of Wisconsin-Madison)
  • Boyang Zhang (CISPA Helmholtz Center for Information Security)
  • Chao Zhang (Tsinghua University)
  • Chawin Sitawarin (Google DeepMind)
  • Chen Gong (University of Virginia)
  • Christian Scano (University of Cagliari)
  • Clinton Cao (Delft University of Technology)
  • Daniele Canavese (CNR-IMATI)
  • Davy Preuveneers (KU Leuven)
  • Dorjan Hitaj (Sapienza University of Rome)
  • Edward Raff (Booz Allen Hamilton)
  • George Theodorakopoulos (Cardiff University)
  • Giacomo Quadrio (University of Padova)
  • Giulio Zizzo (IBM Research)
  • Hamid Bostani (University of Luxembourg)
  • Han Wang (University of Illinois Urbana-Champaign)
  • Harshal Bhadreshkumar Shah (Visa Research)
  • James Hu (University of Arizona)
  • Julien Piet (UC Berkeley)
  • Junhao Dong (Nanyang Technological University)
  • Kartik Patwari (University of California, Davis)
  • Klim Kireev (MPI-SP)
  • Konrad Rieck (BIFOLD & TU Berlin)
  • Kun Sun (George Mason University)
  • Lorenzo Cavallaro (University College London)
  • Maria Rigaki (Czech Technical University in Prague)
  • Matous Kozak (Czech Technical University in Prague)
  • Milenko Drinic (Microsoft Corporation)
  • Min Chen (Vrije Universiteit Amsterdam)
  • Muhammad Haroon (University of California, Davis)
  • Muhammad Zaid Hameed (IBM Research Europe, Ireland)
  • Ozan Özdenizci (Graz University of Technology)
  • Peizhuo Lv (Nanyang Technological University)
  • Pratyusa Manadhata (Meta)
  • Qingying Hao (University of Illinois Urbana-Champaign)
  • Quan Le (University College Dublin)
  • Riccardo Lazzeretti (Sapienza University of Rome)
  • Ridwanul Hasan Tanvir (Pennsylvania State University)
  • Robin Hutmacher (Robert Bosch GmbH)
  • Rui Fernandes (2AI IPCA)
  • Sachin Shukla (Microsoft)
  • Shu Wang (Palo Alto Networks, Inc.)
  • Shuai Wang (Hong Kong University of Science and Technology)
  • Simon Oya (The University of British Columbia)
  • Simos Gerasimou (University of York, UK)
  • Thomas Humphries (University of Waterloo)
  • Tianhao Wang (University of Virginia)
  • Venkata Gopi Kolla (Salesforce Inc, CA and University of Louisiana, LA)
  • Vera Rimmer (DistriNet, KU Leuven)
  • Wenjun Zhu (Zhejiang University)
  • Xiaoyu Ji (Zhejiang University)
  • Xin Fan Guo (King's College London)
  • Xinran Zheng (University College London)
  • Xinyue Shen (CISPA Helmholtz Center for Information Security)
  • Xuan Chen (Purdue University)
  • Yan Pang (University of Virginia)
  • Yiling He (University College London)
  • Yu Bi (University of Rhode Island)
  • Yue Zhao (Chinese Academy of Sciences)
  • Yuyi Li (University of California, Davis)
  • Zhixiong Zhuang (Robert Bosch)

Thanks to everyone who contacted us to help with the reviews!