2nd ACM Workshop on
Large AI Systems and Models with Privacy and Security Analysis
October 13, 2025 — Taipei, Taiwan
co-located with the 32nd ACM Conference on Computer and Communications Security

Keynotes

Title: Computational Safety for Generative AI

Dr. Pin-Yu Chen, Principal Research Scientist, IBM Thomas J. Watson Research Center

Dr. Pin-Yu Chen is a principal research scientist at the IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA. He is also the chief scientist of the RPI-IBM AI Research Collaboration and a PI of ongoing MIT-IBM Watson AI Lab projects. He received his Ph.D. in electrical engineering and computer science from the University of Michigan, Ann Arbor, USA, in 2016. His recent research focuses on AI safety and robustness, and his long-term research vision is to build trustworthy machine learning systems. He received the IJCAI Computers and Thought Award in 2023 and is a co-author of the book “Adversarial Robustness for Machine Learning”. At IBM Research, he has received several research accomplishment awards, including IBM Master Inventor, the IBM Corporate Technical Award, and the IBM Pat Goldberg Memorial Best Paper Award. His research contributes to IBM open-source libraries including the Adversarial Robustness Toolbox (ART 360) and AI Explainability 360 (AIX 360). He has published more than 50 papers on trustworthy machine learning at major AI and machine learning conferences, given tutorials at NeurIPS’22, AAAI(’22,’23,’24), IJCAI’21, CVPR(’20,’21,’23), ECCV’20, ICASSP(’20,’22,’23,’24), KDD’19, and Big Data’18, and organized several workshops on adversarial machine learning. He has been an IEEE Fellow since 2025. He is currently on the editorial boards of Transactions on Machine Learning Research and IEEE Transactions on Signal Processing, serves as an Area Chair or Senior Program Committee member for NeurIPS, ICLR, ICML, AAAI, IJCAI, and PAKDD, and is a Distinguished Lecturer of ACM. He received the IEEE GLOBECOM 2010 GOLD Best Paper Award and the UAI 2022 Best Paper Runner-Up Award. In 2025, he received the IEEE SPS Industry Young Professional Leadership Award.

Large language models (LLMs) and Generative AI (GenAI) are at the forefront of frontier AI research and technology. With their rapidly increasing popularity and availability, challenges and concerns about their misuse and safety risks are becoming more prominent than ever. In this talk, we introduce a unified computational framework for evaluating and improving a wide range of safety challenges in generative AI. Specifically, we will show new tools and insights to explore and mitigate the safety and robustness risks associated with state-of-the-art LLMs and GenAI models, including (i) safety risks in fine-tuning LLMs, (ii) LLM red-teaming and jailbreak mitigation, (iii) prompt engineering for safety debugging, and (iv) robust detection of AI-generated content.

Title: Statistics as a Compass for AI Security

Dr. Feng Liu, Senior Lecturer, University of Melbourne, Australia

Dr. Feng Liu is a machine learning researcher whose research interests lie in statistical trustworthy machine learning. He is currently the recipient of an ARC DECRA Fellowship, a Senior Lecturer (equivalent to a US Associate Professor) at The University of Melbourne, Australia, and a Visiting Scientist at RIKEN-AIP, Japan. He has served as an Area Chair for AISTATS, ICLR, ICML, and NeurIPS, and as a senior program committee (SPC) member for AAAI and IJCAI. He has received the Australasian AI Emerging Research Award from the Australian Computer Society, the Discovery Early Career Researcher Award from the Australian Research Council, the Outstanding Paper Award at NeurIPS 2022, the Best Paper Award at the AAAI 2025 Workshop CoLoRAI, the Best Student Paper Award at FUZZ-IEEE 2019, and the Best Paper Runner-up Award at ECIS 2023.

As AI systems become pervasive, AI researchers and practitioners face new challenges ranging from adversarial attacks to data privacy leaks. This talk argues that many such AI risks can be more effectively addressed by adopting a statistical perspective. We first demonstrate how a two-sample statistical test, Maximum Mean Discrepancy (MMD) [1], can be adapted to detect adversarial examples by measuring subtle distributional disparities [2]. Building on this detection capability, we introduce a two-pronged defense approach that not only flags adversarial inputs but also purifies them, significantly improving model robustness without sacrificing accuracy [3].
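
For concreteness, the following is a minimal, illustrative sketch of the kind of MMD two-sample test referenced above; it is not the implementation from [1] or [2]. It uses NumPy, a fixed Gaussian kernel, and a permutation test, whereas [1] learns a deep kernel; the function names and the bandwidth parameter are assumptions made for illustration.

    import numpy as np

    def gaussian_kernel(A, B, bandwidth):
        # k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2)), computed pairwise.
        d2 = (np.sum(A**2, axis=1)[:, None]
              + np.sum(B**2, axis=1)[None, :]
              - 2 * A @ B.T)
        return np.exp(-d2 / (2 * bandwidth**2))

    def mmd2_unbiased(X, Y, bandwidth):
        # Unbiased estimate of squared MMD between samples X and Y
        # (diagonal kernel entries are excluded from the within-sample terms).
        m, n = len(X), len(Y)
        Kxx = gaussian_kernel(X, X, bandwidth)
        Kyy = gaussian_kernel(Y, Y, bandwidth)
        Kxy = gaussian_kernel(X, Y, bandwidth)
        return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
                + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
                - 2 * Kxy.mean())

    def mmd_two_sample_test(X, Y, bandwidth, n_perm=200, alpha=0.05, seed=0):
        # Permutation test: reject "X and Y come from the same distribution"
        # if the observed MMD^2 exceeds the (1 - alpha) quantile of MMD^2
        # computed under random relabelling of the pooled samples.
        rng = np.random.default_rng(seed)
        Z = np.vstack([X, Y])
        observed = mmd2_unbiased(X, Y, bandwidth)
        null = [mmd2_unbiased(Z[p[:len(X)]], Z[p[len(X):]], bandwidth)
                for p in (rng.permutation(len(Z)) for _ in range(n_perm))]
        return observed > np.quantile(null, 1 - alpha)

In the adversarial-detection setting of [2], X would be a reference batch of clean inputs (or their feature representations) and Y the batch under test; the test flags Y when the distributional discrepancy is statistically significant.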

In the latter part of the talk, we shift focus to data privacy, revealing how distributional analysis can uncover the hidden use of unauthorized training data in generative AI models. Even when direct memorization is removed via model distillation, the statistical "fingerprint" of the original dataset remains detectable, a finding that suggests membership inference attacks should evolve from single-instance checks to distribution-level scrutiny [4]. Through these case studies, the talk underscores the need for the security research community to rethink AI safety from a statistical perspective, showing how rigorous distributional testing can both fortify models against attacks and expose subtle privacy risks.
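
As a purely illustrative sketch of the distribution-level idea (and not the method of [4]), the same MMD statistic can be reused to compare a model's generations against a candidate training set and an independent reference set. The helper mmd2_unbiased from the sketch above, the bandwidth, and all names below are assumptions for illustration.

    def distribution_level_membership_score(model_samples, candidate_data,
                                            reference_data, bandwidth):
        # Reuses mmd2_unbiased from the sketch above. If the (possibly
        # distilled) generative model was trained on candidate_data, its
        # samples should be distributionally closer to candidate_data than
        # to an independent reference_data set drawn from similar sources.
        mmd_candidate = mmd2_unbiased(model_samples, candidate_data, bandwidth)
        mmd_reference = mmd2_unbiased(model_samples, reference_data, bandwidth)
        # Larger positive scores give evidence of training-set membership at
        # the distribution level, rather than for any single instance.
        return mmd_reference - mmd_candidate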

[1] Learning Deep Kernels for Non-parametric Two Sample Test. ICML 2020.

[2] Maximum Mean Discrepancy is Aware of Adversarial Attacks. ICML 2021.

[3] One Stone, Two Birds: Enhancing Adversarial Defense Through the Lens of Distributional Discrepancy. ICML 2025.

[4] Membership Inference Attack Should Move On to Distributional Statistics for Distilled Generative Models. ICML 2025 Workshop on Reliable and Responsible Foundation Models.

Programme

All times below are in GMT+8 (local time in Taipei).

09:20–09:30 Opening Remarks
09:30–10:15 Keynote Speech 1
Computational Safety for Generative AI
Dr. Pin-Yu Chen, Principal Research Scientist, IBM Thomas J. Watson Research Center
10:15–10:30 Introduction to ACM Transactions on AI Security and Privacy
10:30–11:00 Morning Tea Break
11:00–11:30 Session I: Adversarial Attacks and Robustness
LLM Safeguard is a Double-Edged Sword: Exploiting False Positives for Denial-of-Service Attacks
Authors: Qingzhao Zhang, Ziyang Xiong, and Morley Mao (University of Michigan)
Exploring the Robustness of Vision-Language-Action Models against Sensor Attacks
Authors: Xuancun Lu, Jiaxiang Chen, Shilin Xiao, Zizhi Jin (Zhejiang University), Ruochen Zhou (Hong Kong University of Science and Technology), Xiaoyu Ji, and Wenyuan Xu (Zhejiang University)
11:30–12:00 Session II: Large Vision Model Security
When Vision Fails: Text Attacks Against ViT and OCR
Authors: Nicholas Boucher, Jenny Blessing, Ilia Shumailov (University of Cambridge), Ross Anderson (University of Cambridge and University of Edinburgh), and Nicolas Papernot (University of Toronto)
Safety Assessment of 3D Generation Models in AR/VR Applications
Authors: Xi Tang, Wanlun Ma, Yinwei Bao (Swinburne University of Technology), Minhui Xue (CSIRO's Data61), Sheng Wen (Swinburne University of Technology), and Yang Xiang (Digital Capability Research Platform, Swinburne University of Technology)
12:00–14:15 Lunch
14:15–15:00 Keynote Speech 2
Statistics as a Compass for AI Security
Dr. Feng Liu, Senior Lecturer, University of Melbourne
15:00–15:30 Afternoon Tea Break
15:30–16:00 Session III: Secure Graph Learning and Application
SPG: Ensuring Structural Privacy in Secure Graph Learning
Authors: Yiming Qin (Monash University), Shangqi Lai (CSIRO's Data61), Joseph Liu (Monash University), Cong Wang (City University of Hong Kong), and Xingliang Yuan (The University of Melbourne)
VAlign-GLAR: Graph Retrieval-Based Vulnerability Intelligence Alignment via Structured LLM-Guided Inference
Authors: Lihua Wang, Jiaojiao Jiang, Salil S. Kanhere (University of New South Wales), Jiamou Sun, Zhenchang Xing (CSIRO's Data61), and Sanjay Jha (University of New South Wales)
16:00–16:25 Session IV: Cybersecurity Threat Intelligence
ThreatCompass: A Tool for Identifying and Mapping Security Issues to TTPs
Authors: Stefano Simonetto, Yannick Krijnen, Ronan Oostveen, Peter Bosch, and Willem Jonker (University of Twente)
On Using LLMs for Vulnerability Classification
Authors: Rustam Talibzade, Idilio Drago, and Francesco Bergadano (University of Turin)
16:25–16:30 Concluding Remarks

Call for Papers

Important Dates

  • Paper submission deadline: July 11, 2025, 11:59 PM (all deadlines are AoE, UTC-12)
  • Acceptance notification: August 15, 2025
  • Camera ready due: August 22, 2025
  • Workshop day: October 13, 2025

Overview

As Large AI Systems and Models (LAMs) become increasingly pivotal in a wide array of applications, their potential impact on the privacy and cybersecurity of critical infrastructure becomes a pressing concern. LAMPS is dedicated to addressing these unique challenges, fostering a dialogue on the latest advancements and ethical considerations in enhancing the privacy and cybersecurity of LAMs, particularly in the context of critical infrastructure protection.

LAMPS will bring together global experts to dissect the nuanced privacy and cybersecurity challenges posed by LAMs, especially in critical infrastructure sectors. This workshop will serve as a platform to unveil novel techniques, share best practices, and chart the course for future research, with a special emphasis on the delicate balance between advancing AI technologies and securing critical digital and physical systems.

Topics of Interest

Topics of interest include (but are not limited to):

Secure Large AI Systems and Models for Critical Infrastructure

  • AI-Enhanced Threat Intelligence and Detection
  • Automated Security Orchestration and Incident Response
  • Large AI Models in Vulnerability Assessment and Penetration Testing
  • AI-Driven Network Security Management
  • AI-Enabled Security Awareness and Education
  • Collaborative AI for Global Cyber Threat Intelligence Sharing
  • Regulatory Compliance and AI in Cybersecurity

Large AI Systems and Models' Privacy and Security Vulnerabilities

  • Advanced Threat Landscape
  • Holistic Security and Privacy Frameworks
  • Innovations in Privacy Preservation
  • Secure Computation in AI

Data Anonymization and Synthetic Data

  • Advancements in Data Protection
  • Cross-Border Data Flow and Cooperation
  • Intellectual Property Protection
  • Combatting Deepfakes

Human-Centric Large AI Systems and Models

  • User Vulnerability and Defense Mechanisms
  • Equity and Inclusivity in AI
  • Participative Large AI Governance
  • Enhancing Explainability and Trust
  • Designing for Security and Usability
  • Ethics and Decision-Making in AI
  • Frameworks for Responsible AI Governance

Submission Guidelines

Submitted papers must not substantially overlap with papers that have been published or simultaneously submitted to a journal or a conference with proceedings.

  • Short Papers: These papers should present concise and focused contributions, such as preliminary research findings, novel ideas with early evidence, or case studies relevant to the aforementioned topics of interest. Submissions must be up to 4 pages of body text in the ACM double-column format. Short papers must offer a clear and well-motivated contribution, even if the work is at an early stage, and should be of interest to the research community.
  • Research Papers: These papers should present new work, evidence, or ideas related to the aforementioned topics of interest. Submissions must be up to 8 pages of body text in the ACM double-column format, excluding well-marked references and appendices, and at most 10 pages in total. Research papers must be well-argued and worthy of publication and citation, on one of the topics listed above.
  • Systematization of Knowledge (SoK) Papers: These papers should either consolidate and clarify ideas in a major research area within secure and trustworthy machine learning or provide compelling evidence to support or challenge long-held beliefs in such areas. Submissions must be up to 8 pages of body text in the ACM double-column format, excluding well-marked references and appendices, and at most 10 pages in total. SoK papers must include "SoK:" at the beginning of their title.
  • Position Papers: These papers should cover broader issues and visions related to the aforementioned topics of interest, including open challenges, technical perspectives, educational aspects, societal impact, or notable research results. Submissions must be very well-argued and consist of at most 4 pages of body text in the ACM double-column format, excluding well-marked references and appendices, and at most 5 pages in total. Position papers must include "Position:" at the beginning of their title.

Submission Site

Submission link: https://ccs25-lamps.hotcrp.com

Only PDF files will be accepted. Submissions not meeting these guidelines risk rejection without consideration of their merits. Authors of accepted papers must guarantee that one of the authors will register and present the paper at the workshop. Proceedings of the workshop will be available on a CD to the workshop attendees and will become part of the ACM Digital Library.

The archival papers will be included in the workshop proceedings. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster based on their review scores and novelty. Nonetheless, all accepted papers should be considered of equal importance.

Authors are responsible for obtaining appropriate publication clearances. Attendance and presentation by at least one author of each accepted paper at the workshop are mandatory for the paper to be included in the proceedings.

For any questions, please contact one of the PC co-chairs, Maggie Liu (xiaoning.liu@rmit.edu.au).

Committee

PC Chairs

Web/Publication Chair

Organizing Committee

Program Committee

Name, Affiliation, Country/Region
Arpit Garg, University of Adelaide, AU
Bang Wu, RMIT, AU
Carsten Maple, University of Warwick, GB
Coby Wang, Visa Research, US
Guanhong Tao, University of Utah, US
He Zhang, RMIT University, AU
Hyungjoon Koo, Sungkyunkwan University, KR
Jiamou Sun, CSIRO's Data61, AU
Jing Xu, CISPA Helmholtz Center for Information Security, DE
Kristen Moore, CSIRO's Data61, AU
Linyi Li, Simon Fraser University, CA
Mainack Mondal, Indian Institute of Technology Kharagpur, IN
Marius Fleischer, NVIDIA, US
Minghong Fang, University of Louisville, US
Minxin Du, The Hong Kong Polytechnic University, HK
Renyang Liu, National University of Singapore, SG
Ryan Sheatsley, University of Wisconsin-Madison, US
Shang-Tse Chen, National Taiwan University, TW
Shuang Hao, University of Texas at Dallas, US
SM Yiu, The University of Hong Kong, HK
Stjepan Picek, Radboud University, NL
Tao Ni, City University of Hong Kong, CN
Tian Dong, Shanghai Jiao Tong University, CN
Tianshuo Cong, Tsinghua University, CN
Veelasha Moonsamy, Ruhr University Bochum, DE
Wanlun Ma, Swinburne University of Technology, AU
Yongsen Zheng, Nanyang Technological University, SG
Yuanyuan Yuan, ETH Zurich, CH
Yufei Chen, City University of Hong Kong, CN
Yuxin Cao, National University of Singapore, SG
Zhiyuan Zhang, Max Planck Institute for Security and Privacy, DE
Zihan Wang, University of Queensland, AU
Zitao Chen, University of British Columbia, CA
Ziyao Liu, Nanyang Technological University, SG