2nd ACM Workshop on
Large AI Systems and Models with Privacy and Security Analysis
October 13, 2025 — Taipei, Taiwan
co-located with the 32nd ACM Conference on Computer and Communications Security

Keynotes

Title: Computational Safety for Generative AI

Dr. Pin-Yu Chen, Principal Research Scientist, IBM Thomas J. Watson Research Center

Dr. Pin-Yu Chen is a principal research scientist at IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA. He is also the chief scientist of the RPI-IBM AI Research Collaboration and PI of ongoing MIT-IBM Watson AI Lab projects. Dr. Chen received his Ph.D. in electrical engineering and computer science from the University of Michigan, Ann Arbor, USA, in 2016. His recent research focuses on AI safety and robustness, and his long-term research vision is to build trustworthy machine learning systems. He received the IJCAI Computers and Thought Award in 2023 and is a co-author of the book “Adversarial Robustness for Machine Learning”. At IBM Research, he received several research accomplishment awards, including IBM Master Inventor, the IBM Corporate Technical Award, and the IBM Pat Goldberg Memorial Best Paper Award. His research contributes to IBM open-source libraries including the Adversarial Robustness Toolbox (ART 360) and AI Explainability 360 (AIX 360).

He has published more than 50 papers related to trustworthy machine learning at major AI and machine learning conferences, given tutorials at NeurIPS’22, AAAI(’22,’23,’24), IJCAI’21, CVPR(’20,’21,’23), ECCV’20, ICASSP(’20,’22,’23,’24), KDD’19, and Big Data’18, and organized several workshops on adversarial machine learning. He has been an IEEE Fellow since 2025. He is currently on the editorial boards of Transactions on Machine Learning Research and IEEE Transactions on Signal Processing, serves as an Area Chair or Senior Program Committee member for NeurIPS, ICLR, ICML, AAAI, IJCAI, and PAKDD, and is a Distinguished Lecturer of ACM. He received the IEEE GLOBECOM 2010 GOLD Best Paper Award and the UAI 2022 Best Paper Runner-Up Award. In 2025, he received the IEEE SPS Industry Young Professional Leadership Award.

Large language models (LLMs) and Generative AI (GenAI) are at the forefront of frontier AI research and technology. With their rapidly increasing popularity and availability, challenges and concerns about their misuse and safety risks are becoming more prominent than ever. In this talk, we introduce a unified computational framework for evaluating and improving a wide range of safety challenges in generative AI. Specifically, we will show new tools and insights to explore and mitigate the safety and robustness risks associated with state-of-the-art LLMs and GenAI models, including (i) safety risks in fine-tuning LLMs, (ii) LLM red-teaming and jailbreak mitigation, (iii) prompt engineering for safety debugging, and (iv) robust detection of AI-generated content.

Call for Papers

Important Dates

  • Paper submission deadline: July 11, 2025, 11:59 PM (all deadlines are AoE, UTC-12)
  • Acceptance notification: August 15, 2025
  • Camera ready due: August 22, 2025
  • Workshop day: October 13, 2025

Overview

As Large AI Systems and Models (LAMs) become increasingly pivotal in a wide array of applications, their potential impact on the privacy and cybersecurity of critical infrastructure becomes a pressing concern. LAMPS is dedicated to addressing these unique challenges, fostering a dialogue on the latest advancements and ethical considerations in enhancing the privacy and cybersecurity of LAMs, particularly in the context of critical infrastructure protection.

LAMPS will bring together global experts to dissect the nuanced privacy and cybersecurity challenges posed by LAMs, especially in critical infrastructure sectors. This workshop will serve as a platform to unveil novel techniques, share best practices, and chart the course for future research, with a special emphasis on the delicate balance between advancing AI technologies and securing critical digital and physical systems.

Topics of Interest

Topics of interest include (but are not limited to):

Secure Large AI Systems and Models for Critical Infrastructure

  • AI-Enhanced Threat Intelligence and Detection
  • Automated Security Orchestration and Incident Response
  • Large AI Models in Vulnerability Assessment and Penetration Testing
  • AI-Driven Network Security Management
  • AI-Enabled Security Awareness and Education
  • Collaborative AI for Global Cyber Threat Intelligence Sharing
  • Regulatory Compliance and AI in Cybersecurity

Privacy and Security Vulnerabilities of Large AI Systems and Models

  • Advanced Threat Landscape
  • Holistic Security and Privacy Frameworks
  • Innovations in Privacy Preservation
  • Secure Computation in AI

Data Anonymization and Synthetic Data

  • Advancements in Data Protection
  • Cross-Border Data Flow and Cooperation
  • Intellectual Property Protection
  • Combatting Deepfakes

Human-Centric Large AI Systems and Models

  • User Vulnerability and Defense Mechanisms
  • Equity and Inclusivity in AI
  • Participative Large AI Governance
  • Enhancing Explainability and Trust
  • Designing for Security and Usability
  • Ethics and Decision-Making in AI
  • Frameworks for Responsible AI Governance

Submission Guidelines

Submitted papers must not substantially overlap with papers that have been published or simultaneously submitted to a journal or a conference with proceedings.

  • Short Papers: These papers should present concise and focused contributions, such as preliminary research findings, novel ideas with early evidence, or case studies relevant to the aforementioned topics of interest. Submissions must be up to 4 pages of body text in the ACM double-column format. Short papers must offer a clear and well-motivated contribution, even if the work is at an early stage, and should be of interest to the research community.
  • Research Papers: These papers should present new work, evidence, or ideas related to the aforementioned topics of interest. Submissions must be up to 8 pages of body text in the ACM double-column format, excluding well-marked references and appendices, and at most 10 pages in total. Research papers must be well-argued and worthy of publication and citation, on one of the topics listed above.
  • Systematization of Knowledge (SoK) Papers: These papers should either consolidate and clarify ideas in a major research area within secure and trustworthy machine learning or provide compelling evidence to support or challenge long-held beliefs in such areas. Submissions must be up to 8 pages of body text in the ACM double-column format, excluding well-marked references and appendices, and at most 10 pages in total. SoK papers must include "SoK:" at the beginning of their title.
  • Position Papers: These papers should cover broader issues and visions related to the aforementioned topics of interest, including open challenges, technical perspectives, educational aspects, societal impact, or notable research results. Submissions must be very well-argued and consist of at most 4 pages of body text in the ACM double-column format, excluding well-marked references and appendices, and at most 5 pages in total. Position papers must include "Position:" at the beginning of their title.

Submission Site

Submission link: https://ccs25-lamps.hotcrp.com

Only PDF files will be accepted. Submissions not meeting these guidelines risk rejection without consideration of their merits. Authors of accepted papers must guarantee that one of the authors will register and present the paper at the workshop. Proceedings of the workshop will be available on a CD to the workshop attendees and will become part of the ACM Digital Library.

The archival papers will be included in the workshop proceedings. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster based on their review scores and novelty. Nonetheless, all accepted papers should be considered of equal importance.

Authors are responsible for obtaining appropriate publication clearances. Attendance and presentation by at least one author of each accepted paper at the workshop are mandatory for the paper to be included in the proceedings.

For any questions, please contact one of the PC co-chairs, Maggie Liu (xiaoning.liu@rmit.edu.au).

Committee

PC Chairs

Web/Publication Chair

Organizing Committee

Program Committee

  • Arpit Garg, University of Adelaide (AU)
  • Bang Wu, RMIT (AU)
  • Carsten Maple, University of Warwick (GB)
  • Coby Wang, Visa Research (US)
  • Guanhong Tao, University of Utah (US)
  • Hyungjoon Koo, Sungkyunkwan University (KR)
  • Jiamou Sun, CSIRO's Data61 (AU)
  • Jing Xu, CISPA Helmholtz Center for Information Security (DE)
  • Kristen Moore, CSIRO's Data61 (AU)
  • Linyi Li, Simon Fraser University (CA)
  • Mainack Mondal, Indian Institute of Technology Kharagpur (IN)
  • Marius Fleischer, NVIDIA (US)
  • Minghong Fang, University of Louisville (US)
  • Minxin Du, The Hong Kong Polytechnic University (HK)
  • Ryan Sheatsley, University of Wisconsin-Madison (US)
  • Shang-Tse Chen, National Taiwan University (TW)
  • Shuang Hao, University of Texas at Dallas (US)
  • SM Yiu, The University of Hong Kong (HK)
  • Stjepan Picek, Radboud University (NL)
  • Tao Ni, City University of Hong Kong (CN)
  • Tian Dong, Shanghai Jiao Tong University (CN)
  • Tianshuo Cong, Tsinghua University (CN)
  • Veelasha Moonsamy, Ruhr University Bochum (DE)
  • Wanlun Ma, Swinburne University of Technology (AU)
  • Yongsen Zheng, Nanyang Technological University (SG)
  • Yuanyuan Yuan, ETH Zurich (CH)
  • Yufei Chen, City University of Hong Kong (CN)
  • Yuxin Cao, National University of Singapore (SG)
  • Zhiyuan Zhang, Max Planck Institute for Security and Privacy (DE)
  • Zitao Chen, University of British Columbia (CA)
  • Ziyao Liu, Nanyang Technological University (SG)