Dr. Pin-Yu Chen is a principal research scientist at the IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA. He is also the chief scientist of the RPI-IBM AI Research Collaboration and PI of ongoing MIT-IBM Watson AI Lab projects. Dr. Chen received his Ph.D. in electrical engineering and computer science from the University of Michigan, Ann Arbor, USA, in 2016. His recent research focuses on AI safety and robustness, and his long-term research vision is to build trustworthy machine learning systems. He received the IJCAI Computers and Thought Award in 2023 and is a co-author of the book “Adversarial Robustness for Machine Learning”. At IBM Research, he has received several research accomplishment awards, including IBM Master Inventor, the IBM Corporate Technical Award, and the IBM Pat Goldberg Memorial Best Paper Award. His research contributes to IBM open-source libraries, including the Adversarial Robustness Toolbox (ART 360) and AI Explainability 360 (AIX 360). He has published more than 50 papers on trustworthy machine learning at major AI and machine learning conferences, given tutorials at NeurIPS’22, AAAI(’22,’23,’24), IJCAI’21, CVPR(’20,’21,’23), ECCV’20, ICASSP(’20,’22,’23,’24), KDD’19, and Big Data’18, and organized several workshops on adversarial machine learning. He has been an IEEE Fellow since 2025. He serves on the editorial boards of Transactions on Machine Learning Research and IEEE Transactions on Signal Processing, is an Area Chair or Senior Program Committee member for NeurIPS, ICLR, ICML, AAAI, IJCAI, and PAKDD, and is a Distinguished Lecturer of ACM. He received the IEEE GLOBECOM 2010 GOLD Best Paper Award and the UAI 2022 Best Paper Runner-Up Award. In 2025, he received the IEEE SPS Industry Young Professional Leadership Award.
Large language models (LLMs) and generative AI (GenAI) are at the forefront of AI research and technology. With their rapidly increasing popularity and availability, challenges and concerns about their misuse and safety risks are becoming more prominent than ever. In this talk, we introduce a unified computational framework for evaluating and improving safety across a wide range of challenges in generative AI. Specifically, we will show new tools and insights to explore and mitigate the safety and robustness risks associated with state-of-the-art LLMs and GenAI models, including (i) safety risks in fine-tuning LLMs, (ii) LLM red-teaming and jailbreak mitigation, (iii) prompt engineering for safety debugging, and (iv) robust detection of AI-generated content.
Dr. Feng Liu is a machine learning researcher whose research interests lie in statistical trustworthy machine learning. Currently, he is a recipient of the ARC DECRA Fellowship, a Senior Lecturer (equivalent to a US Associate Professor) at The University of Melbourne, Australia, and a Visiting Scientist at RIKEN-AIP, Japan. He has served as an Area Chair for AISTATS, ICLR, ICML, and NeurIPS, and as a senior program committee (SPC) member for AAAI and IJCAI. He has received the Australasian AI Emerging Research Award from the Australian Computer Society, the Discovery Early Career Researcher Award from the Australian Research Council, the Outstanding Paper Award at NeurIPS 2022, the Best Paper Award at the AAAI 2025 Workshop CoLoRAI, the Best Student Paper Award at FUZZ-IEEE 2019, and the Best Paper Runner-up Award at ECIS 2023.
As AI systems become pervasive, AI researchers and practitioners face new challenges ranging from adversarial attacks to data privacy leaks. This talk argues that many such AI risks can be more effectively addressed by adopting a statistical perspective. We first demonstrate how a two-sample statistical test, Maximum Mean Discrepancy (MMD) [1], can be adapted to detect adversarial examples by measuring subtle distributional disparities [2]. Building on this detection capability, we introduce a two-pronged defense approach that not only flags adversarial inputs but also purifies them, significantly improving model robustness without sacrificing accuracy [3].
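As a concrete illustration of this distributional view, below is a minimal sketch, not the authors' method (which builds on learned deep kernels [1] and a semantics-aware MMD variant [2]): a plain RBF-kernel MMD estimate with a permutation test that flags when a batch of suspect inputs is distributed differently from clean inputs. The median-bandwidth heuristic, toy data, and function names are illustrative assumptions.

```python
# Minimal sketch (not the authors' released code): RBF-kernel MMD with a
# permutation-based p-value for detecting a distribution shift between
# "clean" and "suspect" feature batches.
import numpy as np

def rbf_kernel(a, b, bandwidth):
    # Pairwise squared Euclidean distances, then a Gaussian (RBF) kernel.
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-np.maximum(d2, 0) / (2 * bandwidth**2))

def mmd2(x, y, bandwidth):
    # Biased (V-statistic) estimate of squared MMD between samples x and y.
    kxx = rbf_kernel(x, x, bandwidth).mean()
    kyy = rbf_kernel(y, y, bandwidth).mean()
    kxy = rbf_kernel(x, y, bandwidth).mean()
    return kxx + kyy - 2 * kxy

def mmd_test(x, y, n_perm=500, seed=0):
    # Permutation test: shuffle the pooled sample to simulate the null H0: P = Q.
    rng = np.random.default_rng(seed)
    pooled = np.vstack([x, y])
    d2 = np.sum(pooled**2, 1)[:, None] + np.sum(pooled**2, 1)[None, :] - 2 * pooled @ pooled.T
    bandwidth = np.sqrt(np.median(np.maximum(d2, 0)))  # median heuristic (an assumption)
    obs = mmd2(x, y, bandwidth)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        px, py = pooled[idx[:len(x)]], pooled[idx[len(x):]]
        count += mmd2(px, py, bandwidth) >= obs
    return obs, (count + 1) / (n_perm + 1)

# Toy usage: "clean" feature vectors vs. "suspect" feature vectors with a small mean shift.
clean = np.random.default_rng(1).normal(0.0, 1.0, size=(200, 32))
suspect = np.random.default_rng(2).normal(0.3, 1.0, size=(200, 32))
stat, p = mmd_test(clean, suspect)
print(f"MMD^2 = {stat:.4f}, permutation p-value = {p:.3f}")  # small p-value flags a shift
```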
In the latter part of the talk, we shift focus to data privacy, revealing how distributional analysis can uncover the hidden use of unauthorized training data in generative AI models. Even when direct memorization is removed via model distillation, the statistical "fingerprint" of the original dataset remains detectable, a finding that suggests membership inference attacks should evolve from single-instance checks to distribution-level scrutiny [4] (a toy sketch of this comparison follows the reference list below). Through these case studies, the talk underscores the need for the security research community to rethink AI safety from a statistical perspective, showing how rigorous distributional testing can both fortify models against attacks and expose subtle privacy risks.
[1] Learning Deep Kernels for Non-parametric Two Sample Test. ICML 2020.
[2] Maximum Mean Discrepancy is Aware of Adversarial Attacks. ICML 2021.
[3] One Stone, Two Birds: Enhancing Adversarial Defense Through the Lens of Distributional Discrepancy. ICML 2025.
[4] Membership Inference Attack Should Move On to Distributional Statistics for Distilled Generative Models. ICML 2025 Workshop on Reliable and Responsible Foundation Models.
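The same machinery gives a purely hypothetical sketch of the distribution-level view advocated in [4] (not the paper's actual protocol): if a distilled generative model was trained, even indirectly, on a candidate dataset, samples drawn from it should sit measurably closer in distribution to that dataset than to an independent reference set. The snippet below reuses `mmd_test` from the previous sketch; all data and names are synthetic stand-ins.

```python
# Hypothetical illustration only (not the protocol of [4]); assumes mmd_test
# from the sketch above is defined in the same session.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for embeddings of: outputs of a distilled generative model, the
# suspected (candidate) training data, and independent reference data.
model_samples = rng.normal(0.3, 1.0, size=(300, 64))
candidate_set = rng.normal(0.3, 1.0, size=(300, 64))
reference_set = rng.normal(0.0, 1.0, size=(300, 64))

# If the model was trained (even indirectly, via distillation) on the candidate
# data, its output distribution should sit closer to candidate_set than to
# reference_set.
mmd_cand, p_cand = mmd_test(model_samples, candidate_set, n_perm=200)
mmd_ref, p_ref = mmd_test(model_samples, reference_set, n_perm=200)
print(f"MMD^2 to candidate set: {mmd_cand:.4f} (p = {p_cand:.3f})")
print(f"MMD^2 to reference set: {mmd_ref:.4f} (p = {p_ref:.3f})")
# A markedly smaller discrepancy to the candidate set is the kind of
# distribution-level "fingerprint" the talk argues membership inference should target.
```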
09:20–09:30 | Opening Remarks
09:30–10:15 | Keynote Speech 1: Computational Safety for Generative AI
Dr. Pin-Yu Chen, Principal Research Scientist, IBM Thomas J. Watson Research Center
10:15–10:30 | Introduction to ACM Transactions on AI Security and Privacy
10:30–11:00 | Morning Tea Break
11:00–11:30 | Session I: Adversarial Attacks and Robustness
LLM Safeguard is a Double-Edged Sword: Exploiting False Positives for Denial-of-Service Attacks
Authors: Qingzhao Zhang, Ziyang Xiong, and Morley Mao (University of Michigan)
Exploring the Robustness of Vision-Language-Action Models against Sensor Attacks
Authors: Xuancun Lu, Jiaxiang Chen, Shilin Xiao, Zizhi Jin (Zhejiang University), Ruochen Zhou (Hong Kong University of Science and Technology), Xiaoyu Ji, and Wenyuan Xu (Zhejiang University)
11:30–12:00 | Session II: Large Vision Model Security
When Vision Fails: Text Attacks Against ViT and OCR
Authors: Nicholas Boucher, Jenny Blessing, Ilia Shumailov (University of Cambridge), Ross Anderson (University of Cambridge and University of Edinburgh), and Nicolas Papernot (University of Toronto)
Safety Assessment of 3D Generation Models in AR/VR Applications
Authors: Xi Tang, Wanlun Ma, Yinwei Bao (Swinburne University of Technology), Minhui Xue (CSIRO's Data61), Sheng Wen (Swinburne University of Technology), and Yang Xiang (Digital Capability Research Platform, Swinburne University of Technology)
12:00–14:15 | Lunch
14:15–15:00 | Keynote Speech 2: Statistics as a Compass for AI Security
Dr. Feng Liu, Senior Lecturer, The University of Melbourne
15:00–15:30 | Afternoon Tea Break
15:30–16:00 | Session III: Secure Graph Learning and Application
SPG: Ensuring Structural Privacy in Secure Graph Learning
Authors: Yiming Qin (Monash University), Shangqi Lai (CSIRO's Data61), Joseph Liu (Monash University), Cong Wang (City University of Hong Kong), and Xingliang Yuan (The University of Melbourne)
VAlign-GLAR: Graph Retrieval-Based Vulnerability Intelligence Alignment via Structured LLM-Guided Inference
Authors: Lihua Wang, Jiaojiao Jiang, Salil S. Kanhere (University of New South Wales), Jiamou Sun, Zhenchang Xing (CSIRO's Data61), and Sanjay Jha (University of New South Wales)
16:00–16:25 | Session IV: Cybersecurity Threat Intelligence
ThreatCompass: A Tool for Identifying and Mapping Security Issues to TTPs
Authors: Stefano Simonetto, Yannick Krijnen, Ronan Oostveen, Peter Bosch, and Willem Jonker (University of Twente)
On Using LLMs for Vulnerability Classification
Authors: Rustam Talibzade, Idilio Drago, and Francesco Bergadano (University of Turin)
16:25–16:30 | Concluding Remarks
As Large AI Systems and Models (LAMs) become increasingly pivotal in a wide array of applications, their potential impact on the privacy and cybersecurity of critical infrastructure becomes a pressing concern. LAMPS is dedicated to addressing these unique challenges, fostering a dialogue on the latest advancements and ethical considerations in enhancing the privacy and cybersecurity of LAMs, particularly in the context of critical infrastructure protection.
LAMPS will bring together global experts to dissect the nuanced privacy and cybersecurity challenges posed by LAMs, especially in critical infrastructure sectors. This workshop will serve as a platform to unveil novel techniques, share best practices, and chart the course for future research, with a special emphasis on the delicate balance between advancing AI technologies and securing critical digital and physical systems.
Topics of interest include (but are not limited to):
Secure Large AI Systems and Models for Critical Infrastructure
Large AI Systems and Models' Privacy and Security Vulnerabilities
Data Anonymization and Synthetic Data
Human-Centric Large AI Systems and Models
Submitted papers must not substantially overlap with papers that have been published or simultaneously submitted to a journal or a conference with proceedings.
Submission link: https://ccs25-lamps.hotcrp.com
Only PDF files will be accepted. Submissions not meeting these guidelines risk rejection without consideration of their merits. Authors of accepted papers must guarantee that one of the authors will register and present the paper at the workshop. Proceedings of the workshop will be available on a CD to the workshop attendees and will become part of the ACM Digital Library.
The archival papers will be included in the workshop proceedings. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster based on their review scores and novelty. Nonetheless, all accepted papers should be considered of equal importance.
Authors are responsible for obtaining appropriate publication clearances. Attendance and presentation by at least one author of each accepted paper at the workshop are mandatory for the paper to be included in the proceedings.
For any questions, please contact one of the PC co-chairs, Maggie Liu (xiaoning.liu@rmit.edu.au).
First | Last | Affiliation | Country/Region
---|---|---|---
Arpit | Garg | University of Adelaide | AU |
Bang | Wu | RMIT University | AU
Carsten | Maple | University of Warwick | GB |
Coby | Wang | Visa Research | US |
Guanhong | Tao | University of Utah | US |
He | Zhang | RMIT University | AU |
Hyungjoon | Koo | Sungkyunkwan University | KR |
Jiamou | Sun | CSIRO's Data61 | AU |
Jing | Xu | CISPA Helmholtz Center for Information Security | DE |
Kristen | Moore | CSIRO's Data61 | AU |
Linyi | Li | Simon Fraser University | CA |
Mainack | Mondal | Indian Institute of Technology Kharagpur | IN |
Marius | Fleischer | NVIDIA | US |
Minghong | Fang | University of Louisville | US |
Minxin | Du | The Hong Kong Polytechnic University | HK |
Renyang | Liu | National University of Singapore | SG |
Ryan | Sheatsley | University of Wisconsin-Madison | US |
Shang-Tse | Chen | National Taiwan University | TW |
Shuang | Hao | University of Texas at Dallas | US |
SM | Yiu | The University of Hong Kong | HK
Stjepan | Picek | Radboud University | NL |
Tao | Ni | City University of Hong Kong | HK
Tian | Dong | Shanghai Jiao Tong University | CN |
Tianshuo | Cong | Tsinghua University | CN |
Veelasha | Moonsamy | Ruhr University Bochum | DE |
Wanlun | Ma | Swinburne University of Technology | AU |
Yongsen | Zheng | Nanyang Technological University | SG |
Yuanyuan | Yuan | ETH Zurich | CH |
Yufei | Chen | City University of Hong Kong | HK
Yuxin | Cao | National University of Singapore | SG |
Zhiyuan | Zhang | Max Planck Institute for Security and Privacy | DE |
Zihan | Wang | University of Queensland | AU |
Zitao | Chen | University of British Columbia | CA |
Ziyao | Liu | Nanyang Technological University | SG |