Dr. Pin-Yu Chen is a principal research scientist at the IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA. He is also the chief scientist of the RPI-IBM AI Research Collaboration and PI of ongoing MIT-IBM Watson AI Lab projects. Dr. Chen received his Ph.D. in electrical engineering and computer science from the University of Michigan, Ann Arbor, USA, in 2016. His recent research focuses on AI safety and robustness, and his long-term research vision is to build trustworthy machine learning systems. He received the IJCAI Computers and Thought Award in 2023 and is a co-author of the book “Adversarial Robustness for Machine Learning”. At IBM Research, he has received several research accomplishment awards, including IBM Master Inventor, the IBM Corporate Technical Award, and the IBM Pat Goldberg Memorial Best Paper Award. His research contributes to IBM open-source libraries, including the Adversarial Robustness Toolbox (ART 360) and AI Explainability 360 (AIX 360). He has published more than 50 papers on trustworthy machine learning at major AI and machine learning conferences, given tutorials at NeurIPS’22, AAAI(’22,’23,’24), IJCAI’21, CVPR(’20,’21,’23), ECCV’20, ICASSP(’20,’22,’23,’24), KDD’19, and Big Data’18, and organized several workshops on adversarial machine learning. He has been an IEEE Fellow since 2025. He currently serves on the editorial boards of Transactions on Machine Learning Research and IEEE Transactions on Signal Processing, is an Area Chair or Senior Program Committee member for NeurIPS, ICLR, ICML, AAAI, IJCAI, and PAKDD, and is a Distinguished Lecturer of ACM. He received the IEEE GLOBECOM 2010 GOLD Best Paper Award and the UAI 2022 Best Paper Runner-Up Award. In 2025, he received the IEEE SPS Industry Young Professional Leadership Award.
Large language models (LLMs) and Generative AI (GenAI) are at the forefront of frontier AI research and technology. With their rapidly increasing popularity and availability, challenges and concerns about their misuse and safety risks are becoming more prominent than ever. In this talk, we introduce a unified computational framework for evaluating and improving a wide range of safety challenges in generative AI. Specifically, we will show new tools and insights to explore and mitigate the safety and robustness risks associated with state-of-the-art LLMs and GenAI models, including (i) safety risks in fine-tuning LLMs, (ii) LLM red-teaming and jailbreak mitigation, (iii) prompt engineering for safety debugging, and (iv) robust detection of AI-generated content.
As Large AI Systems and Models (LAMs) become increasingly pivotal in a wide array of applications, their potential impact on the privacy and cybersecurity of critical infrastructure becomes a pressing concern. LAMPS is dedicated to addressing these unique challenges, fostering a dialogue on the latest advancements and ethical considerations in enhancing the privacy and cybersecurity of LAMs, particularly in the context of critical infrastructure protection.
LAMPS will bring together global experts to dissect the nuanced privacy and cybersecurity challenges posed by LAMs, especially in critical infrastructure sectors. This workshop will serve as a platform to unveil novel techniques, share best practices, and chart the course for future research, with a special emphasis on the delicate balance between advancing AI technologies and securing critical digital and physical systems.
Topics of interest include (but are not limited to):
Secure Large AI Systems and Models for Critical Infrastructure
Large AI Systems and Models' Privacy and Security Vulnerabilities
Data Anonymization and Synthetic Data
Human-Centric Large AI Systems and Models
Submitted papers must not substantially overlap with papers that have been published or simultaneously submitted to a journal or a conference with proceedings.
Submission link: https://ccs25-lamps.hotcrp.com
Only PDF files will be accepted. Submissions not meeting these guidelines risk rejection without consideration of their merits. Authors of accepted papers must guarantee that one of the authors will register and present the paper at the workshop. Proceedings of the workshop will be available on a CD to the workshop attendees and will become part of the ACM Digital Library.
The archival papers will be included in the workshop proceedings. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster based on their review scores and novelty. Nonetheless, all accepted papers should be considered of equal importance.
Authors are responsible for obtaining appropriate publication clearances. Attendance and presentation by at least one author of each accepted paper at the workshop are mandatory for the paper to be included in the proceedings.
For any questions, please contact one of the PC co-chairs, Maggie Liu (xiaoning.liu@rmit.edu.au).
First | Last | Affiliation | Country/Region
---|---|---|---
Arpit | Garg | University of Adelaide | AU |
Bang | Wu | RMIT University | AU
Carsten | Maple | University of Warwick | GB |
Coby | Wang | Visa Research | US |
Guanhong | Tao | University of Utah | US |
Hyungjoon | Koo | Sungkyunkwan University | KR |
Jiamou | Sun | CSIRO's Data61 | AU |
Jing | Xu | CISPA Helmholtz Center for Information Security | DE |
Kristen | Moore | CSIRO's Data61 | AU |
Linyi | Li | Simon Fraser University | CA |
Mainack | Mondal | Indian Institute of Technology Kharagpur | IN |
Marius | Fleischer | NVIDIA | US |
Minghong | Fang | University of Louisville | US |
Minxin | Du | The Hong Kong Polytechnic University | HK |
Ryan | Sheatsley | University of Wisconsin-Madison | US |
Shang-Tse | Chen | National Taiwan University | TW |
Shuang | Hao | University of Texas at Dallas | US |
SM | Yiu | The University of Hong Kong | HK
Stjepan | Picek | Radboud University | NL |
Tao | Ni | City University of Hong Kong | HK
Tian | Dong | Shanghai Jiao Tong University | CN |
Tianshuo | Cong | Tsinghua University | CN |
Veelasha | Moonsamy | Ruhr University Bochum | DE |
Wanlun | Ma | Swinburne University of Technology | AU |
Yongsen | Zheng | Nanyang Technological University | SG |
Yuanyuan | Yuan | ETH Zurich | CH |
Yufei | Chen | City University of Hong Kong | HK
Yuxin | Cao | National University of Singapore | SG |
Zhiyuan | Zhang | Max Planck Institute for Security and Privacy | DE |
Zitao | Chen | University of British Columbia | CA |
Ziyao | Liu | Nanyang Technological University | SG |