​Chia-Mu Yu

National Yang Ming Chiao Tung University / Associate Professor, Institute of Electrical and Computer Engineering

Chia-Mu Yu is currently an Associate Professor in the Department of Electrical Engineering at National Yang Ming Chiao Tung University (NYCU). He also holds joint associate professorships in the Department of Information Management and Finance, College of Artificial Intelligence, and the Industry Academia Innovation School at NYCU. His expertise lies in AI safety, data privacy, and network security. He has received numerous accolades, including the K. T. Li Pan-Shih Award, the National Science Council Young Scholar Cultivation Program (Columbus Program), the NYCU Young Chair Professorship, the K. T. Li Young Research Award, the Pan Wen-Yuan Research Grant, and the National Science Council Outstanding Young Scholar Program. He is currently a Senior Member of the IEEE and serves as an Senior Area Editor for IEEE Transactions on Information Forensics and Security, and an Associate Editor for IEEE Internet of Things Journal and IEEE Consumer Electronics Magazine.  

SPEECH
4/17 (Thu.) 16:15 - 17:00 1F 1B AI Security & Safety Forum Live Translation Session
Attacks and Defenses in Large Language Models: A Diverse Landscape

Large Language Models (LLMs) are increasingly being applied across diverse scenarios and platforms, reflecting their rising importance in today's technological landscape. Despite their growing prevalence, however, LLMs themselves remain relatively vulnerable at their core. Beyond the well-known attacks such as prompt injection and jailbreak, a variety of new offensive and defensive techniques targeting LLMs have emerged over the past year. Attackers continually devise innovative methods to circumvent model defenses, and even the original prompt injection and jailbreak attacks have evolved in new and unexpected ways.

These developments underscore the need for heightened vigilance when utilizing LLMs. The purpose of this talk is to convey up-to-date knowledge on LLM attacks and defenses, helping attendees gain a deeper understanding of how to protect these systems by implementing suitable security strategies. We will also briefly explore approaches for testing AI models, systems, and products. This is not merely a technical issue; it involves ensuring the security and reliability of LLMs in an ever-changing digital environment. By the end of this session, participants will have a clearer grasp of these challenges and be better prepared to handle various potential security concerns in their future work.