Large Language Models (LLMs) are increasingly being applied across diverse scenarios and platforms, reflecting their rising importance in today's technological landscape. Despite their growing prevalence, however, LLMs themselves remain relatively vulnerable at their core. Beyond the well-known attacks such as prompt injection and jailbreak, a variety of new offensive and defensive techniques targeting LLMs have emerged over the past year. Attackers continually devise innovative methods to circumvent model defenses, and even the original prompt injection and jailbreak attacks have evolved in new and unexpected ways.
These developments underscore the need for heightened vigilance when utilizing LLMs. The purpose of this talk is to convey up-to-date knowledge on LLM attacks and defenses, helping attendees gain a deeper understanding of how to protect these systems by implementing suitable security strategies. We will also briefly explore approaches for testing AI models, systems, and products. This is not merely a technical issue; it involves ensuring the security and reliability of LLMs in an ever-changing digital environment. By the end of this session, participants will have a clearer grasp of these challenges and be better prepared to handle various potential security concerns in their future work.
TOPIC / TRACK
AI Security & Safety Forum
Live Translation Session
LOCATION
Taipei Nangang Exhibition Center, Hall 2
1F 1B
LEVEL
Intermediate Intermediate sessions focus on
cybersecurity
architecture, tools, and practical applications, ideal for
professionals with a basic understanding of
cybersecurity.
SESSION TYPE
Breakout Session
LANGUAGE
Chinese
Real-Time Chinese & English Translation
SUBTOPIC
AI Safety
AI Security
LLM
CYBERSEC 2025 uses cookies to provide you with the best user experience possible. By continuing to use this site, you agree to the terms in our Privacy Policy 。