In this session, the speaker sheds light on exploiting Large Language Models (LLMs) through adversarial attacks. The talk covers common LLM use cases, highlights emerging threats, and examines LLM adversarial attacks with practical examples that illustrate their impact. It introduces the concept of LLM red teaming, emphasizing its critical role in evaluating and enhancing LLM security for AI trustworthiness. Ultimately, this session seeks to deepen the audience's understanding and encourage proactive strategies to safeguard against these adversarial threats.
TOPIC / TRACK
AI Security Forum
LOCATION
Taipei Nangang Exhibition Center, Hall 2
4F 4A
LEVEL
General
General sessions explore new cybersecurity knowledge and non-technical topics, ideal for those with limited or no prior cybersecurity knowledge.
SESSION TYPE
Breakout Session
LANGUAGE
Chinese
SUBTOPIC
AI
AI Security
Compliance