5/14 (Tue.) 16:30 - 17:00 4F 4A

LLM Adversarial Attacks – a threat that cybersecurity experts should not ignore

In this session, the speaker sheds light on exploiting Large Language Models (LLMs) through adversarial attacks. The talk covers common LLM use cases, highlights emerging threats, and delves into LLM adversarial attacks with practical examples that illustrate their impact. It also introduces the concept of LLM red teaming, emphasizing its critical role in evaluating and strengthening LLM security for AI trustworthiness. Ultimately, this session aims to deepen the audience's understanding and encourage proactive strategies to defend against these adversarial threats.

Stanley Chou
SPEAKER
CISO
OneDegree

TOPIC / TRACK
AI Security Forum

LOCATION
Taipei Nangang Exhibition Center, Hall 2
4F 4A

LEVEL
General
General sessions explore new cybersecurity knowledge and non-technical topics, ideal for those with limited or no prior cybersecurity knowledge.

SESSION TYPE
Breakout Session

LANGUAGE
Chinese

SUBTOPIC
AI
AI Security
Compliance