AI Security & Safety Forum
AI Security & Safety Forum

AI Security & Safety Forum explores the opportunities and challenges of AI in cybersecurity, covering model security, threat defense, automated analysis, and regulatory standards, shaping the future of AI security.

TIME & LOCATION
  • 4/15 (Tue.) 14:00 - 17:00 | 701B Meeting Room
  • 4/16 (Wed.) 12:00 - 12:30 | 4B Meeting Room
  • 4/16 (Wed.) 12:00 - 13:30 | 1B Meeting Room
  • 4/16 (Wed.) 14:00 - 16:45 | 701B Meeting Room
  • 4/17 (Thu.) 09:30 - 10:45 | 703 Meeting Room
  • 4/17 (Thu.) 09:30 - 17:00 | 1B Meeting Room
  • 4/17 (Thu.) 12:40 - 13:10 | 4B Meeting Room
AGENDA
4 / 15
14:00 - 14:30
7F 701B
Ken Tsai / GM Zyxel Networks

Generative AI is rapidly transforming industries and daily life, but it also introduces new risks. Small and medium-sized businesses (SMBs), with limited resources, face growing challenges in managing tasks like threat intelligence, alert analysis, and customer support. Their smaller systems are often vulnerable entry points for cyberattacks. Zyxel addresses these challenges with AI-driven innovations, including machine learning-based threat intelligence, the Nebula network management platform, and SecuReporter. These tools help SMBs enhance operational efficiency, strengthen security, and stay prepared for future.

  • AI Security
4 / 15
14:45 - 15:15
7F 701B
Ray Pai / Sales Engineer - APAC Forcepoint

AI-driven data classification and labeling technology enables enterprises to automatically identify, tag, and categorize internal data, ensuring that sensitive information receives the appropriate level of protection. By leveraging machine learning and behavioral analysis, AI enhances the distinction between confidential, internal, and public data, dynamically adjusting classification labels based on business needs.

When integrated with Risk-Adaptive Protection (RAP), the system conducts real-time risk assessments based on user behavior, automatically adjusting access permissions and security policies as needed. If anomalous activities are detected, protective measures are reinforced instantly.

This intelligent security mechanism not only enhances data management efficiency but also ensures that enterprises maintain a robust data security posture in an ever-evolving digital landscape.

  • Data Security
  • AI Security
  • Zero Trust Architecture
4 / 15
15:30 - 16:00
7F 701B
Coco Wang / Director, Security Engineering, APAC Channels SentinelOne

In today’s digital landscape, business resilience and cybersecurity are deeply interconnected. Organizations must safeguard their operations from an increasingly complex range of cyber threats while remaining agile and adaptable. Business resilience refers to an organization’s ability to recover quickly and continue operations after disruptions, while cybersecurity protects critical assets, data, and systems from malicious attacks.

Beyond having a solid cybersecurity strategy, organizations must implement a robust cybersecurity architecture and enforce strong protocols to effectively withstand potential attacks.

Equipping cybersecurity teams with the right tools is also essential to ensure they can respond swiftly and efficiently in the event of an incident.

By leveraging cutting-edge AI-powered technology, organizations can arm their cybersecurity teams with advanced tools to strengthen resilience. By combining human insights aided with machine efficient, businesses remain secure and adaptable in today’s fast-evolving, threat-laden environment.

  • AI Security
4 / 15
15:30 - 16:00
7F 701H
吳啟文 / 前副院長 國家資通安全研究院

資安是持續的風險管理,首先需掌握資安威脅趨勢,特別是AI技術發展所帶來的資安威脅,並接軌最新的資安推動策略,包括國際及我國等資安策略,在管理面邁向安治理架構,在技術面推動零信任架構,同時建立AI檢測機制,選擇可信賴的(Trustworthy)、負責任的(Accountable)AI產品與系統,以及透過AI技術來強化事前、事中及事後的資安防護。

  • AI Security
  • Governance
  • Zero Trust Architecture
4 / 15
16:15 - 17:00
7F 703
Chen, Shu-Yuan / Data Scientist, Data Science CyCraft Technology

Large Language Models (LLMs) have shown great potential in cybersecurity applications. However, to fully harness their value, inherent biases and stability issues in LLM-driven security assessments must be effectively addressed. This talk will focus on these challenges and present our latest research on improving evaluation frameworks.

Our study analyzes how LLMs can be influenced by the order of presented options during the assessment process, leading to biases. We propose ranking strategies and probabilistic weighting techniques that significantly improve scoring accuracy and consistency. Key topics covered in this talk include experimental design and observations on LLM biases, probability-based weighting adjustments, and methodologies for integrating results from multiple ranking permutations. Notably, through validation with the G-EVAL dataset, we demonstrate measurable improvements in model evaluation performance.

Whether you are conducting research on language models or working in cybersecurity technology and decision-making, this talk will provide valuable technical insights and practical takeaways.

  • AI
  • AI Security
  • LLM
4 / 15
16:15 - 16:45
7F 701B
Yi-Hsien Chen / Cyber Security Researcher CyCraft Technology

Cyber Threat Intelligence (CTI) plays a pivotal role in modern cybersecurity defense, providing critical insights into vulnerabilities, attacker profiles, attack tools, and Indicators of Compromise (IoCs). However, the traditional practice of analysts relying on unstructured text for report writing, while beneficial for interpersonal communication, results in inefficient and time-consuming intelligence management.

Despite STIX format and MITRE ATT&CK® matrix providing foundational infrastructure for standardized intelligence management, their high technical barriers have hindered widespread adoption. Our solution leverages Large Language Models to develop automated tools—CTI2STIX and CTI2MITREATT&CK—enabling seamless conversion from natural language intelligence to structured formats.

Furthermore, our system integrates multi-source intelligence reports, breaking down information silos and enhancing the comprehensiveness, efficiency, and accuracy of threat analysis, thereby providing organizations with more robust cybersecurity protection capabilities.

  • LLM
  • Machine Learning
  • Threat Intelligence
4 / 16
12:00 - 12:30
4F 4B
Shawn Tsai / Architect, SPN Trend Micro

In recent years, the rapid development of LLMs has brought opportunities for innovation in various areas of an organization from customer services to decision-making. However, organizations lacking comprehensive security strategies may face the risks of data breaches, compromised AI models, or even the consequences of non-compliance and damaged reputation. Therefore, organizations need to take a systematic approach to their security defenses. 

The “LEARN” framework is a 5-stage approach that provides comprehensive security management:  

The "Layer" stage focuses on clarifying system boundaries to allow teams to see the risks of each component clearly and implement corresponding controls. 

The "Evaluate" stage evaluates the potential impact on operations based on current workflows and confidentiality of data, taking into account regulatory requirements, to find out the areas where hardening should be prioritized. Creating inter-department communication channels early on can help resolve issues before they become bigger problems. 

The "Act" stage turns plans into actions, including updating security measures, optimizing workflows, etc. Since LLM applications usually involve external users and third-party integrations, it is necessary to ensure that security measures can work automatically and issue alerts when anomalies occur. 

The "Reinforce" stage verifies the effectiveness of security measures through continuous monitoring and regular testing. This includes collecting system usage logs, emulating attacks, etc. to ensure security defenses are working properly. 

Finally, the "Nurture" stage focuses on building a security culture that ensures security awareness permeates the organization from bottom to top. Organizations need to be able to adapt to changes in the external environment by quickly adjusting internal guidelines and establishing new standards in daily operations.  

With LEARN, organizations can innovate with LLMs while managing their risks properly, taking advantage of market opportunities while ensuring operational continuity. As technologies continue to evolve, this framework will also provide room for adjustment that helps organizations continuously improve their defenses in changing environments. 

  • LLM
  • AI Security
4 / 16
12:00 - 12:45
1F 1B
Yi-An Lin / Threat Researcher, PSIRT & Threat Research Team TXOne Network Inc.
Shenghao Ma / Team Lead, PSIRT and Threat Research Team TXOne Networks Inc.

To identify a few unique binaries even worth the effort for human experts to analyze from large-scale samples, filter techniques for excluding those highly duplicated program files are essential to reduce the human cost within a restricted period of incident response, such as auto-sandbox emulation or AI detection engine. As VirusTotal reported in 2021 ~90% of 1.5 billion samples are duplicated but still require malware experts to verify due to obfuscation. 

In this work, we proposed a novel neural-network-based symbolic execution LLM, CuIDA, to simulate the analysis strategies of human experts, such as taint analysis of the Use-define chain among unknown API calls. Our method can automatically capture the contextual comprehension of API and successfully uncover those obfuscated behaviors in the most challenging detection dilemma including (a.) dynamic API solver, (b.) shellcode behavior inference, and (c.) commercial packers detection WITHOUT unpacking.

We demonstrate the practicality of this approach on large-scale sanitized binaries which are flagged as obfuscated but few positives on VirusTotal. We surprisingly uncovered up to 67% of binaries that were missed by most vendors in our experiment, by the factor of those threats successfully abuse the flaw of VC.Net detection to evade the scan. Also, this approach shows the inference intelligence on behavior prediction for shellcode without simulation, instead, only by using the data-relationships on the stack to infer the relative unique behaviors involved in the payload.

Moreover, to explore the limitation of our transformer’s contextual comprehension on the obfuscation problem, we evaluate the transformer with state-of-the-art commercial packers, VMProtect and Themida. Our approach successfully forensics-based investigates the original behaviors of the running protected program without unpacking. Furthermore, this approach reveals a few unexpected findings of the protection strategies of the commercial packers themselves. In conclusion, our method explores the possibility of using LLM to sample the reversing experience, analysis strategies of human experts, and success in building robust AI agents on practical obfuscated code understanding.

  • AI
  • Endpoint Detection & Response
  • Threat Hunting
4 / 16
13:00 - 13:30
1F 1B
Yi-An Lin / Threat Researcher, PSIRT & Threat Research Team TXOne Network Inc.
Jair Chen / Senior Threat Researcher, PSIRT and Threat Research TXOne Networks Inc.

Modern detection engines implement auto-sandbox or AI classification to classify input samples into specific malware types, such as virus, dropper etc. However, due to the complex landscape of modern warfare, attackers tend to design more sophisticated malware to evade detection. Furthermore, malware may incorporate multiple attack behaviors, making it inappropriate to classify them into specific categories. According to USENIX research in 2022, IT managers will receive more than 100K daily alerts, but 99% of them are false alerts by AV/EDR which makes it difficult to be aware of the real 1% attack happened without enough expert knowledge.

Due to the lack of explanation, detection engines often misclassify benign programs as malicious, further making end users untrust in detection results, leading them to manually override the detection result of AV/EDR and executed under a trusted status.

According to this pain point, we propose a new method of building an AI reversing expert based on Llama GPT. We let ChatGPT capture the decompilation knowledge as chain-of-thoughts (CoT) and leveraged Llama's inference intelligence for contextual comprehension of binary assembly, to build a reversing expert that successfully learned those reverse engineering strategies. Our AI model can identify specific malicious behaviors and explain the potential consequences and risks underlying. We demonstrate its effectiveness in large-scale threat hunting on VirusTotal, successfully detecting complex samples that are hard to defy as simple classification. At the end of this briefing, we will share a practical demo of our Neural Reversing Expert's capabilities in analyzing real-world samples.

  • Malware Protection
  • Reverse Engineering
  • AI
4 / 16
14:00 - 14:30
7F 701B
Nick Cheng / Customer Engineer Google Cloud

This presentation focuses on the security scenarios of generative AI, analyzing its unique security challenges and protections. We will delve into the application scenarios of generative AI in various fields, from content generation and code development to data analysis, analyzing potential security risks such as prompt injection and jailbreaking.

In addition, we will share practical cases, demonstrating best practices for secure generative AI applications, and explore the importance of trustworthy AI, ensuring the fairness, transparency, and reliability of AI systems.

  • AI Security
  • AI Security
  • AI
4 / 16
14:45 - 15:15
7F 701B
Jones Leung / Head of Solution Engineering Asia Zscaler
  • Zero Trust Architecture
4 / 16
15:30 - 16:00
7F 701B
萬幼筠 (Thomas Wan) / 法律系兼任助理教授 國立政治大學 總經理 安永管理顧問公司

人工智慧系統的發展自 2022 年後開始爆發性成長,而快速滲透進各種產業的應用場景。但是人工智慧在經濟發展與生產力提升之餘,產官學界也開始深度關切人工智慧的進展,逐漸朝向超級智慧 (SuperIntelligence) 迫近之時,吾人是否有足夠的方法來克服人工智慧系統帶來的諸項風險。

目前國際間對人工智慧治理除了着重在規範治理之外,亦有人工智慧對齊與驗測的方法來緩解人工智慧系統的各類安全與可信賴風險。國際趨勢如何發展與我國如何對應,將深切影響我國人工智慧產業的未來。

  • AI Safety
  • AI Security
  • Responsible AI
4 / 17
09:30 - 10:00
1F 1B
Jay Liao / Senior Technical Manager, AI Lab Trend Micro

As generative AI becomes increasingly popular, a myriad of applications are springing up rapidly. However, what severe consequences could arise if such powerful AI is exploited by hackers? The corresponding attack technique, Prompt Injection, has topped the OWASP AI security issues ranking for two consecutive years.

This presentation will delve deeply into the attack methods of Prompt Injection, from the users of generative AI to internal systems, analyzing which stages may be vulnerable to attacks, and how to safely use generative AI.

  • AI
  • AI Safety
4 / 17
09:30 - 10:00
7F 703
Kuan-Lun Liao / Data Scientist, Data Science CyCraft Technology

Three major challenges currently hinder threat intelligence: the diversity of intelligence sources leads to inconsistent formats, open-source intelligence often lacks completeness, and establishing relationships between intelligence entities remains difficult. In response, this session presents an innovative solution that integrates Large Language Models (LLMs) with Knowledge Graph technology to construct a comprehensive threat intelligence analysis framework. This approach features three key advantages: (1) leveraging LLMs to automatically construct knowledge graphs, enabling the standardization of heterogeneous intelligence data; (2) utilizing knowledge graph-enhanced Retrieval-Augmented Generation (RAG) to uncover hidden intelligence patterns and provide explainable relationships; and (3) automating the enrichment of missing intelligence, improving data completeness.

Beyond extracting entities from threat intelligence, this method also identifies latent relationships between entities, constructing a holistic view of the threat landscape through the knowledge graph. More importantly, the entire system is built on open-source models and frameworks, ensuring accessibility and flexibility. This talk will explore how to apply this innovative approach to intelligence collection and analysis in real-world scenarios.

  • Threat Intelligence
  • LLM
  • Knowledge Graph
4 / 17
10:15 - 10:45
7F 703
Yuki Hung / Cyber Security Researcher CyCraft Technology

In today's digital environment, organizations often fail to detect in real-time when their data is leaked and sold online. Our goal is to shorten the time gap between the exposure of data on the internet and its detection by the public, thereby minimizing the duration in which sensitive corporate data remains exposed. The dark web serves as a primary marketplace for trading personal information and can be accessed securely through browsers like Tor browser. This paper focuses on web crawling of dark web sites. Utilizing data collected from these sites, we trained a BERT classification model to categorize transaction posts into five different types of data breaches. This enables rapid identification of the type of leak each post pertains to. Finally, we employ a Retrieval-Augmented Generation (RAG) approach to gain insights from the dark web.

  • Data Leak
  • Incident Response
  • AI
4 / 17
10:15 - 10:45
1F 1B
Patrick Kuo / Senior Threat Researcher, Threat Reasearch TXOne Networks Inc.
Daniel Chiu / Threat Research Manager TXOne Networks Inc.

In this session, we’ll explore how Artificial Intelligence (AI) can enhance cybersecurity by extracting attack vector linked to vulnerabilities, offering a more proactive and efficient approach. Traditional methods of detecting vulnerabilities rely on security researchers manually reverse-engineering attack traffic and emulating potential attack behaviors. While effective, this process is time-consuming and exposes systems to risk during testing, increasing the likelihood of compromise in production environments.

AI addresses this challenge by automating the detection of attack vector and behaviors tied to specific vulnerabilities. This capability enables security teams to identify suspicious activities without constant manual intervention or exposing live systems. By integrating AI into vulnerability prevention, organizations can reduce the risk of attacks in production environments. AI-driven systems can autonomously flag suspicious behaviors or protocols indicative of an active threat.

This AI-powered approach enhances vulnerability prevention, offering stronger and more automated protection, reducing the potential for system compromise and providing a higher level of security.

  • AI
  • Vulnerability Assessment
  • Threat Intelligence
4 / 17
11:00 - 11:30
1F 1B
Ian / Second Class Officer Taiwan Cooperative Bank

There has been extensive discussion in Taiwan regarding the application of Artificial Intelligence (AI) in security defense. However, the security challenges faced by AI models have received comparatively less attention. This presentation will use the OWASP ML Top 10 to explore common security risks in machine learning, incorporating practical demonstrations of Deep Neural Network (DNN) attacks to thoroughly explain the principles behind each attack.

The presentation will cover the following topics: input data attacks (such as adversarial sample generation), data manipulation attacks (data poisoning), model inversion attacks, model stealing, and AI supply chain attacks. Through these cases, the audience will gain a clear understanding of how each security risk operates, enabling them to design effective defense and detection mechanisms.

  • AI
  • AI Safety
  • AI Security
4 / 17
11:45 - 12:15
1F 1B
Canaan Kao / Threat Research Director, Threat Research TXOne Networks Inc.

Using artificial intelligence to generate IPS rules has excellent potential to enhance network security, especially in detecting complex and evolving threats. However, it is not a panacea. AI models can generate too broad or specific rules, leading to false positives (over-alarming) or false negatives (missing threats). Many AI-generated rules may degrade the performance of IDS, especially in high-throughput networks. Based on the evaluation, a hybrid approach combining the strengths of AI and human expertise may be the most suitable approach for generating AI-driven IPS rules.

  • Intrusion Detection
  • AI
  • Network Security
4 / 17
12:40 - 13:10
4F 4B
Chu, Hua-Rong / Deputy Senior Researcher, Cloud Computing Laboratory Chunghwa Telecom Laboratories

The rapid development of generative AI technology introduces new security and compliance challenges. Relying solely on model providers is insufficient to mitigate these risks. This talk will present real-world cases to highlight potential threats and introduce the latest model protection techniques, such as Llama Guard.

Additionally, the session will explore security and compliance frameworks for deploying generative AI, covering key design considerations, implementation details, and real-world adoption cases. Attendees will learn how to integrate AI protection measures into system design and gain valuable insights into managing compliance risks.

Whether you are a decision-maker, cybersecurity expert, or architect, this session will provide essential knowledge on building a secure foundation in the era of widespread generative AI adoption.

  • AI Safety
  • AI Security
  • Compliance
4 / 17
12:40 - 13:10
1F 1B
叢培侃 (PK) / 資安長暨共同創辦人 奧義智慧科技
  • AI
  • Threat Exposure Management
  • Data Leak Prevention
4 / 17
14:00 - 14:30
1F 1B
Jr-Wei Huang / Senior Threat Researcher, PSIRT and Threat Research TXOne Networks Inc.

In this AI revolution, various Transformer-based models have successfully brought AI intelligence into everyday life and commercial applications through GPT-powered chatbots. This surge has led top-tier cybersecurity solutions to demonstrate that automated forensics and network management assistant chatbots can effectively support security investigations and response needs in practice, such as Defender Copilot. However, LLMs still struggle with their inherent hallucination issue, and their abilities can't fully address unexpected attacks from real-world threats.

Therefore, can we develop an AI detection engine that operates without human interaction, enabling 24/7 full-scope monitoring without the need for network administrators or forensic analysts? The vision is to deploy a pre-trained, on-premises AI agent capable of autonomously performing reverse engineering, reasoning, identification, and automated response in real time—without human intervention. This concept represents a new approach to next-generation endpoint detection and protection. Can we absorb the expertise of reverse engineers into a specialized AI model by leveraging large-scale samples?"

In this session, we will take the audience on a journey through academic research in pursuit of autonomous reverse engineering. We will explore how to transition from classic Attention-based Neural Machine Translation (NMT) models to AI agents with symbolic understanding and reasoning capabilities, ultimately training them as practical endpoint detection and reverse reasoning engines.

  • Machine Learning
  • Reverse Engineering
  • Security Operation
4 / 17
14:00 - 14:30
7F 701G
鄭欣明 / 副署長 數位發展部資通安全署 資訊工程系 教授 國立臺灣科技大學

本演講將探討O-RAN專網的安全威脅,在O-RAN架構中,Near-RT RIC(Near-Real-Time RAN Intelligent Controller)平台可用於部署AI模型,以應對新型態的惡意流量攻擊。我們會著重於AI防禦模型的應用實例,說明如何抵禦O-RAN環境中的開放介面攻擊,以及駭客部署的惡意元件。這些威脅嚴重影響專網的正常運作,導致網路資源遭濫用或被控制。此外,我們也將探討AI模型如何偵測從UE端發起的惡意攻擊,透過聯邦式學習來達成跨專網、跨電信的聯防機制,以確保專網的穩定與安全。

  • 5G Security
  • Network Security
  • AI
4 / 17
14:45 - 15:15
1F 1B
Stanley Chou / Director of Security Engineering Coupang

With the rapid iteration of Large Language Models (LLM) reasoning models and AI Agents, LLMs have been becoming critical technology components driving efficiency and innovation across industries. However, the complexity of the use cases and AI risks pose significant challenges for organizations adopting LLM technologies.

This sharing will explore the challenges of LLM risk evaluation and introduce the LLM-as-a-Judge framework—an innovative approach that leverages LLMs to evaluate, identify, and further mitigate risks of LLM systems. The speaker will provide an in-depth analysis of LLM-as-a-Judge’s architecture and key success factors, offering insights into how organizations can enhance AI system's security and trustworthiness through advanced LLM evaluation methodologies. This session aims to establish a solid foundation for organizations in AI risk management, ensuring safe, reliable and trustworthy AI system deployments.

  • LLM
  • AI Safety
  • Responsible AI
4 / 17
15:30 - 16:00
1F 1B
Shenghao Ma / Team Lead, PSIRT and Threat Research Team TXOne Networks Inc.

Malware Rules - cornerstone of modern security solutions, also as researcher's nightmare. Although it has the characteristics of low false positives and high accuracy, but requires analysts to spend time WEARYGNG GLASSES to find unique strings in binary as pattern to write for detection. Such as it consumes expert time and has become a major pain point for the current security industry. Therefore, whether artificial intelligence can be introduced to solve the problem of writing patterns on large-scale malware has become a consensus issue that the industry is looking forward to, and has also become a hot academic topic of cybersecurity.

In this session, we will start with two innovative studies conducted by AAAAI based on NVIDIA's top-level seminar on how to slice malware binary into semantic sub-patterns from the perspective of Ngram, and extract those high-entropy and developer-specific strings as rules to be effectively detected by a convolutional vision strategy. with a detection rate of 98% in a double-blind test of 800,000 samples, as excellent semantic detection performance. At the end of the session, we summarised the advantages, disadvantages and limitations of this method in products to help the audience to have a strong interest and understanding of this kind of detection technology. 

  • AI
  • Machine Learning
  • Endpoint Detection & Response
4 / 17
16:15 - 17:00
1F 1B
​Chia-Mu Yu / Associate Professor, Institute of Electrical and Computer Engineering National Yang Ming Chiao Tung University

Large Language Models (LLMs) are increasingly being applied across diverse scenarios and platforms, reflecting their rising importance in today's technological landscape. Despite their growing prevalence, however, LLMs themselves remain relatively vulnerable at their core. Beyond the well-known attacks such as prompt injection and jailbreak, a variety of new offensive and defensive techniques targeting LLMs have emerged over the past year. Attackers continually devise innovative methods to circumvent model defenses, and even the original prompt injection and jailbreak attacks have evolved in new and unexpected ways.

These developments underscore the need for heightened vigilance when utilizing LLMs. The purpose of this talk is to convey up-to-date knowledge on LLM attacks and defenses, helping attendees gain a deeper understanding of how to protect these systems by implementing suitable security strategies. We will also briefly explore approaches for testing AI models, systems, and products. This is not merely a technical issue; it involves ensuring the security and reliability of LLMs in an ever-changing digital environment. By the end of this session, participants will have a clearer grasp of these challenges and be better prepared to handle various potential security concerns in their future work.

  • AI Safety
  • AI Security
  • LLM
SPEAKERS

More speakers and agenda details will be announced soon.