Yi-An Lin

TXOne Networks Inc. / Threat Researcher, PSIRT & Threat Research Team

Yi-An Lin is currently working at TXOne Networks Inc. as a threat researcher, responsible for threat intelligence and threat hunting, and is also interested in AI/ML and vulnerability research. She has previously spoken at Black Hat USA, Black Hat MEA, CODE BLUE, CYBERSEC, and Cloud Summit Taiwan.

SPEECH
4/15 (Tue.) 14:45 - 15:15 4F 4C Threat Research Forum Live Translation Session
Black Hat Update Techniques: Exploiting Overtrusted System Updates to Weaken All Security Defenses

Oh no… Windows Update again? System updates have long been a headache for users, disrupting workflows and breaking control over their machines. But what if we told you that top-tier security solutions share the same pain?

Inspired by the Black Hat USA research "Windows Downgrade Attacks using Windows Updates", we conducted an in-depth analysis of how real-world security solutions handle these attack techniques, revealing a critical gap in protection: security products interpret and enforce defenses inconsistently across three key layers (registry settings, running processes, and disk files), ultimately exposing an entirely new attack surface.
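As an illustrative toy (our sketch, not the actual research tooling), the cross-layer gap can be pictured as a consistency check: a defense is only as strong as the agreement between what a product reads from the registry, what is actually running, and what sits on disk. All names below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class LayerView:
    """What a security product observes at each of the three layers."""
    registry: str  # e.g. the update/policy value read from the registry
    process: str   # identity of the running updater process
    disk: str      # identity (e.g. hash) of the update binary on disk

def find_inconsistency(view: LayerView) -> list[str]:
    """Return the layer pairs that disagree; an empty list means the
    three views agree and no cross-layer tampering is indicated."""
    mismatches = []
    if view.registry != view.disk:
        mismatches.append("registry-vs-disk")
    if view.process != view.disk:
        mismatches.append("process-vs-disk")
    if view.registry != view.process:
        mismatches.append("registry-vs-process")
    return mismatches

# A forged disk artifact slips past a product that only trusts the registry:
tampered = LayerView(registry="update-v2", process="update-v2", disk="update-v1")
print(find_inconsistency(tampered))  # → ['registry-vs-disk', 'process-vs-disk']
```

A product that checks only one of these layers never sees the mismatch; the attack surface lives precisely in the pairs that are allowed to disagree.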

In this talk, we’ll take a deep dive into Windows 11’s latest TrustedInstaller-based update architecture, exposing its structural weaknesses and the security blind spots between upgrade mechanisms and endpoint protection. We'll analyze how adversaries manipulate event logs to exploit misalignments in system-to-security communication, ultimately forging unprotected registry and disk artifacts to hijack the upgrader’s identity. The result? A fully weaponized "arbitrary update" technique that allows attackers to repurpose antivirus software as a backdoor execution tool.

4/16 (Wed.) 12:00 - 12:45 1F 1B AI Security & Safety Forum Lunch Learning Session
Attention Is All You Need for Semantics Detection: A Novel Transformer on Neural-Symbolic Approach

To identify the few unique binaries worth a human expert's analysis among large-scale samples, filtering techniques such as auto-sandbox emulation or AI detection engines are essential for excluding highly duplicated program files and reducing human cost within the restricted time window of incident response. As VirusTotal reported in 2021, roughly 90% of 1.5 billion samples are duplicates, yet many still require malware experts to verify them due to obfuscation.

In this work, we propose CuIDA, a novel neural-network-based symbolic execution LLM that simulates the analysis strategies of human experts, such as taint analysis over the use-define chains among unknown API calls. Our method automatically captures the contextual semantics of API usage and successfully uncovers obfuscated behaviors in the most challenging detection dilemmas, including (a) dynamic API resolution, (b) shellcode behavior inference, and (c) commercial packer detection without unpacking.
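A minimal sketch of the use-define-chain taint idea (our illustration, not CuIDA's actual implementation): the return value of an unknown or dynamically resolved API taints every value derived from it, so later instructions that consume tainted operands can be linked back to the resolver. The pseudo-assembly trace below is hypothetical.

```python
# Toy taint propagation over a use-define chain.
# Each instruction is a (dest, sources) tuple; taint flows from any
# tainted source to the destination it defines.

def propagate_taint(instructions, taint_sources):
    """Return the set of values transitively derived from taint_sources."""
    tainted = set(taint_sources)
    for dest, srcs in instructions:
        if any(s in tainted for s in srcs):
            tainted.add(dest)
    return tainted

# Example: GetProcAddress-style dynamic API resolution.
trace = [
    ("eax", ["GetProcAddress"]),  # eax = resolved API address (taint source)
    ("esi", ["eax"]),             # esi inherits the taint via the def-use edge
    ("call_target", ["esi"]),     # the indirect call consumes a tainted value
    ("ebx", ["const"]),           # unrelated definition, stays clean
]
tainted = propagate_taint(trace, {"GetProcAddress"})
print("call_target" in tainted, "ebx" in tainted)  # → True False
```

In this framing, the transformer's job is to recover such def-use relationships contextually from raw code, where obfuscation hides the explicit chain.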

We demonstrate the practicality of this approach on large-scale sanitized binaries that are flagged as obfuscated yet receive few positives on VirusTotal. Surprisingly, our experiment uncovered up to 67% of binaries missed by most vendors, owing to threats that successfully abuse a flaw in VC.Net detection to evade scanning. The approach also shows inference intelligence in predicting shellcode behavior without emulation, using only the data relationships on the stack to infer the distinctive behaviors involved in the payload.

Moreover, to explore the limits of our transformer’s contextual comprehension of the obfuscation problem, we evaluated it against state-of-the-art commercial packers, VMProtect and Themida. Our approach successfully performs a forensics-style investigation of the original behaviors of a running protected program without unpacking. Furthermore, it reveals a few unexpected findings about the protection strategies of the commercial packers themselves. In conclusion, our method explores the possibility of using LLMs to distill the reversing experience and analysis strategies of human experts, and succeeds in building robust AI agents for practical obfuscated-code understanding.

4/16 (Wed.) 13:00 - 13:30 1F 1B AI Security & Safety Forum Lunch Learning Session
Why do We Need Signature, if I can bring you a Neural-Experts by LLM

Modern detection engines implement auto-sandboxing or AI classification to sort input samples into specific malware types, such as viruses or droppers. However, given the complex landscape of modern cyber warfare, attackers tend to design more sophisticated malware to evade detection. Furthermore, malware may incorporate multiple attack behaviors, making it inappropriate to force samples into a single category. According to USENIX research from 2022, IT managers receive more than 100K alerts daily, yet 99% of them are false alarms from AV/EDR, making it difficult to notice the real 1% of attacks without sufficient expert knowledge.

Because of this lack of explanation, detection engines often misclassify benign programs as malicious, further eroding end users' trust in detection results and leading them to manually override AV/EDR verdicts and run samples under a trusted status.

To address this pain point, we propose a new method for building an AI reversing expert based on Llama and GPT. We let ChatGPT capture decompilation knowledge as chains of thought (CoT) and leverage Llama's inference capability for contextual comprehension of binary assembly, building a reversing expert that successfully learned these reverse-engineering strategies. Our AI model can identify specific malicious behaviors and explain the potential consequences and underlying risks. We demonstrate its effectiveness in large-scale threat hunting on VirusTotal, successfully detecting complex samples that are hard to capture with simple classification. At the end of this briefing, we will share a practical demo of our Neural Reversing Expert's capabilities in analyzing real-world samples.