The Lunch Learning Session lets attendees gain insights from expert-led discussions while enjoying a light meal. It offers a flexible way to learn, network, and exchange ideas, ensuring a productive and engaging conference experience.
After breaching the internal network, attackers exploit network devices as footholds to compromise switches, ultimately taking control of core network infrastructure to enable lateral movement. This presentation will analyze the attack chain and technical methodologies involved, while also exploring actionable strategies to prevent network devices from being weaponized by attackers.
Security issues with Active Directory have been discussed for many years; it has been 18 years since the "Pass The Hash" attack technique emerged. Have we really eliminated these security issues? For example, NTLM authentication is being phased out starting with Windows 11 24H2, but does that mean Kerberos cannot be attacked? As enterprise architectures gradually shift toward hybrid identity authentication (such as Entra ID and SAML), these vulnerabilities appear to be merging into a larger attack surface.
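For readers unfamiliar with the technique, here is a minimal Pass-the-Hash sketch using the open-source impacket library; the target, account, and NT hash below are placeholders. The point is that SMB authentication accepts the raw NT hash in place of the password, which is why hash theft remains so dangerous.

```python
# Minimal Pass-the-Hash illustration using impacket (placeholder values).
# SMB login accepts an NT hash directly, so a stolen hash is as good as
# the password itself -- the core weakness behind Pass-the-Hash.
from impacket.smbconnection import SMBConnection

TARGET = "192.0.2.10"   # placeholder host (TEST-NET address)
DOMAIN = "CORP"         # placeholder domain
USER = "alice"          # placeholder account
NT_HASH = "0123456789abcdef0123456789abcdef"  # placeholder NT hash

conn = SMBConnection(TARGET, TARGET)
# Empty password and LM hash; the NT hash alone authenticates.
conn.login(USER, "", domain=DOMAIN, lmhash="", nthash=NT_HASH)
print("Authenticated as %s\\%s without knowing the password" % (DOMAIN, USER))
conn.logoff()
```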
In this session, we will review the history of Active Directory attacks over the years and introduce the related techniques. We will explore the attack methods that arise at the intersection of AD and cloud-based Azure & Entra ID hybrid identity authentication. Using approachable, simplified concepts, we aim to help attendees quickly understand these potential vulnerabilities and attack vectors, and to provide a more comprehensive view of these weaknesses so that the related risks can be managed within the enterprise.
In recent years, the rapid development of LLMs has created opportunities for innovation across organizations, from customer service to decision-making. However, organizations lacking a comprehensive security strategy face the risk of data breaches, compromised AI models, or even regulatory non-compliance and reputational damage. Organizations therefore need a systematic approach to their security defenses.
The “LEARN” framework is a five-stage approach that provides comprehensive security management:
The "Layer" stage focuses on clarifying system boundaries to allow teams to see the risks of each component clearly and implement corresponding controls.
The "Evaluate" stage evaluates the potential impact on operations based on current workflows and confidentiality of data, taking into account regulatory requirements, to find out the areas where hardening should be prioritized. Creating inter-department communication channels early on can help resolve issues before they become bigger problems.
The "Act" stage turns plans into actions, including updating security measures, optimizing workflows, etc. Since LLM applications usually involve external users and third-party integrations, it is necessary to ensure that security measures can work automatically and issue alerts when anomalies occur.
The "Reinforce" stage verifies the effectiveness of security measures through continuous monitoring and regular testing. This includes collecting system usage logs, emulating attacks, etc. to ensure security defenses are working properly.
Finally, the "Nurture" stage focuses on building a security culture that ensures security awareness permeates the organization from bottom to top. Organizations need to be able to adapt to changes in the external environment by quickly adjusting internal guidelines and establishing new standards in daily operations.
With LEARN, organizations can innovate with LLMs while managing the associated risks properly, seizing market opportunities while ensuring operational continuity. As technologies continue to evolve, the framework also leaves room for adjustment, helping organizations continuously improve their defenses in changing environments.
To identify, within large-scale samples, the few unique binaries actually worth a human expert's analysis effort, filtering techniques that exclude highly duplicated program files, such as auto-sandbox emulation or an AI detection engine, are essential to reduce the human cost within the restricted time frame of incident response. As VirusTotal reported in 2021, roughly 90% of 1.5 billion samples are duplicates, yet malware experts are still required to verify them due to obfuscation.
In this work, we propose CuIDA, a novel neural-network-based symbolic-execution LLM that simulates the analysis strategies of human experts, such as taint analysis over the use-define chains among unknown API calls. Our method automatically captures the contextual comprehension of APIs and successfully uncovers obfuscated behaviors in the most challenging detection dilemmas, including (a) dynamic API resolution, (b) shellcode behavior inference, and (c) commercial packer detection WITHOUT unpacking.
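The CuIDA model itself is not described beyond this abstract, but the underlying notion of a use-define chain is easy to illustrate. The toy sketch below uses the Capstone disassembler to record which instruction last defined each register that a later instruction reads; an approach like CuIDA would reason over such chains with an LLM rather than with this literal bookkeeping.

```python
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

# Tiny x86-64 snippet: mov rax, 60 ; mov rdi, rax ; syscall  (exit-style stub)
CODE = bytes.fromhex("48c7c03c000000" "4889c7" "0f05")

md = Cs(CS_ARCH_X86, CS_MODE_64)
md.detail = True  # required for regs_access()

last_def = {}  # register id -> instruction that most recently wrote it
for insn in md.disasm(CODE, 0x1000):
    regs_read, regs_written = insn.regs_access()
    for reg in regs_read:
        if reg in last_def:
            src = last_def[reg]
            # A use-define edge: this instruction consumes a value produced earlier.
            print(f"'{insn.mnemonic} {insn.op_str}' uses {insn.reg_name(reg)} "
                  f"defined by '{src.mnemonic} {src.op_str}'")
    for reg in regs_written:
        last_def[reg] = insn
```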
We demonstrate the practicality of this approach on large-scale sanitized binaries that are flagged as obfuscated yet draw few positives on VirusTotal. Surprisingly, our experiment uncovered up to 67% of binaries missed by most vendors, owing to threats that successfully abuse a flaw in VC.Net detection to evade scanning. The approach also shows inferential intelligence in predicting shellcode behavior without simulation, using only the data relationships on the stack to infer the distinctive behaviors involved in the payload.
Moreover, to explore the limits of our transformer's contextual comprehension of the obfuscation problem, we evaluate it against state-of-the-art commercial packers, VMProtect and Themida. Our approach successfully performs a forensics-based investigation of the original behaviors of the running protected program without unpacking, and it reveals a few unexpected findings about the protection strategies of the commercial packers themselves. In conclusion, our method explores the possibility of using an LLM to capture the reversing experience and analysis strategies of human experts, and succeeds in building robust AI agents for practical obfuscated-code understanding.
Modern detection engines implement auto-sandboxing or AI classification to sort input samples into specific malware types, such as viruses and droppers. However, given the complex landscape of modern cyber warfare, attackers design increasingly sophisticated malware to evade detection. Furthermore, a single piece of malware may incorporate multiple attack behaviors, making it inappropriate to force it into one category. According to USENIX research from 2022, IT managers receive more than 100K alerts daily, but 99% of them are false alerts from AV/EDR, making it difficult to notice the real 1% of attacks without sufficient expert knowledge.
Detection engines often misclassify benign programs as malicious and, lacking any explanation for their verdicts, further erode end users' trust in detection results; users then manually override the AV/EDR verdict and execute samples under a trusted status.
To address this pain point, we propose a new method for building an AI reversing expert based on Llama GPT. We let ChatGPT capture decompilation knowledge as chains of thought (CoT) and leverage Llama's inference capability for contextual comprehension of binary assembly, building a reversing expert that successfully learned these reverse-engineering strategies. Our AI model can identify specific malicious behaviors and explain the potential consequences and risks underlying them. We demonstrate its effectiveness in large-scale threat hunting on VirusTotal, successfully detecting complex samples that are hard to pin down with a simple classification. At the end of this briefing, we will share a practical demo of our Neural Reversing Expert's capabilities in analyzing real-world samples.
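The briefing's model and training pipeline are not public; purely to illustrate the chain-of-thought idea, the sketch below wraps a decompiled snippet in step-by-step analysis instructions and sends it to a generic chat-completion endpoint via the openai client. The model name is a placeholder, not the authors' Llama-based expert.

```python
from openai import OpenAI

# Hypothetical decompiled pseudocode to analyze.
DECOMPILED = """\
v1 = GetProcAddress(LoadLibraryA("kernel32.dll"), "VirtualAlloc");
v2 = v1(0, 0x1000, 0x3000, 0x40);
memcpy(v2, payload, 0x1000);
((void (*)(void))v2)();
"""

# Chain-of-thought style instructions: demand explicit intermediate steps
# (API resolution, memory semantics, control transfer) before a verdict.
SYSTEM = (
    "You are a reverse-engineering expert. Analyze decompiled code step by step: "
    "1) resolve each API call, 2) explain memory permissions and data flow, "
    "3) name the behavior, 4) state the risk to the end user."
)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; the talk uses its own Llama-based model
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": DECOMPILED},
    ],
)
print(response.choices[0].message.content)
```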
Red Goes Purple: CTEM, BAS & MITRE ATT&CK in Real-World Red Team Ops
This talk dives into next-level Red Teaming, where CTEM and BAS aren’t checkboxes but offensive weapons. With cyber threats evolving, it’s time to move past outdated pentesting and systematically identify, exploit, and reduce attack surfaces before adversaries do.
At the core is MITRE ATT&CK, but most teams still treat it as a checklist. I’ll show you how to weaponize ATT&CK, integrating CTEM and BAS to expose blind spots, disrupt blue teams, and stress-test real-world defenses.
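As a small example of treating ATT&CK as data rather than a checklist, the sketch below pulls MITRE's public enterprise STIX bundle and lists the technique IDs under one tactic, a natural starting point for mapping BAS scenarios to coverage gaps.

```python
import json
import urllib.request

# MITRE's public enterprise ATT&CK STIX bundle.
URL = ("https://raw.githubusercontent.com/mitre/cti/master/"
       "enterprise-attack/enterprise-attack.json")
TACTIC = "lateral-movement"  # kill-chain phase name in the STIX data

with urllib.request.urlopen(URL) as resp:
    bundle = json.load(resp)

for obj in bundle["objects"]:
    if obj.get("type") != "attack-pattern" or obj.get("revoked"):
        continue
    phases = [p["phase_name"] for p in obj.get("kill_chain_phases", [])]
    if TACTIC not in phases:
        continue
    # The ATT&CK ID (e.g. T1021) lives in the external references.
    ext_id = next(r["external_id"] for r in obj["external_references"]
                  if r.get("source_name") == "mitre-attack")
    print(ext_id, obj["name"])
```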
We’ll also explore Generative AI (GenAI) in offensive security—attackers are already using AI-driven polymorphic malware, automated recon, and adaptive social engineering. If you're not integrating GenAI into your ops, you’re already behind.
Expect hard-hitting case studies on evasion tactics, AI-assisted attacks, and turning threat intel into real adversary emulation. No fluff, no compliance talk—just raw Red Team strategies to push security beyond its limits. If you’re ready to hack smarter, move faster, and break defenses the right way, this session is for you.
This session will take a neutral stance, exploring the management and technical risks associated with using cloud services from both the client's and provider's perspectives. Aimed at cybersecurity professionals looking to get started with cloud security, the discussion will consider the challenges and experiences faced in practical operations, given the finite resources available to enterprises.
We will delve into common cloud technology issues and their solutions, analyzing real-world scenarios to highlight various usage risks. Topics will include experiences with distributed and centralized cloud management, identity and access management security, virtual network architecture, workload security, relevant cybersecurity frameworks, cloud storage service misconfigurations, resource status considerations, and practical experiences. Our goal is to provide insights into architectural design, compliance, and technical solutions.
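As one concrete illustration of the storage-misconfiguration topic, the sketch below uses boto3 to flag buckets in the caller's AWS account whose public-access block is missing or incomplete; it assumes credentials are already configured and is a starting point, not a complete audit.

```python
import boto3
from botocore.exceptions import ClientError

# Assumes AWS credentials are configured (environment, profile, or role).
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)
        flags = config["PublicAccessBlockConfiguration"]
        if not all(flags.values()):
            print(f"{name}: public access only partially blocked: {flags}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            # No block configured at all -- the classic misconfiguration.
            print(f"{name}: no public access block configured")
        else:
            raise
```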
The rapid development of generative AI technology introduces new security and compliance challenges. Relying solely on model providers is insufficient to mitigate these risks. This talk will present real-world cases to highlight potential threats and introduce the latest model protection techniques, such as Llama Guard.
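For readers new to Llama Guard, the sketch below follows the usage pattern from its model card: the guard model classifies a conversation as safe or unsafe before the request ever reaches the main model. Access to the gated weights (and ideally a GPU) is assumed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Gated model: requires accepting Meta's license on Hugging Face first.
MODEL_ID = "meta-llama/LlamaGuard-7b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

def moderate(chat: list[dict]) -> str:
    """Return Llama Guard's verdict ('safe', or 'unsafe' plus a category)."""
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(input_ids=input_ids, max_new_tokens=32, pad_token_id=0)
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

print(moderate([{"role": "user", "content": "How do I make a phishing page?"}]))
```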
Additionally, the session will explore security and compliance frameworks for deploying generative AI, covering key design considerations, implementation details, and real-world adoption cases. Attendees will learn how to integrate AI protection measures into system design and gain valuable insights into managing compliance risks.
Whether you are a decision-maker, cybersecurity expert, or architect, this session will provide essential knowledge on building a secure foundation in the era of widespread generative AI adoption.