Join us at this forum to explore how machine learning can strengthen cybersecurity defenses. Hear from data scientists about real-world applications of AI/ML in threat intelligence analysis, and discover the diverse ways AI and ML are reshaping cybersecurity.
The volume of threat intelligence is enormous and cannot be handled by manual effort alone. Much of it is also unstructured and therefore unsuitable for machine-based analysis as-is. Automating the application of threat intelligence with natural language processing has consequently become a widely discussed topic, and the recent emergence of ChatGPT has prompted a fresh look at what natural language models can do. In this session, we will review the four steps of the threat intelligence processing workflow, namely 1) threat intelligence collection and distribution, 2) threat intelligence analysis, 3) threat intelligence normalization, and 4) integration and report writing, and describe how natural language models can help information security analysts reduce processing time at each step.
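As a rough illustration of the normalization step, the sketch below pulls common indicators of compromise (IP addresses, file hashes, CVE IDs) out of an unstructured report with simple regular expressions. The patterns, field names, and sample text are assumptions for illustration only, not the workflow presented in this session, and a production pipeline would need far broader coverage (defanged indicators, URLs, domains, and so on).

```python
import re

# Illustrative patterns for a few common IOC types; these are assumptions,
# not an exhaustive or production-grade set.
IOC_PATTERNS = {
    "ipv4":   re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "md5":    re.compile(r"\b[a-fA-F0-9]{32}\b"),
    "cve":    re.compile(r"\bCVE-\d{4}-\d{4,7}\b"),
}

def extract_iocs(report_text: str) -> dict:
    """Return a mapping of IOC type -> sorted unique matches found in the text."""
    return {name: sorted(set(pattern.findall(report_text)))
            for name, pattern in IOC_PATTERNS.items()}

if __name__ == "__main__":
    # Hypothetical snippet of an unstructured threat report.
    sample = ("The actor used 203.0.113.45 to stage the dropper (SHA-256 "
              "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855) "
              "and exploited CVE-2023-23397 for initial access.")
    print(extract_iocs(sample))
```

Structured output of this kind is what makes the later analysis and reporting steps amenable to machine processing, which is where language models can then take over summarization and correlation work.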
This presentation will delve into the need for semiconductors, notably flash memory and microcontrollers, designed from their inception to address the ever-growing need to secure the products, systems, and data surrounding AI. The presentation will pose, then answer, these core questions: What chip challenges do designers face in the age of AI? Which security issues, now and in the future, will determine how semiconductors and AI work in unison? What anecdotes illustrate when and how the security of AI-infused designs was ensured through such chips? How do industry standards bodies, such as ISO, contribute to validating the security characteristics of devices used with AI?
When using a sandbox, we expect dynamic analysis to yield as much information as possible, including behavior, file modifications, and interactions with external machines. However, this information is vast and low-level, while what analysts actually want is higher-level information, such as which family a sample belongs to and which ATT&CK techniques it uses. In existing sandbox implementations, analysts rely on predefined rules, such as combinations of specific APIs or strings, applied to the extracted information. These rules are effective but time-consuming to produce, and they tend to be narrowly specific. In this talk, I will share how we use the API calls and dynamic string results generated by the sandbox, together with the malware family and ATT&CK tags produced by the predefined rules, as training data to uncover hidden relationships, distinct from the predefined rules, among samples marked as the same type. We feed these results back to the sandbox as new rules, achieving the goal of automatic rule generation.
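A minimal sketch of the idea, under assumed inputs: suppose each sample's sandbox output has been flattened into one "document" of API names and extracted strings, and its family label comes from the existing predefined rules. A simple supervised model trained on those labels can then surface the highest-weighted tokens per family as candidate ingredients for new rules. The data, features, and model below are illustrative assumptions, not the presenter's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each document is a sample's API-call names and
# extracted strings; labels are family tags produced by the predefined rules.
docs = [
    "CreateRemoteThread WriteProcessMemory VirtualAllocEx explorer.exe",
    "InternetOpenUrlA URLDownloadToFileA cmd.exe /c start",
    "CryptEncrypt vssadmin delete shadows ReadMe.txt",
    "RegSetValueExA SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run",
]
families = ["injector", "downloader", "ransomware", "persistence"]

# Permissive token pattern so API names and registry paths survive tokenization.
vectorizer = TfidfVectorizer(token_pattern=r"[^\s]+", lowercase=False)
X = vectorizer.fit_transform(docs)

clf = LogisticRegression(max_iter=1000)
clf.fit(X, families)

# Surface the highest-weighted tokens per family as candidate rule material,
# which an analyst could review before feeding back to the sandbox as new rules.
feature_names = vectorizer.get_feature_names_out()
for family, weights in zip(clf.classes_, clf.coef_):
    top = weights.argsort()[::-1][:3]
    print(family, "->", [feature_names[i] for i in top])
```

In practice the interesting output is not the classifier itself but the feature relationships it exposes across samples that the handwritten rules treated as unrelated; those become the starting point for automatically generated rules.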