AI Security Forum will focus on safeguarding the security of artificial intelligence systems. Discussions will cover machine learning vulnerabilities, counter-adversarial attacks, and practical AI security strategies to ensure the stable operation of the digital intelligent world.
We will evaluate the capabilities of different models in the field of cybersecurity in Taiwan from various aspects (e.g., harmlessness and local). We will analyze the performance of different models such as TAIDE, Taiwan LLM, and the LLM - CyCraftGPT developed by CyCraft, understanding their applicability in addressing various issues, and helping the audience to choose suitable models more quickly in the future.
In this session, the speaker aims to shed light on exploiting Large Language Models (LLMs) through adversarial attacks. The speaker will cover the common LLM use cases, highlight emerging threats, and delve into LLM adversarial attacks with practical examples to illustrate their impact. The presentation will introduce the concept of LLM red teaming, emphasizing its critical role in evaluating and enhancing LLM security for AI trustworthiness. Ultimately, this speech seeks to elevate the audience's understanding and encourage proactive strategies to safeguard against these adversarial threats.
Since the advent of ChatGPT , generative AI has attracted everyone's attention. The sudden emergence of generative AI technologies has caught data protection regulators by surprise. As more countries investigate artificial intelligence companies like OpenAI, a clash between technological advances and privacy laws seems inevitable. For example, by March 2023, Italian data protection agency Garante stated that OpenAI's massive collection and storage of personal data to train chatbots lacked any legal basis, and accused OpenAI of failing to implement an age verification mechanism that required users to be over 13 years old. And after corrective measures operated by OpenAI, Italy reopened the ChatGPT service on April 28.
From OWASP top 10 LLM application , OpenAI bug bounty and the popular prompt injection, there are security issues worth concerning in this area. But we also find a flaw in the privacy definition between the 3 popular AI module with the same question about personal information. As lots of professionals mentioned, privacy issue is also a critical issue in this area.
This situation underscores the need for AI developers to reevaluate their data collection and use methods while complying with privacy laws. The privacy issues and considerations include:
In the last part , this paper also reviews the privacy issues from technology and different data protection laws under different jurisdiction.
Generative AI has swiftly infiltrated various industries, beginning to be applied in diverse facets of our daily lives. However, this new AI technology may feel unfamiliar to cybersecurity professionals. Yet, due to a shortage of manpower, there's an urgent need for various AI automation technologies to address tasks ranging from daily intelligence gathering, alert analysis, forensic reporting, and responding to various cybersecurity inquiries from clients.
At CyCraft, we have a robust AI research team. By leveraging our fine-tuned LLM technology, coupled with our new Corrective RAG AI technology, we integrate AI into cybersecurity processes in three key areas: Cyber News Intelligence Robot, Red Team Attack Simulation Robot, and Blue Team Incident Response Robot. We'll share practical experiences and insights through real-world case studies.
For many of the world's largest, most complex organizations, Splunk is at the heart of security operations. We help CISOs and their teams quickly evade and respond to emerging threats when incidents inevitably occur, and successfully act as business enablers. But we also want to know what exactly global security leaders think about AI
In the CISO report, we'll share the initial findings and provide insights on how leaders can evolve with the cybersecurity landscape
Summary :
1. Introduce security risks related to Generative AI (e.g. Privacy, Data Security, Cloud Environment, Prompt Hacking)
2. Introduce OWASP Top 10 for LLM Applications
3. Introduce security use cases that can leverage AI technology (SOC, Malware Analysis, Code Review)
CYBERSEC 2024 uses cookies to provide you with the best user experience possible. By continuing to use this site, you agree to the terms in our Privacy Policy 。