AI Security Forum

AI Security Forum will focus on safeguarding the security of artificial intelligence systems. Discussions will cover machine learning vulnerabilities, counter-adversarial attacks, and practical AI security strategies to ensure the stable operation of the digital intelligent world.

TIME & LOCATION
  • 5/16 (Thu.) 14:00 - 17:00 | 701B Meeting Room
AGENDA
    5 / 14
    14:00 - 15:00
    Kai-Ping Hsu / Associate Director, Information Management Division, Computer and Information Networking Center, National Taiwan University
    5 / 14
    15:45 - 16:15
    Chen, Shu-Yuan / Data Scientist, CyCraft Technology

    We will evaluate the capabilities of different models in the field of cybersecurity in Taiwan from various aspects (e.g., harmlessness and localization). We will analyze the performance of models such as TAIDE, Taiwan LLM, and CyCraftGPT, the LLM developed by CyCraft, to understand their applicability to various issues and help the audience choose suitable models more quickly in the future.

    • AI
    • AI Security
    • Large Language Model
    5 / 14
    16:30 - 17:00
    Stanley Chou / CISO, OneDegree

     In this session, the speaker aims to shed light on exploiting Large Language Models (LLMs) through adversarial attacks. The speaker will cover the common LLM use cases, highlight emerging threats, and delve into LLM adversarial attacks with practical examples to illustrate their impact. The presentation will introduce the concept of LLM red teaming, emphasizing its critical role in evaluating and enhancing LLM security for AI trustworthiness. Ultimately, this speech seeks to elevate the audience's understanding and encourage proactive strategies to safeguard against these adversarial threats.

    • AI
    • AI Security
    • Compliance
    5 / 16
    14:00 - 14:30
    Vic Huang / Member, UCCU Hacker

    Since the advent of ChatGPT, generative AI has attracted everyone's attention. The sudden emergence of generative AI technologies has caught data protection regulators by surprise. As more countries investigate artificial intelligence companies like OpenAI, a clash between technological advances and privacy laws seems inevitable. For example, in March 2023, the Italian data protection authority, the Garante, stated that OpenAI's massive collection and storage of personal data to train chatbots lacked any legal basis, and accused OpenAI of failing to implement an age verification mechanism requiring users to be over 13 years old. After OpenAI implemented corrective measures, the ChatGPT service was reinstated in Italy on April 28.

    From the OWASP Top 10 for LLM Applications, the OpenAI bug bounty program, and the now-familiar prompt injection attacks, there are plenty of security issues worth attention in this area. We also found inconsistencies in how three popular AI models define privacy when asked the same question about personal information. As many professionals have noted, privacy is likewise a critical issue in this field.

    This situation underscores the need for AI developers to reevaluate their data collection and use methods while complying with privacy laws. The privacy issues and considerations include:

    • General principles of data collection and processing: such as informed consent, limiting the storage time of data, ensuring that data is only used for specific purposes, and strengthening data security.
    • AI may be classified as automated individual decision-making
    • Privacy by Design
    • Clarify the roles of controller and processor: it is common to embed ChatGPT or other AI tools into third-party services or products, such as customer service chatbots. The third party using the AI tool may become a data controller under the GDPR, while OpenAI becomes the data processor. Each would have different functions and bear different protection responsibilities.

    Finally, this talk also reviews privacy issues from the technology side and under the data protection laws of different jurisdictions.

    • Privacy
    • AI
    • AI Security
    5 / 16
    14:45 - 15:15
    Jeremy Chiu (aka Birdman) / Founder & CTO, CyCraft Technology

    Generative AI has swiftly infiltrated various industries and is beginning to be applied in diverse facets of our daily lives. However, this new AI technology may feel unfamiliar to cybersecurity professionals. Yet, due to a shortage of manpower, there is an urgent need for AI automation technologies to handle tasks ranging from daily intelligence gathering and alert analysis to forensic reporting and responding to cybersecurity inquiries from clients.

    At CyCraft, we have a robust AI research team. By leveraging our fine-tuned LLM technology, coupled with our new Corrective RAG AI technology, we integrate AI into cybersecurity processes in three key areas: Cyber News Intelligence Robot, Red Team Attack Simulation Robot, and Blue Team Incident Response Robot. We'll share practical experiences and insights through real-world case studies.

    • AI
    • Blue Team
    5 / 16
    15:45 - 16:15
    Daniel Yeung / Partner Technical Manager, North Asia, Splunk

    For many of the world's largest, most complex organizations, Splunk is at the heart of security operations. We help CISOs and their teams quickly detect and respond to emerging threats when incidents inevitably occur, and successfully act as business enablers. But we also want to know what global security leaders really think about AI.

    In the CISO Report, we'll share our initial findings and provide insights on how leaders can evolve with the cybersecurity landscape.

    • AI Security
    • Cyber Resilience
    5 / 16
    16:30 - 17:00
    Nick Cheng / Customer Engineer, Google Cloud

    Summary:

    1. Introduce security risks related to Generative AI (e.g. Privacy, Data Security, Cloud Environment, Prompt Hacking)

    2. Introduce OWASP Top 10 for LLM Applications

    3. Introduce security use cases that can leverage AI technology (SOC, Malware Analysis, Code Review)

    • AI Security
    • Blue Team
    • Cloud Security