AI SecureOps: Attacking & Defending GenAI Applications and Services (In-Person)
2-day in-person course (starting Thursday 24th of April)
By 2026, Gartner, Inc. predicts that over 80% of enterprises will engage with GenAI models, up from less than 5% in 2023. This rapid adoption presents a new challenge for security professionals. To bring you up to speed, this training provides essential GenAI and LLM security skills through an immersive CTF-styled framework. Delve into sophisticated techniques for mitigating LLM threats, engineering robust defense mechanisms, and operationalizing LLM agents, preparing them to address the complex security challenges posed by the rapid expansion of GenAI technologies. You will be provided with access to a live playground with custom built AI applications replicating real-world attack scenarios.
The course focuses on safeguarding both public GenAI services and proprietary enterprise LLM solutions. This dual approach ensures comprehensive coverage of ""securing GenAI technologies"" alongside ""leveraging GenAI for enhancing security"". Mastering these two dimensions is crucial for developing sophisticated defense infrastructures in enterprise environments.
This dense training will navigate you through areas like the red and blue team strategies, create robust LLM defenses, incident response in LLM attacks, implement a Responsible AI(RAI) program and enforce ethical AI standards across enterprise services, with the focus on improving the entire GenAI supply chain.
This training will also cover the completely new segment of Responsible AI(RAI), ethics and trustworthiness in GenAI services. Unlike traditional cybersecurity verticals, these unique challenges such as bias detection, managing risky behaviors, and implementing mechanisms for tracking information are going to be the key challenges for enterprise security teams.
By the end of this training, you will be able to:
Break from prompts to code & command execution uncovering scenarios like cross site scripting, SQL injection, insecure agentic designs and abusing LLM capabilities for remote code execution for infrastructure takeover.
Conduct red-teaming on GenAI application using adversary simulation, LLM top 10 and MITRE Atlas frameworks, and apply AI security and ethical principles in real-world scenarios.
Execute and defend against adversarial attacks, including prompt injection, data poisoning, model inversion, agentic attacks and more.
Perform advanced AI red-teaming through multi-agent based auto-prompting attacks(attacking LLMs with LLMs).
Build LLM security scanners to protect injections, manipulations & risky behaviors as well as defending LLMs with LLMs.
Develop and deploy enterprise-grade LLM defenses, including custom guardrails for input/output protection, benchmarking models for security and pen-testing of LLM Agents.
Use open-source tooling, HuggingFace, Langchain, OpenAI, NeMo, Streamlit and much more to craft your own tools and get up to speed with GenAI development.
Establish a comprehensive LLM SecOps process(assisted through GenAI), to secure the supply chain against adversarial attacks and perform a comprehensive threat model of enterprise applications.
Implementing an Incident response and risk management plan for enterprises building or utilizing GenAI services.
Course Overview
Introduction
Introduction to LLM and GenAI.
LLM & GenAI terminologies and architecture.
Technology use-cases.
Agents, multi-agents and multi-modal models.
Elements of AI Security
Understanding AI vulnerabilities with case studies on AI security breaches.
Application of Security.
Deploying and running LLM model locally.
Principles of AI ethics and Safety. - OWASP LLM top 10.
MITRE mapping of attacks on GenAI Supply chain.
Prompt Generation for solving specific security cases.
Build defense against local and global models.
Adversarial LLM Attacks and Defenses
Direct and Indirect Prompt Injection attacks.
Advance prompt injections through obfuscation and cross-model injections.
Breaking instruction boundaries and trust criteria.
Advance LLM red teaming: Automating multi-agent conversation to prompt inject models at scale.
Attacking LLM Agents for task manipulation and risky behavior.
Adversarial examples, training data extraction, model extraction, and data poisoning.
Attack mapping through LLM top 10 and MITRE Atlas frameworks.
Defense automation through prompt output validation using GenAI as well as static lists.
Benchmarking LLMs from generating insecure code or aid In carrying out cyber attacks.
Building Enterprise grade LLM defenses
Deploying LLM Security scanner, adding custom rules, prompt block lists and guardrails.
writing custom detection logic, trustworthiness check and filters.
Protecting RAG enabled GenAI agents from emitting sensitive data & confidential internal data.
Attack simulation and defense use-cases against financial fraud & agent manipulation.
Building LLM & GenAI SecOps process
Summarizing the learnings into building SecOps process.
Monitoring trustworthiness and safety of enterprise LLM applications.
Implementing NIST AI Risk management framework(RMF) for security monitoring.
In conclusion
Top 3 takeaways you will learn
Participants will gain expertise in identifying and countering advanced adversarial attacks and implementing their counter measures.
Skills to build and deploy comprehensive LLM defenses, including custom guardrails and security scanners, ensuring robust protection for both public and private AI services.
Knowledge in utilizing and deploying cutting-edge AI tools and models for security purposes, including RAG for custom LLM agent training and securing the AI supply chain.
Target Audience
Security professionals seeking to update their skills for the AI era.
Red & Blue team members.
AI Developers & Engineers interested in the security aspects of AI and LLM models.
Product Managers & Founders looking to strenthen their PoVs and models with security best practices.
Pre-requisites
Familiarity with AI and machine learning concepts is beneficial but not required.
Ability to run python codes and notebooks.
Familiarity with common GenAI applications like OpenAI.
What should students bring
Familiarity with AI and machine learning concepts is beneficial but not required.
API keys for OpenAI, Anthropic.
Google Colab account.
Complete the pre-training setup before the first day.
Everything is going to be hosted online, so no special hardware requirements.
What will students be provided with
One year access to a live interactive playground with various exercises to practice different attack and defense scenarios for GenAI and LLM applications.
""AI SecureOps"" Metal coin for CTF players.
Complete course guide containing 200+ pages in PDF format. It will contain step-by-step guidelines for all the exercises, labs, and a detailed explanation of concepts discussed during the training.
PDF versions of slides that will be used during the training.
Access to Slack channel for continued engagement, support and development.
Access to Github account for accessing custom-built source codes and tools.
Trainer Bio
Abhinav Singh is an esteemed cybersecurity leader & researcher with over a decade of experience across technology leaders, financial institutions, and as an independent trainer and consultant. Author of "Metasploit Penetration Testing Cookbook" and "Instant Wireshark Starter," his contributions span patents, open-source tools, and numerous publications. Recognized in security portals and digital platforms, Abhinav is a sought-after speaker & trainer at international conferences like Black Hat, RSA, DEFCON, BruCon and many more, where he shares his deep industry insights and innovative approaches in cybersecurity. He also leads multiple AI security groups at CSA, responsible for coming up with cutting-edge whitepapers and industry reports around safety and security of GenAI.