AI Security Training
This advanced course equips cybersecurity professionals and AI/ML engineers with practical, hands-on skills to secure AI systems across their entire lifecycle. Through live instruction, group work, and lab-based simulations, participants will learn to model AI-specific threats, design secure architectures, defend against adversarial attacks, monitor for anomalies, and respond to real-world security incidents.
Advanced level
Specialized knowledge
English language
Next Start Date
27 November 2025
Online Instructor-led Training
4-week training | 4 hours per week | 16 hours in total | Thursdays from 16:00 to 20:00
Course price: CHF 1'590 incl. materials, licence fees, and internal examination.
Master AI Security Now!

Course objectives
By the end of the course you will be able to:
- Model AI-specific threats, identifying vulnerabilities like data poisoning and evasion.
- Design secure AI systems using least privilege, sandboxing, and integrity controls.
- Execute adversarial attacks and deploy effective defenses such as adversarial training.
- Set up real-time monitoring and anomaly detection mapped to compliance standards.
- Create incident response playbooks, assign roles, and coordinate containment.
- Conduct forensic analysis on AI artifacts to produce reports and recommend improvements.
Study Content
Live, interactive learning guided by international AI academics and researchers, with hands-on training exercises.
The course content is grounded in leading industry and academic practices for AI security and is aligned with frameworks and standards such as the NIST AI Risk Management Framework, ISO/IEC 42001, and MITRE ATLAS.
Week 1: Threat Modeling for AI Systems
- Why threat modeling matters for AI: differences from traditional IT and key concepts (assets, attackers, attack surface).
- Walk-through of STRIDE, PASTA, and OCTAVE tailored to AI workflows.
- Common AI vulnerabilities (data poisoning, model evasion, inference attacks) and how to map them into threat models (sketched below).
- Hands-on group exercise: build a threat model for a sample AI service in VMs.
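A minimal sketch of how such a mapping could be recorded in code (Python, which the prerequisites already assume); the asset, threat, and mitigation names below are illustrative, not taken from the course materials:

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    category: str                 # STRIDE category, e.g. "Tampering"
    description: str
    mitigations: list[str] = field(default_factory=list)

@dataclass
class Asset:
    name: str                     # e.g. "training data", "model weights"
    threats: list[Threat] = field(default_factory=list)

# Illustrative mapping: data poisoning recorded as a tampering
# threat against the training-data asset.
training_data = Asset(
    name="training data",
    threats=[Threat(
        category="Tampering",
        description="Data poisoning via a compromised labeling pipeline",
        mitigations=["provenance tracking", "outlier filtering",
                     "signed, versioned datasets"],
    )],
)
```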
Week 2: Secure AI Architecture and Pipelines
- Core principles (least privilege, defense-in-depth, fail-secure) applied to data, model, and inference components.
- Design patterns (sandboxed inference, model signing, encrypted training data) and when to use them (see the integrity-check sketch below).
- Integrating security gates into CI/CD for AI: data validation, model testing, code reviews.
- In groups, design and present a secure end-to-end AI pipeline using the provided VM sandbox.
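As one flavor of an integrity control, the sketch below refuses to load a model artifact whose SHA-256 digest does not match a pinned value. Production model signing would normally involve real signatures and key management rather than a bare hash; the function names here are ours, for illustration only:

```python
import hashlib
from pathlib import Path

def sha256_digest(path: Path) -> str:
    """Stream the file so large model artifacts need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def load_model_checked(path: Path, expected_digest: str) -> bytes:
    """Fail secure: raise instead of loading a tampered artifact."""
    if sha256_digest(path) != expected_digest:
        raise RuntimeError(f"Integrity check failed for {path}")
    return path.read_bytes()  # hand the verified bytes to your model loader
```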
Week 3: Adversarial Attacks and Defenses
- FGSM, PGD, poisoning, and backdoors: how they work under the hood (see the FGSM sketch below).
- Defenses such as adversarial training, input sanitization, and model hardening.
- Measuring robustness: the accuracy vs. robustness trade-off, certified defenses, and benchmarks.
- Red-team/blue-team exercise: attack a model, then implement and test defenses.
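For a taste of the lab work, here is a minimal FGSM sketch in PyTorch (one of the libraries named in the prerequisites), paired with the adversarial-training defense it motivates. The epsilon value is illustrative and assumes inputs scaled to [0, 1]; the model, loader, and optimizer are assumed to be defined elsewhere:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """Fast Gradient Sign Method: one signed gradient step on the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp back
    # to the valid input range.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def adversarial_training_epoch(model, loader, optimizer, eps=0.03):
    """Defense sketch: train on attacked batches so the model
    learns to resist the perturbation."""
    for x, y in loader:
        x_adv = fgsm_attack(model, x, y, eps)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```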
Week 4: Monitoring, Incident Response, and Forensics
- Logging strategies, telemetry for AI workloads, and ML-powered anomaly detection (sketched below).
- Building incident response plans for AI incidents: roles, responsibilities, and communication flows.
- Collecting and preserving AI evidence: model artifacts, logs, memory snapshots.
- Team-based exercise: detect a live compromise, contain it, and produce a forensic report.
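A hedged sketch of ML-powered anomaly detection over inference telemetry, using scikit-learn's IsolationForest (also listed in the prerequisites). The three telemetry features are hypothetical stand-ins for whatever your monitoring stack actually records:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical telemetry: one row per inference request,
# e.g. (latency_ms, input_norm, prediction_entropy).
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[50.0, 1.0, 0.4],
                      scale=[5.0, 0.1, 0.05], size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline)

# predict() returns -1 for anomalies, e.g. a request with an unusually
# large input norm (a possible evasion or probing attempt).
live = np.array([[52.0, 1.02, 0.41],   # looks like baseline traffic
                 [49.0, 9.70, 0.95]])  # outlier
print(detector.predict(live))          # e.g. [ 1 -1 ]
```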
Prerequisites
- Understanding of basic security principles (confidentiality, integrity, availability), common threats, and risk management practices.
- Basic knowledge of how ML models are trained, validated, and deployed (e.g., supervised learning, model lifecycle).
- Experience working with Linux-based environments, command-line interfaces, and virtual machines.
- Proficiency in Python is recommended, especially for interacting with ML libraries (e.g., scikit-learn, PyTorch, TensorFlow).
- Understanding of networks, APIs, cloud-based deployments, and the software development life cycle (SDLC).
Target audience
- Security engineers, analysts, and architects seeking to extend their expertise into AI-specific threat modeling, secure design, and defense strategies.
- Machine learning developers and data scientists who need to understand the security risks and defensive mechanisms associated with deploying AI systems.
- SOC analysts and blue teams involved in detecting, investigating, and responding to security incidents involving AI-powered applications.
- AI system architects who want to ensure secure integration of models, data pipelines, and APIs across the lifecycle.
CERTIFICATE
After completing the course, the Swiss Cyber Institute provides the "AI Security Engineer" certificate to all successful participants in an electronic format.
TEAM TRAINING
We design tailored team training to strengthen AI security knowledge within your organization. We are here to prepare your team for the future.