Get in Touch

Course Outline

Session 1 — 09:30 to 10:50 · The AI Security Landscape for Developers
  • What's different about AI security: the gap between traditional AppSec and AI-era threats
  • A quick tour of the AI stack: foundation models, fine-tuning, RAG, agents, and where risk sits in each
  • Why government services face unique exposure: data sensitivity, public accountability, and regulatory context (UK AI Principles, DPA 2018, GDPR)
  • The OWASP Top 10 for LLM Applications (2025) at a glance
  • Interactive Slido poll: "What's the single AI feature your team is most worried about securing?"

Break — 10:50 to 11:10

Session 2 — 11:10 to 12:30 · OWASP Top 10 for LLM Applications (2025) — Part 1
  • LLM01: Prompt Injection — direct and indirect attacks, real-world examples
  • LLM02: Sensitive Information Disclosure — what models leak and why
  • LLM03: Supply Chain — model provenance, third-party plugins, Hugging Face risks
  • LLM04: Data and Model Poisoning — training data integrity basics
  • LLM05: Improper Output Handling — why treating LLM output as user input matters
  • Short live demo: Prompt injection against a sample chatbot

Lunch — 12:30 to 13:20

Session 3 — 13:20 to 14:40 · OWASP Top 10 for LLM Applications (2025) — Part 2+ Hands-On Lab
  • LLM06: Excessive Agency — tool use, permissions, and the principle of least privilege
  • LLM07: System Prompt Leakage — what happens when your system prompt escapes
  • LLM08: Vector and Embedding Weaknesses — RAG-specific pitfalls
  • LLM09: Misinformation and Over-reliance — hallucinations as a security issue
  • LLM10: Unbounded Consumption — denial-of-wallet and resource exhaustion
  • Hands-on lab (~30 minutes): Delegates attack and then harden a small LLM-backed service. Each delegate tests at least one prompt-injection pattern and implements one control. Group debrief at the end.

Break — 14:40 to 15:00

Session 4 — 15:00 to 16:30 · Building Secure AI Services for Government + Close
  • Secure-by-design patterns: input validation, output filtering, guardrails
  • Authentication, authorisation, and session management for AI-exposing endpoints
  • Logging, monitoring, and incident response considerations for AI features
  • Data protection touchpoints: DPIA triggers for AI features, UK GDPR Article 22 in practice
  • Putting it together: a lightweight threat-modelling walkthrough for an AI feature
  • Implementation planning: each delegate drafts their three quick wins for the week ahead
  • Q&A, resources, and next steps

Requirements

Prerequisites
  • Working knowledge of at least one modern programming language (Python, JavaScript/TypeScript, Go, Java, or C#)
  • Basic familiarity with web applications and APIs
  • No prior AI or security background is required — the course is designed as a levelling-up foundation
Audience
  • Software developers and engineers building or integrating AI features
  • Platform engineers and DevOps engineers supporting AI workloads
  • Technical leads and architects responsible for AI-powered services
  • Security champions embedded in engineering teams
  • QA and test engineers testing AI-enabled systems
 7 Hours

Custom Corporate Training

Training solutions designed exclusively for businesses.

  • Customized Content: We adapt the syllabus and practical exercises to the real goals and needs of your project.
  • Flexible Schedule: Dates and times adapted to your team's agenda.
  • Format: Online (live), In-company (at your offices), or Hybrid.
Investment

Price per private group, online live training, starting from 1300 € + VAT*

Contact us for an exact quote and to hear our latest promotions

Provisional Upcoming Courses (Contact Us For More Information)

Related Categories