About the Course
PromptArmor brings you the "AI/LLM Security & Risk Course for TPRM". This twelve-module training course teaches the risks that AI in vendors can introduce and how to assess them.
This training course is designed for Third-Party Risk Management teams looking to enhance their understanding of AI security and risk assessment. It consists of two specialized tracks.
Track 1: AI Security for TPRM Teams focuses on core AI security concepts, including large language model (LLM) behavior, embeddings, vector databases, retrieval-augmented generation (RAG), fine-tuning, and indirect prompt injection. These are key areas where security risks can emerge in AI-driven systems, and where third-party risk professionals should look to mitigate risk.
Track 2: Holistically Assessing AI in Third Parties equips TPRM professionals with the skills to evaluate the AI-enabled features that vendors offer. It covers AI application architecture, data usage policies, cybersecurity, data privacy, governance, and model risks, providing a comprehensive framework for assessing AI-related risks in third parties. Together, the two tracks offer a structured approach to understanding and mitigating AI security threats in third parties.
If you have any questions, please contact PromptArmor at support@usepromptarmor.com.
Your Instructor
N/A – On-Demand
PromptArmor helps TPRM teams assess and continuously monitor the risks of AI in vendors. We have extensive experience in finding novel threats in AI-enabled applications and in recommending concrete, actionable remediations, both through configuration changes and through questions to put to the vendor.