CompTIA SecAI+

Price: $2,795.00
Duration: 5 days
Certification: CompTIA SecAI+
Exam: CY0-001
Continuing Education Credits:
Learning Credits:

CompTIA SecAI+ enables a safer digital future by empowering IT and cybersecurity talent worldwide to meet the emerging challenges and opportunities at the intersection of AI and security. 


CompTIA SecAI+ is the global IT industry’s first comprehensive “expansion” certification focused on the security of artificial intelligence systems and the secure application of AI in cybersecurity operations. This certification equips professionals with critical, vendor-neutral skills to understand, defend, and ethically deploy AI technologies within any organization. 


Upcoming Class Dates and Times

All Sunset Learning courses are guaranteed to run

Course Outline and Details

  • This is equivalent to 3–4 years of IT experience with approximately 2 years of hands-on cybersecurity experience.


  • Individuals seeking CompTIA SecAI+ certification (CY0-001) exam.
  • Learners will develop practical, job-ready skills for recognizing and responding to AI-driven threats, such as prompt injection, model abuse, data leakage, and adversarial attacks. They will be able to apply secure-by-design principles to AI use cases, policies, and workflows. Upon completion, learners will be prepared to contribute to risk management, governance, and incident response activities involving AI. 
  • SecAI+ also provides a verifiable way to demonstrate your skills to employers, helping you stand out in a competitive market. They signal a commitment to ongoing professional development and can support career advancement, role expansion, or salary growth over time.

Basic AI Concepts Related to Cybersecurity 

Compare and contrast various AI types and techniques used in cybersecurity. 

  • Types of AI 
    • Generative AI 
    • Machine learning 
    • Statistical learning 
    • Transformers 
    • Deep learning 
    • Natural language processing (NLP) 
      • Large language models (LLMs) 
      • Small language models (SLMs) 
      • Generative adversarial networks (GANs) 
  • Model training techniques 
    • Model validation 
    • Supervised learning 
    • Unsupervised learning 
    • Reinforcement learning 
    • Fine-tuning 
      • Epoch 
      • Pruning 
      • Quantization 
  • Prompt engineering 
    • System prompts 
    • User prompts 
    • One-shot prompting 
    • Multi-shot prompting 
    • Zero-shot prompting 
    • System roles 
    • Templates 

Explain the importance of data security in relation to AI. 

  • Data processing 
    • Data cleansing 
    • Data verification 
    • Data lineage 
    • Data integrity 
    • Data provenance 
    • Data augmentation 
    • Data balancing 
  • Data types 
    • Structured data 
    • Semi-structured data 
    • Unstructured data 
  • Watermarking 
  • Retrieval-augmented generation (RAG) 
    • Vector storage 
    • Embeddings 
  • Business use case 
    • Alignment with corporate objectives 
  • Data collection 
    • Trustworthiness 
    • Authenticity 
  • Data preparation 
  • Model development/selection 
  • Model evaluation 
  • Deployment 
  • Validation 
  • Monitoring and maintenance 
  • Feedback and iteration 
  • Human-centric AI design principles 
    • Human-in-the-loop 
    • Human oversight 
    • Human validation

Securing AI Systems 

Given a scenario, use AI threat-modeling resources. 

  • Open Worldwide Application Security Project (OWASP) Top 10 
    • LLM Top 10 
    • Machine Learning (ML) Security Top 10 
  • Massachusetts Institute of Technology (MIT) AI Risk Repository 
  • MITRE Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS) 
  • Common Vulnerabilities and Exposures (CVE) AI Working Group 
  • Threat-modeling frameworks 

Given a set of requirements, implement security controls for AI systems. 

  • Model controls 
    • Model evaluation 
    • Model guardrails 
      • Prompt templates 
  • Gateway controls 
    • Prompt firewalls 
    • Rate limits 
    • Token limits 
    • Input quotas 
      • Data size 
      • Quantity 
    • Modality limits 
    • Endpoint access controls 
  • Guardrail testing and validation 

Given a scenario, implement appropriate access controls for AI systems. 

  • Model access 
  • Data access 
  • Agent access 
  • Network/application programming interface (API) access 

Given a scenario, implement data security controls for AI systems. 

  • Encryption requirements 
    • In transit 
    • At rest 
    • In use 
  • Data safety 
    • Data anonymization 
    • Data classification labels 
    • Data redaction 
    • Data masking 
    • Data minimization

Given a scenario, implement monitoring and auditing for AI systems. 

  • Prompt monitoring 
    • Query 
    • Response 
  • Log monitoring 
  • Log sanitization 
  • Log protection 
  • Response confidence level 
  • Rate monitoring 
  • AI cost monitoring 
    • Prompts 
    • Storage 
    • Response 
    • Processing 
  • Auditing for quality and compliance 
    • Hallucinations 
    • Accuracy 
    • Bias and fairness 
    • Access 

Given a scenario, analyze the evidence of an attack and suggest compensating controls for AI systems. 

  • Attacks 
    • Prompt injection 
    • Poisoning 
      • Model poisoning 
      • Data poisoning 
    • Jailbreaking 
    • Hallucinations 
    • Input manipulation 
    • Introducing biases 
    • Circumventing AI guardrails 
    • Manipulating application integrations 
    • Model inversion 
    • Model theft 
    • AI supply chain attacks 
    • Transfer learning attacks 
    • Model skewing 
    • Output integrity attacks 
    • Membership inference 
    • Insecure output handling 
    • Model denial of service (DoS) 
    • Sensitive information disclosure 
    • Insecure plug-in design 
    • Excessive agency 
    • Overreliance 
  • Compensating controls 
    • Prompt firewalls 
    • Model guardrails 
    • Access controls 
    • Data integrity controls 
    • Encryption 
    • Prompt templates 
    • Rate limiting 
    • Least privilege

AI-assisted Security 

Given a scenario, use AI-enabled tools to facilitate security tasks. 

    • Tools/applications 
    • Integrated development environment (IDE) plug-ins 
    • Browser plug-ins 
    • Command-line interface (CLI) plug-ins 
    • Chatbots 
    • Personal assistants 
    • Model Context Protocol (MCP) server 
  • Use cases 
    • Signature matching 
    • Code quality and linting 
    • Vulnerability analysis 
    • Automated penetration testing 
    • Anomaly detection 
    • Pattern recognition 
    • Incident management 
    • Threat modeling 
    • Fraud detection 
    • Translation 
    • Summarization 

Explain how AI enables or enhances attack vectors. 

  • AI-generated content (deepfake) 
    • Impersonation 
    • Misinformation 
    • Disinformation 
  • Adversarial networks 
  • Reconnaissance 
  • Social engineering 
  • Obfuscation 
  • Automated data correlation 
  • Automated attack generation 
    • Attack vector discovery 
    • Payloads 
    • Malware 
    • Honeypot 
    • Distributed denial of service (DDoS)

Given a scenario, use AI to automate security tasks.  

  • Scripting tools 
    • Low-code 
    • No-code 
  • Document synthesis and summarization 
  • Incident response ticket management 
  • Change management 
    • AI-assisted approvals 
    • Automated deployment/rollback  
  • AI agents 
  • Continuous integration and continuous deployment (CI/CD) 
    • Code scanning 
    • Software composition analysis 
    • Unit testing 
    • Regression testing 
    • Model testing 
    • Automated deployment/rollback

AI Governance, Risk, and Compliance 

Explain organizational governance structures that support AI. 

  • Organizational structures 
    • AI Center of Excellence 
    • AI policies and procedures 
  • AI-related roles 
    • Data scientist 
    • AI architect 
    • Machine learning engineer 
    • Platform engineer 
    • MLOps engineer 
    • AI security architect 
    • AI governance engineer 
    • AI risk analyst 
    • AI auditor 
    • Data engineer 

Explain risks associated with AI. 

  • Responsible AI 
    • Fairness 
    • Reliability and safety 
    • Transparency 
    • Privacy and security 
    • Explainability 
    • Inclusiveness 
    • Accountability 
    • Consistency 
    • Awareness training 
  • Risks 
    • Introduction of bias 
    • Accidental data leakage 
    • Reputational loss 
    • Accuracy and performance of the model 
    • Intellectual Property (IP)-related risks 
    • Autonomous systems 
  • Shadow IT 
    • Shadow AI 

Summarize the impact of compliance on business use and development of AI. 

  • European Union (EU) AI Act 
  • Organisation for Economic Co-operation and Development (OECD) standards 
  • ISO AI standards 
  • National Institute of Standards and Technology (NIST AI Risk Management (AIRMF) 
  • Corporate policies 
    • Sanctioned vs. unsanctioned 
    • Private vs. public models 
    • Sensitive data governance 
  • Third-party compliance evaluations 
  • Data sovereignty

Course Delivery Options

Train face-to-face with the live instructor. (Please note, not all classes will have this option)
Access to on-demand training content anytime, anywhere. (Please note, not all classes will have this option)
Attend the live class from the comfort of your home or office.
Interact with a live, remote instructor from a specialized, HD-equipped classroom near you. An SLI sales rep will confirm location availability prior to registration confirmation.