Building with Open-Source Generative AI (AIARC-101)
Price: $2,595.00
Duration: 3 days
Learn to build practical AI applications through hands-on labs. You will design and develop Transformer models while keeping data security in view. The course covers Transformer architectures, Python programming, hardware requirements, training techniques, and AI tasks such as classification and regression. It includes hands-on exercises with open-source LLM frameworks, advanced topics such as fine-tuning and quantization, and AI certification from Alta3 Research. Ideal for Python developers, DevSecOps engineers, and managers or directors, the course requires basic Python skills and provides access to a GPU-accelerated server for practical experience.
Course Outline and Details
Prerequisites
- Python – PCEP Certification or Equivalent Experience
- Familiarity with Linux
Target Audience
- Project Managers
- Architects
- Developers
- Data Acquisition Specialists
Course Objectives
- Train and optimize Transformer models with PyTorch.
- Master advanced prompt engineering.
- Understand AI architecture, especially Transformers.
- Write a real-world AI web application.
- Describe tokenization and word embeddings.
- Install and use open-source frameworks such as llama.cpp to run Llama 2 models.
- Apply strategies to maximize model performance.
- Explore model quantization and fine-tuning.
- Compare CPU vs. GPU hardware acceleration.
- Understand chat vs. instruct interaction modes.
Course Outline
Learning Your Environment
- Using Vim
- Tmux
- VS Code Integration
- Revision Control with GitHub
Deep Learning Intro
- What is Intelligence?
- Generative AI Unveiled
- The Transformer Model
- Feed Forward Neural Networks
- Tokenization
- Word Embeddings
- Positional Encoding
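To make these topics concrete, here is a minimal sketch in PyTorch (the framework used later in the course) showing character-level tokenization, a learned embedding table, and sinusoidal positional encoding; the sample text, vocabulary, and dimensions are illustrative, not lab code.

```python
import math
import torch
import torch.nn as nn

# Character-level tokenization: map each character to an integer id.
text = "hello transformers"
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
tokens = torch.tensor([stoi[ch] for ch in text])      # shape: (seq_len,)

# Word (token) embeddings: a learned lookup table of vectors.
d_model = 16
embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=d_model)
x = embedding(tokens)                                 # shape: (seq_len, d_model)

# Sinusoidal positional encoding, as in "Attention Is All You Need".
def positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    pos = torch.arange(seq_len).unsqueeze(1).float()
    div = torch.exp(torch.arange(0, d_model, 2).float()
                    * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe

x = x + positional_encoding(len(tokens), d_model)     # inject token order
print(x.shape)                                        # torch.Size([18, 16])
```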
Build a Transformer Model from Scratch
- PyTorch
- Construct a Tensor from a Dataset
- Orchestrate Tensors in Blocks and Batches
- Initialize PyTorch Generator Function
- Train the Transformer Model
- Apply Positional Encoding and Self-Attention
- Attach the Feed Forward Neural Network
- Build the Decoder Block
- Transformer Model as Code
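A compressed sketch of the decoder block this module builds, assuming a GPT-style layout (masked self-attention followed by a feed-forward network, with residual connections and layer norm); it is one plausible arrangement, not the course's exact model.

```python
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    """One GPT-style decoder block: masked self-attention + feed-forward."""
    def __init__(self, d_model: int = 64, n_heads: int = 4, block_size: int = 32):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffwd = nn.Sequential(             # position-wise feed-forward net
            nn.Linear(d_model, 4 * d_model),
            nn.ReLU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)
        # Causal mask: position i may only attend to positions <= i.
        mask = torch.triu(torch.ones(block_size, block_size), diagonal=1).bool()
        self.register_buffer("mask", mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        t = x.size(1)
        h = self.ln1(x)
        a, _ = self.attn(h, h, h, attn_mask=self.mask[:t, :t])
        x = x + a                              # residual connection
        x = x + self.ffwd(self.ln2(x))         # residual connection
        return x

block = DecoderBlock()
out = block(torch.randn(2, 32, 64))            # (batch, seq_len, d_model)
print(out.shape)                               # torch.Size([2, 32, 64])
```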
Prompt Engineering
- Introduction to Prompt Engineering
- Getting Started with Gemini
- Developing Basic Prompts
- Intermediate Prompts: Define Task/Inputs/Outputs/Constraints/Style
- Advanced Prompts: Chaining, Set Role, Feedback, Examples
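A sketch of the intermediate pattern above (task/inputs/outputs/constraints/style) as a reusable template, plus a two-step chain. The ask() helper is a hypothetical stand-in; in class it would call Gemini or a local model endpoint.

```python
def ask(prompt: str) -> str:
    """Hypothetical stand-in for a real model call (Gemini, local Llama, etc.)."""
    print(f"--- prompt ---\n{prompt}\n")
    return "(model response would appear here)"

TEMPLATE = """Task: {task}
Inputs: {inputs}
Output format: {outputs}
Constraints: {constraints}
Style: {style}"""

prompt = TEMPLATE.format(
    task="Summarize the incident report below.",
    inputs="<paste report here>",
    outputs="Three bullet points, each under 20 words.",
    constraints="Use only facts present in the report.",
    style="Neutral, technical.",
)

# Prompt chaining: feed the first answer into a follow-up prompt
# that also sets a role and asks for a constrained output.
summary = ask(prompt)
followup = ask("You are a site-reliability engineer. Given this summary:\n"
               f"{summary}\nList two follow-up questions for the on-call team.")
```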
Hardware Requirements
- The GPU's role in AI performance (CPU vs. GPU)
- Current GPUs: cost vs. value
- Tensor Core vs. older GPU architectures
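A small benchmark sketch that makes the CPU-vs-GPU comparison tangible; the matrix size is arbitrary, and the script simply skips the GPU timing when no CUDA device is available.

```python
import time
import torch

def time_matmul(device: str, n: int = 2048, repeats: int = 5) -> float:
    """Time an n x n matrix multiply, the core workload of Transformer layers."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()          # finish any pending GPU work first
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    if device == "cuda":
        torch.cuda.synchronize()          # wait for the GPU to actually finish
    return (time.perf_counter() - start) / repeats

print(f"CPU: {time_matmul('cpu'):.4f} s per matmul")
if torch.cuda.is_available():
    # Tensor Cores engage for float16/bfloat16 matmuls on Volta-class
    # GPUs and newer, widening the gap further than this float32 test shows.
    print(f"GPU: {time_matmul('cuda'):.4f} s per matmul")
```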
Pre-trained LLMs
- A History of Neural Network Architectures
- Introduction to the llama.cpp Interface
- Preparing the A100 for Server Operations
- Operate Llama 2 Models with llama.cpp
- Selecting a Quantization Level to Meet Performance and Perplexity Requirements
- Running the llama.cpp Package
- Llama Interactive Mode
- Persistent Context with Llama
- Constraining Output with Grammars
- Deploy a Llama API Server
- Develop a Llama Client Application
- Write a Real-World AI Application Using the Llama API
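A minimal client sketch against the llama.cpp server's native /completion endpoint, along the lines of the final labs; the model path, host, and port are assumptions for illustration.

```python
import json
import urllib.request

# Assumes a llama.cpp server is already running locally, e.g.:
#   ./llama-server -m ./models/llama-2-7b-chat.Q4_K_M.gguf --port 8080
URL = "http://localhost:8080/completion"   # llama.cpp's native endpoint

def complete(prompt: str, n_predict: int = 128) -> str:
    """Send a prompt to the llama.cpp server and return the generated text."""
    payload = json.dumps({"prompt": prompt, "n_predict": n_predict}).encode()
    req = urllib.request.Request(URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]

print(complete("Q: What is quantization in one sentence?\nA:"))
```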
Fine-Tuning
- Using PyTorch to Fine-Tune Models
- Advanced Prompt Engineering Techniques
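Fine-tuning boils down to continued training at a conservative learning rate. The sketch below uses a tiny stand-in model and random token data rather than a real pre-trained checkpoint, purely to show the loop's shape.

```python
import torch
import torch.nn as nn

# Stand-ins: in the lab this would be a pre-trained Transformer and a real dataset.
vocab_size, d_model = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, d_model),
                      nn.Linear(d_model, vocab_size))
inputs = torch.randint(0, vocab_size, (64, 16))    # (batch, seq_len)
targets = torch.randint(0, vocab_size, (64, 16))

# Fine-tuning = continued training with a small learning rate,
# so the model adapts without forgetting its pre-trained knowledge.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss_fn = nn.CrossEntropyLoss()

model.train()
for step in range(10):
    logits = model(inputs)                         # (batch, seq_len, vocab)
    loss = loss_fn(logits.view(-1, vocab_size), targets.view(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")
```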
Testing and Pushing Limits
- Maximizing Model Limits
Curriculum Path: Generative AI