Fine-Tuning Large Language Models (FT-LLM)

Price: $2,495.00
Duration: 3 days

You will develop the skills to gather, clean, and organize data for fine-tuning pre-trained LLMs and Generative AI models. Through a combination of lectures and hands-on labs, you will use Python to fine-tune open-source Transformer models, gain practical experience with LLM frameworks, learn essential training techniques, and explore advanced topics such as quantization. During the hands-on labs, you will work on a GPU-accelerated server with industry-standard tools and frameworks.

Upcoming Class Dates and Times

All Sunset Learning courses are guaranteed to run.

Course Outline and Details

Prerequisites

  • Python – PCEP Certification or Equivalent Experience
  • Familiarity with Linux

Who Should Attend

  • Project Managers
  • Architects
  • Developers
  • Data Acquisition Specialists

Course Objectives

  • Clean and Curate Data for AI Fine-Tuning
  • Establish guidelines for obtaining raw data
  • Go from drowning in data to clean data
  • Fine-Tune AI Models with PyTorch
  • Understand AI architecture: the Transformer model
  • Describe tokenization and word embeddings
  • Install and use AI frameworks such as llama.cpp with Llama 3 models
  • Perform LoRA and QLoRA Fine-Tuning
  • Explore model quantization and fine-tuning
  • Deploy and Maximize AI Model Performance

Learning Your Environment

  • Using Vim
  • Tmux
  • VS Code Integration
  • Revision Control with GitHub

Data Curation for AI

  • Curating Data for AI
  • Gathering Raw Data
  • Data Cleaning and Preparation
  • Data Labeling
  • Data Organization
  • Premade Datasets for Fine-Tuning
  • Obtain and Prepare Premade Datasets (see the sketch after this list)
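
To give a flavor of this module, here is a minimal data-curation sketch. It assumes the Hugging Face datasets library and uses the public alpaca-cleaned instruction dataset as a stand-in; the dataset, cleaning rules, and output format are illustrative placeholders rather than the exact lab workflow.

```python
# Data-curation sketch (assumes the Hugging Face "datasets" library; the dataset
# name below is only a placeholder for whatever raw data you gather in the labs).
from datasets import load_dataset

# Load a premade instruction-tuning dataset.
raw = load_dataset("yahma/alpaca-cleaned", split="train")

# Basic cleaning: drop empty responses and overly long examples.
def keep(example):
    return bool(example["output"].strip()) and len(example["output"]) < 4000

clean = raw.filter(keep)

# Organize each record into a single prompt/response text field for fine-tuning.
def to_text(example):
    prompt = example["instruction"]
    if example.get("input"):
        prompt += "\n" + example["input"]
    return {"text": f"### Instruction:\n{prompt}\n\n### Response:\n{example['output']}"}

clean = clean.map(to_text)
clean.to_json("curated_train.jsonl")  # save the curated split for later fine-tuning
```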

Deep Learning

  • What is Intelligence?
  • Generative AI
  • The Transformer Model
  • Feed Forward Neural Networks
  • Tokenization
  • Word Embeddings (illustrated together with tokenization in the sketch after this list)
  • Positional Encoding
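
The sketch below illustrates tokenization and word embeddings with the Hugging Face transformers library, using the small GPT-2 checkpoint purely so it runs quickly; the course labs may use a different model and tooling.

```python
# Tokenization and word-embedding sketch (assumes the Hugging Face "transformers"
# library; GPT-2 is chosen only because it is small and public).
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

text = "Transformers turn text into numbers."
tokens = tokenizer.tokenize(text)              # subword tokens from the BPE tokenizer
inputs = tokenizer(text, return_tensors="pt")  # token IDs plus attention mask

# Each token ID indexes one row of the learned embedding matrix (vocab_size x hidden_size).
embeddings = model.get_input_embeddings()(inputs["input_ids"])

print(tokens)
print(embeddings.shape)  # (1, sequence_length, 768) for GPT-2
```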

Pre-trained LLM

  • A History of Neural Network Architectures
  • Introduction to the llama.cpp Interface
  • Preparing the A100 for Server Operations
  • Operate Llama 3 Models with llama.cpp
  • Selecting a Quantization Level to Meet Performance and Perplexity Requirements (see the sketch after this list)
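
In the labs this module is taught with llama.cpp on the GPU server; the sketch below is an assumed, minimal version using the llama-cpp-python bindings and a placeholder Q4_K_M GGUF file, just to show how a chosen quantization level appears in practice.

```python
# Quantized-inference sketch (assumes the llama-cpp-python bindings and a locally
# downloaded GGUF file; the model path and quantization level are placeholders).
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical path
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU (e.g. an A100)
)

out = llm("Explain LoRA fine-tuning in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```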

Fine-Tuning

  • Fine-Tuning a Pre-Trained LLM
  • PyTorch
  • Basic Fine Tuning with PyTorch
  • LoRA Fine-Tuning Llama 3 8B
  • QLoRA Fine-Tuning Llama 3 8B (see the sketch after this list)
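
A minimal LoRA/QLoRA sketch follows, assuming the Hugging Face transformers, peft, and bitsandbytes libraries; the base model name, target modules, and hyperparameters are illustrative defaults rather than the lab's exact configuration.

```python
# LoRA / QLoRA fine-tuning sketch (assumes transformers, peft, and bitsandbytes;
# the base model, target modules, and hyperparameters are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base = "meta-llama/Meta-Llama-3-8B"  # gated checkpoint; any causal LM works for the sketch

# QLoRA: load the frozen base model in 4-bit NF4 so it fits on a single GPU.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# LoRA: train small low-rank adapter matrices instead of the full weights.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the base model's parameters
# From here, train with transformers.Trainer (or a similar trainer) on the curated dataset.
```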

Operating the Fine-Tuned Model

  • Running the llama.cpp Package
  • Deploy a Llama API Server
  • Develop a Llama Client Application
  • Write a Real-World AI Application Using the Llama API (see the sketch after this list)
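
As a sketch of the client side, the example below assumes the fine-tuned model is already being served by llama.cpp's llama-server with its OpenAI-compatible chat endpoint on the default local port; the host, port, model name, and prompts are placeholders.

```python
# Llama API client sketch (assumes a local llama.cpp "llama-server" instance
# exposing the OpenAI-compatible /v1/chat/completions endpoint on port 8080).
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "llama-3-8b-finetuned",  # placeholder; the server uses whatever model it loaded
        "messages": [
            {"role": "system", "content": "You are a concise technical assistant."},
            {"role": "user", "content": "Summarize what QLoRA changes about fine-tuning."},
        ],
        "max_tokens": 128,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```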

Course Delivery Options

  • Train face-to-face with a live instructor. (Please note, not all classes will have this option.)
  • Access on-demand training content anytime, anywhere. (Please note, not all classes will have this option.)
  • Attend the live class from the comfort of your home or office.
  • Interact with a live, remote instructor from a specialized, HD-equipped classroom near you. An SLI sales rep will confirm location availability prior to registration confirmation.
Course Giveaway
Cisco Security Digital Course

Sign up anytime this month for a chance to win a FREE Cisco Security Digital course of your choice! The winner will be announced November 3rd and will receive six months of access to the digital course of their choice!
