HDP Developer Quick Start


Check out our full list of training locations and learning formats. Please note that the location you choose may be an Established HD-ILT location.

What's Included With This Class?

365 Day neXT Learning Membership

Video Reference Library

Online Discussion Forums

Tech Talk Webinars

Goal-Based Learning Paths

Your neXT membership includes…

  • A 365 Day neXT Learning Membership is included with this class, giving you access to the resources below. Join thousands of other neXT members on your learning journey!


  • Video Reference Library: Thousands of recorded topics, many of which relate to the official technology curriculum, broken down into short, consumable videos. These videos are all on-demand and searchable by subject or course name. Get access to content and recordings from the entire technology stack, not just this class!


  • Online Discussion Forums: Technical discussion boards are available for you to interact with SLI instructors, SMEs, and other neXT Learning members. Post your questions and expect quick responses; the boards are monitored daily.


  • Tech Talk Webinars: SLI hosts a series of technical webinars each quarter. These are virtual, interactive sessions for customers, instructors, and SMEs to engage on a variety of member-driven topics. Sessions are recorded and archived for future viewing. Session types include delta and new-feature topics, open Q&A workshops, exam prep and guidance, and lab demos. We are always open to new ideas and topics!


  • Goal-Based Learning Paths: Learning paths are available for members who have a specific end goal in sight. SLI instructors have developed these paths, which may combine videos, blogs, articles, and quizzes to help learners meet specific objectives. Example learning paths: CCNA Exam Prep and Scripting for Beginners.

Learn More About Our Annual neXT Learning Memberships


This 4-day training course is designed for developers who need to create applications that analyze Big Data stored in Apache Hadoop using Apache Pig and Apache Hive, and to develop applications on Apache Spark.

Topics include: an essential understanding of HDP and its capabilities; Hadoop, YARN, HDFS, and MapReduce/Tez; data ingestion; using Pig and Hive to perform data analytics on Big Data; and an introduction to Spark Core, Spark SQL, Apache Zeppelin, and additional Spark features.

Target Audience

Developers and data engineers who need to understand and develop applications on HDP.


Students should be familiar with programming principles and have experience in software development. SQL and light scripting knowledge is also helpful. No prior Hadoop knowledge is required.

Course Objectives

Full Course Outline

Day 1: An Introduction to Apache Hadoop and HDFS
  • Describe the Case for Hadoop
  • Describe the Trends of Volume, Velocity and Variety
  • Discuss the Importance of Open Enterprise Hadoop
  • Describe the Hadoop Ecosystem Frameworks Across the Following Five Architectural Categories:
    • Data Management
    • Data Access
    • Data Governance & Integration
    • Security
    • Operations
  • Describe the Function and Purpose of the Hadoop Distributed File System (HDFS)
  • List the Major Architectural Components of HDFS and their Interactions
  • Describe Data Ingestion
  • Describe Batch/Bulk Ingestion Options
  • Describe the Streaming Framework Alternatives
  • Describe the Purpose and Function of MapReduce
  • Describe the Purpose and Components of YARN
  • Describe the Major Architectural Components of YARN and their Interactions
  • Define the Purpose and Function of Apache Pig
  • Work with the Grunt Shell
  • Work with Pig Latin Relation Names and Field Names
  • Describe the Pig Data Types and Schema
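The Pig topics above can be sketched in a few lines of Pig Latin run from the Grunt shell. The file path, relation names, and schema here are illustrative assumptions, not part of the course materials:

```pig
-- Load a tab-delimited file from HDFS with an explicit schema
-- (path and field names are hypothetical)
salaries = LOAD '/user/train/salaries.txt'
           AS (gender:chararray, age:int, salary:double, zip:chararray);

-- FOREACH...GENERATE projects fields from a relation
ages = FOREACH salaries GENERATE age, salary;

-- Filter and order, then inspect the schema and the results
high = FILTER ages BY salary > 100000.0;
sorted_high = ORDER high BY age ASC;

DESCRIBE sorted_high;   -- prints the schema of the relation
DUMP sorted_high;       -- executes the plan and prints the results
```

Note that relations like `salaries` are names for lazy dataflows, not variables holding data; nothing runs until an output operator such as DUMP or STORE is reached.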

Day 1 Labs and Demonstrations
  • Starting an HDP Cluster
  • Using HDFS Commands
  • Demonstration: Understanding Apache Pig
  • Getting Started with Apache Pig
  • Exploring Data with Pig
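The MapReduce model introduced on Day 1 can be illustrated with a framework-free word count in plain Python. The three functions below stand in for the mapper, the shuffle/sort step, and the reducer; this is a conceptual sketch, not Hadoop API code:

```python
from collections import defaultdict

def map_phase(documents):
    # Mapper: emit a (word, 1) pair for every word in every input record
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle/sort: group values by key, as the framework does between phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts for each word
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["Hadoop stores data in HDFS", "YARN schedules Hadoop jobs"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'hadoop': 2, 'stores': 1, ...}
```

In real Hadoop the mapper and reducer run in parallel across the cluster and the shuffle moves data over the network, but the per-key logic is the same shape as this sketch.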

Day 2: Advanced Apache Pig Programming and an Introduction to Apache Hive
  • Demonstrate Common Operators Such as:
    • ORDER BY
    • CASE
    • DISTINCT
    • PARALLEL
    • FOREACH
  • Understand how Hive Tables are Defined and Implemented
  • Use Hive to Explore and Analyze Data Sets
  • Explain and Use the Various Hive File Formats
  • Create and Populate a Hive Table that Uses ORC File Formats
  • Use Hive to Run SQL-like Queries to Perform Data Analysis
  • Use Hive to Join Datasets Using a Variety of Techniques
  • Write Efficient Hive Queries
  • Explain the Uses and Purpose of HCatalog
  • Use HCatalog with Pig and Hive
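The Hive objectives above (ORC-backed tables, SQL-like analysis, joins) can be sketched in HiveQL; the table and column names are hypothetical:

```sql
-- Create a Hive table stored in the ORC columnar file format
CREATE TABLE orders (
  order_id    INT,
  customer_id INT,
  amount      DOUBLE
)
STORED AS ORC;

-- Populate it from an existing staging table (assumed to exist)
INSERT INTO TABLE orders
SELECT order_id, customer_id, amount FROM orders_staging;

-- SQL-like analysis with a join: total spend per customer
SELECT c.name, SUM(o.amount) AS total_spend
FROM orders o
JOIN customers c ON o.customer_id = c.customer_id
GROUP BY c.name;
```

Choosing ORC over plain text files matters for query efficiency: it is a compressed, columnar format, so Hive can skip columns and row groups it does not need.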

Day 2 Labs and Demonstrations
  • Splitting a Dataset
  • Joining Datasets
  • Preparing Data for Apache Hive
  • Understanding Apache Hive Tables
  • Demonstration: Understanding Partitions and Skew
  • Analyzing Big Data with Apache Hive
  • Demonstration: Computing Ngrams
  • Joining Datasets in Apache Hive
  • Computing Ngrams of Emails in Avro Format
  • Using HCatalog with Apache Pig
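Using HCatalog from Pig, as in the last lab, amounts to loading a Hive-managed table by name instead of by HDFS path; the table names here are hypothetical:

```pig
-- HCatLoader reads a Hive table's data and schema via the HCatalog metastore
orders = LOAD 'default.orders' USING org.apache.hive.hcatalog.pig.HCatLoader();

-- The schema comes from the metastore, so fields can be referenced by name
totals = FOREACH (GROUP orders BY customer_id)
         GENERATE group AS customer_id, SUM(orders.amount) AS total;

-- HCatStorer writes results back into a Hive table (assumed to exist)
STORE totals INTO 'default.order_totals' USING org.apache.hive.hcatalog.pig.HCatStorer();
```

This is the purpose of HCatalog covered on Day 2: Pig and Hive share one table definition rather than each hard-coding file paths and schemas.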

Day 3: Advanced Apache Hive Programming and an Introduction to Apache Spark
  • Describe How to Perform a Multi-Table/File Insert
  • Define and Use Views
  • Define and Use Clauses and Windows
  • List the Hive File Formats Including:
    • Text Files
    • SequenceFile
    • RCFile
    • ORC File
  • Define Hive Optimization
  • Use Apache Zeppelin to Work with Spark
  • Describe the Purpose and Benefits of Spark
  • Define Spark REPLs and Application Architecture
  • Explain the Purpose and Function of RDDs
  • Explain Spark Programming Basics
  • Define and Use Basic Spark Transformations
  • Define and Use Basic Spark Actions
  • Invoke Functions for Multiple RDDs, Create Named Functions and Use Numeric Operations
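The Spark basics listed for Day 3 — RDDs, transformations, and actions — can be sketched in Spark's Scala shell, the kind of code run inside Zeppelin or the spark-shell REPL. The sample data is illustrative:

```scala
// In spark-shell or a Zeppelin Spark paragraph, a SparkContext `sc` is provided
val lines = sc.parallelize(Seq("hadoop yarn", "hadoop hdfs", "spark core"))

// Transformations are lazy: they describe a computation but run nothing yet
val words  = lines.flatMap(line => line.split(" "))
val pairs  = words.map(word => (word, 1))
val counts = pairs.reduceByKey(_ + _)      // a pair-RDD transformation

// Actions trigger execution and return results to the driver
counts.collect().foreach(println)          // e.g. (hadoop,2), (spark,1), ...
val total = words.count()                  // a numeric action
```

Because transformations are lazy, Spark can plan the whole chain before executing it; only `collect()` and `count()` above actually launch jobs on the cluster.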

Day 3 Labs
  • Advanced Apache Hive Programming
  • Introduction to Apache Spark REPLs and Apache Zeppelin
  • Creating and Manipulating RDDs
  • Creating and Manipulating Pair RDDs
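The advanced Hive lab covers constructs such as views and windowing clauses; a brief HiveQL sketch (table and column names hypothetical):

```sql
-- A view over a base table (assumed to exist)
CREATE VIEW big_orders AS
SELECT customer_id, amount FROM orders WHERE amount > 100.0;

-- A windowing clause: rank each customer's orders by amount
SELECT customer_id,
       amount,
       RANK() OVER (PARTITION BY customer_id ORDER BY amount DESC) AS rnk
FROM orders;
```

Unlike a GROUP BY, the OVER clause keeps every input row and computes the ranking within each customer's partition.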

Day 4: Working with Pair RDDs and Building YARN Applications
  • Define and Create Pair RDDs
  • Perform Common Operations on Pair RDDs
  • Name the Various Components of Spark SQL and Explain their Purpose
  • Describe the Relationship Between DataFrames, Tables and Contexts
  • Use Various Methods to Create and Save DataFrames and Tables
  • Understand Caching, Persisting and the Different Storage Levels
  • Describe and Implement Checkpointing
  • Create an Application to Submit to the Cluster
  • Describe Client vs Cluster Submission with YARN
  • Submit an Application to the Cluster
  • List and Set Important Configuration Items
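Day 4's DataFrame, caching, and YARN-submission topics can be sketched as a small self-contained Spark application; the object name and file path are illustrative:

```scala
import org.apache.spark.sql.SparkSession

object OrdersApp {
  def main(args: Array[String]): Unit = {
    // In a submitted application the session is built explicitly,
    // rather than being provided by a REPL
    val spark = SparkSession.builder().appName("OrdersApp").getOrCreate()

    // Create a DataFrame, cache it, and register it as a temporary view
    val df = spark.read.option("header", "true").csv("/data/orders.csv")
    df.cache()
    df.createOrReplaceTempView("orders")

    // Spark SQL over the registered view
    spark.sql("SELECT COUNT(*) AS n FROM orders").show()

    spark.stop()
  }
}
```

Packaged into a JAR, this would be submitted to the cluster with something like `spark-submit --master yarn --deploy-mode cluster --class OrdersApp app.jar`; using `--deploy-mode client` instead runs the driver on the submitting machine, which is the client-vs-cluster distinction listed above.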

Day 4 Labs
  • Creating and Saving DataFrames and Tables
  • Working with DataFrames
  • Building and Submitting Applications to YARN

Exclusive Videos Included With This Course:
  • How to Load Ambari from Scratch
  • Configuring Local Repositories
  • HDPCD - Big Data Certified Developer Exam Prep
  • HDPCA - Big Data Certified Administrator Exam Prep
  • Free Open Source Components to Solve Big/"ANY" Data Problems
  • Deep Dive: Kafka