HDP Developer: Apache Spark 2.3

This Course Includes neXT LIVE 365

LEARN FOR 365 DAYS!


Sunset Learning Institute believes in a 365-day learning experience that begins immediately, regardless of when you attend your ILT course. At SLI, you get a range of learning opportunities, from instructor-led hands-on training to self-directed, customizable learning paths based on your environment, your needs, and your level of experience. We provide the tools and options, and you decide what you need, when you need it, and how you want to learn it!


  • Immediate access to supplemental learning assets that are INCLUDED with your purchase of the above instructor-led training course: 365 Days of Access to SLI’s Entire Big Data Video Reference Library (VRL), not just the class you sign up for (hundreds of searchable, on-demand learning bytes in 5-15-minute videos)
  • 365 Days of Unlimited Access to Delta Sessions - What’s Not Covered in Class! (Version Upgrades, Industry Updates, Etc.)
  • 365 Days of Unlimited 24x7 Access to SLI's Community - Collaborate with SLI Instructors and Other Members (Monitored Daily by SLI Instructors) See Community Demo
  • 365 Days of Unlimited Access to Interactive neXTpertise Sessions and other IT Resources with SLI Instructors (featured hot topics, exam prep, etc.)  See Upcoming neXTpertise Sessions
  • Unlimited Access to Hosted Webinars and All Previously Recorded Sessions
  • Unlimited Access to your Digital Courseware

See Entire Portfolio


  • Benefits: Training that fits your needs (from high intensity to small learning bytes)
  • Build immediate competency - start at time of purchase!
  • Gain know-how and close skills gaps with limited work disruption
  • Get quick answers to daily challenges - live interaction!

Overview

This course introduces the Apache Spark distributed computing engine and is suitable for developers, data analysts, architects, technical managers, and anyone who needs to use Spark in a hands-on manner. It is based on the Spark 2.x release. The course provides a solid technical introduction to the Spark architecture and how Spark works. It covers the basic building blocks of Spark (e.g., RDDs and the distributed compute engine) as well as higher-level constructs that provide a simpler and more capable interface. It includes in-depth coverage of Spark SQL, DataFrames, and DataSets, which are now the preferred programming API, along with possible performance issues and strategies for optimization. The course also covers more advanced capabilities, such as the use of Spark Streaming to process streaming data and integration with the Kafka server.

Target Audience

Software engineers who are looking to develop in-memory, time-sensitive, and highly iterative applications in an enterprise HDP environment.

Prerequisites

Students should be familiar with programming principles and have previous experience in software development using Scala. Previous experience with data streaming, SQL, and HDP is also helpful, but not required.

Course Outline

DAY 1: Scala Ramp Up, Introduction to Spark

OBJECTIVES

  • Scala Introduction
  • Working with: Variables, Data Types, and Control Flow
  • The Scala Interpreter
  • Collections and their Standard Methods (e.g. map())
  • Working with: Functions, Methods, and Function Literals
  • Define the Following as they Relate to Scala: Class, Object, and Case Class
  • Overview, Motivations, Spark Systems
  • Spark Ecosystem
  • Spark vs. Hadoop
  • Acquiring and Installing Spark
  • The Spark Shell, SparkContext

LABS

  • Setting Up the Lab Environment
  • Starting the Scala Interpreter
  • A First Look at Spark
  • A First Look at the Spark Shell
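
The Scala constructs listed above can be previewed in a short sketch. This is not course material, just an illustration of what the Day 1 topics look like in practice (all names here are hypothetical), runnable line by line in the Scala interpreter:

```scala
// Variables: val is immutable, var is mutable
val nums = List(1, 2, 3, 4)          // a standard immutable collection
var total = 0                        // mutable counter

// Collections and their standard methods, using a function literal
val doubled = nums.map(_ * 2)        // List(2, 4, 6, 8)

// A named function (method) and control flow
def describe(n: Int): String =
  if (n % 2 == 0) "even" else "odd"

// Case classes give you a compact constructor, equality, and pattern matching
case class Sensor(id: String, reading: Double)
val s = Sensor("s-1", 21.5)
s match {
  case Sensor(id, r) if r > 20.0 => println(s"$id is warm")
  case _                         => println("nominal")
}
```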

 

DAY 2: RDDs and Spark Architecture, Spark SQL, DataFrames and DataSets

OBJECTIVES

  • RDD Concepts, Lifecycle, Lazy Evaluation
  • RDD Partitioning and Transformations
  • Working with RDDs Including: Creating and Transforming
  • An Overview of RDDs
  • SparkSession, Loading/Saving Data, Data Formats
  • Introducing DataFrames and DataSets
  • Identify Supported Data Formats
  • Working with the DataFrame (untyped) Query DSL
  • SQL-based Queries
  • Working with the DataSet (typed) API
  • Mapping and Splitting
  • DataSets vs. DataFrames vs. RDDs

LABS

  • RDD Basics
  • Operations on Multiple RDDs
  • Data Formats
  • Spark SQL Basics
  • DataFrame Transformations
  • The DataSet Typed API
  • Splitting Up Data
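
As a rough sketch of how the Day 2 topics fit together, the fragment below contrasts an RDD (lazy transformations plus an action), the untyped DataFrame DSL, the typed DataSet API, and a SQL-based query. It assumes a local Spark 2.x installation; the data and names are illustrative only:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("day2-sketch")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

// RDDs are lazily evaluated: map() builds a lineage, count() runs the job
val rdd     = spark.sparkContext.parallelize(Seq(1, 2, 3))
val squares = rdd.map(n => n * n)    // transformation (nothing executes yet)
println(squares.count())             // action triggers execution

// DataSets (typed) vs. DataFrames (untyped)
case class Person(name: String, age: Long)
val ds = Seq(Person("Ana", 34), Person("Bo", 19)).toDS()
ds.filter(_.age > 21).show()           // typed API: checked at compile time
ds.toDF().filter($"age" > 21).show()   // untyped DSL: column expressions
ds.createOrReplaceTempView("people")
spark.sql("SELECT name FROM people WHERE age > 21").show()  // SQL path
```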

 

DAY 3: Shuffling, Transformations and Performance, Performance Tuning

OBJECTIVES

  • Working with: Grouping, Reducing, Joining
  • Shuffling, Narrow vs. Wide Dependencies, and Performance Implications
  • Exploring the Catalyst Query Optimizer
  • The Tungsten Execution Engine
  • Discuss Caching, Including: Concepts, Storage Levels, Guidelines
  • Minimizing Shuffling for Increased Performance
  • Using Broadcast Variables and Accumulators
  • General Performance Guidelines

LABS

  • Exploring Group Shuffling
  • Seeing Catalyst at Work
  • Seeing Tungsten at Work
  • Working with Caching, Joins, Shuffles, Broadcasts, Accumulators
  • Broadcast General Guidelines
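
The Day 3 performance topics can be sketched in a few lines. This is an illustrative fragment (assuming a local Spark 2.x session; names are made up), not the course lab code:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

val spark = SparkSession.builder().appName("day3-sketch").master("local[*]").getOrCreate()
val sc = spark.sparkContext

val pairs = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 3)))

// reduceByKey combines values locally before the shuffle,
// moving far less data than groupByKey, which ships every record
val sums = pairs.reduceByKey(_ + _)

// Cache a reused dataset with an explicit storage level
sums.persist(StorageLevel.MEMORY_ONLY)

// Broadcast a small lookup table to every executor
// instead of shuffling it into a join
val lookup = sc.broadcast(Map("a" -> "alpha", "b" -> "beta"))
val named  = sums.map { case (k, v) => (lookup.value.getOrElse(k, k), v) }

// Accumulator: a counter written on executors, read back on the driver
val negatives = sc.longAccumulator("negatives")
named.foreach { case (_, v) => if (v < 0) negatives.add(1) }
println(negatives.value)
```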

 

DAY 4: Creating Standalone Applications and Spark Streaming

OBJECTIVES

  • Core API, SparkSession.Builder
  • Configuring and Creating a SparkSession
  • Building and Running Applications
  • Application Lifecycle (Driver, Executors, and Tasks)
  • Cluster Managers (Standalone, YARN, Mesos)
  • Logging and Debugging
  • Introduction and Streaming Basics
  • Spark Streaming (Spark 1.0+)
  • Structured Streaming (Spark 2+)
  • Consuming Kafka Data

LABS

  • Spark Job Submission
  • Additional Spark Capabilities
  • Spark Streaming
  • Spark Structured Streaming
  • Spark Structured Streaming with Kafka
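
A minimal Structured Streaming sketch of the kind of pipeline Day 4 covers, consuming Kafka data and printing each micro-batch to the console. It assumes a Kafka broker at localhost:9092, a hypothetical topic named "events", and the spark-sql-kafka-0-10 package on the classpath:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("day4-sketch")
  .master("local[*]")
  .getOrCreate()

// Read from Kafka as an unbounded, continuously updated DataFrame
val stream = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "events")       // hypothetical topic name
  .load()

// Kafka rows carry binary key/value columns; cast value to a string
val lines = stream.selectExpr("CAST(value AS STRING)")

val query = lines.writeStream
  .format("console")                   // print each micro-batch
  .outputMode("append")
  .start()

query.awaitTermination()               // run until stopped
```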
