
INSTRUCTOR-LED COURSE

Cloudera Data Scientist Training (DATA-SCI-TRAIN)

Course Information

Duration: 4 days

Version: DATA-SCI-TRAIN

Price: $3,195.00


ALL DATES GUARANTEED

Check out our full list of training locations and learning formats. Please note that the location you choose may be an Established HD-ILT location with a virtual live instructor.

COURSE DELIVERY OPTIONS

  • Live Classroom

Train face-to-face with the live instructor.

  • Established HD-ILT Location

Interact with a live, remote instructor from a specialized, HD-equipped classroom near you.

  • Virtual Remote

Attend the live class from the comfort of your home or office.


OVERVIEW

This four-day workshop covers data science and machine learning workflows at scale using Apache Spark 2 and other key components of the Hadoop ecosystem. It emphasizes the use of data science and machine learning methods to address real-world business challenges. Using scenarios and datasets from a fictional technology company, students discover insights to support critical business decisions and develop data products to transform the business. The material is presented through a sequence of brief lectures, interactive demonstrations, extensive hands-on exercises, and discussions. The Apache Spark demonstrations and exercises are conducted in Python (with PySpark) and R (with sparklyr) using the Cloudera Data Science Workbench (CDSW) environment.

Prerequisites:

Workshop participants should have a basic understanding of Python or R and some experience exploring and analyzing data and developing statistical or machine learning models. Knowledge of Hadoop or Spark is not required.

 

Target Audience:

The workshop is designed for data scientists who currently use Python or R to work with smaller datasets on a single machine and who need to scale up their analyses and machine learning models to large datasets on distributed clusters. Data engineers and developers with some knowledge of data science and machine learning may also find this workshop useful.

 

Course Objectives:

  • Overview of data science and machine learning at scale
  • Overview of the Hadoop ecosystem
  • Working with HDFS data and Hive tables using Hue
  • Introduction to Cloudera Data Science Workbench
  • Overview of Apache Spark 2
  • Reading and writing data
  • Inspecting data quality
  • Cleansing and transforming data
  • Summarizing and grouping data
  • Combining, splitting, and reshaping data
  • Exploring data
  • Configuring, monitoring, and troubleshooting Spark applications
  • Overview of machine learning in Spark MLlib
  • Extracting, transforming, and selecting features
  • Building and evaluating regression models
  • Building and evaluating classification models
  • Building and evaluating clustering models
  • Cross-validating models and tuning hyperparameters
  • Building machine learning pipelines
  • Deploying machine learning models
 

Technologies Used:

  • Spark, Spark SQL, and Spark MLlib
  • PySpark and sparklyr
  • Cloudera Data Science Workbench (CDSW)
  • Hue

 

Course Outline:

Overview of CDSW

  • Introduction to CDSW
  • Who Can Use CDSW
  • How to Access CDSW
  • Navigating around CDSW
  • User Settings
  • Hadoop Authentication

Projects in CDSW

  • Creating a New Project
  • Navigating around a Project
  • Project Settings

The CDSW Workbench Interface

  • Using the Workbench
  • Using the Sidebar
  • Using the Code Editor
  • Engines and Sessions
  • Running Python and R Code in CDSW
  • Running Code
  • Using the Session Prompt
  • Using the Terminal
  • Installing Packages
  • Using Markdown in Comments

Using Apache Spark 2 in CDSW

  • Scenario and Dataset
  • Copying Files to HDFS
  • Interfaces to Apache Spark 2
  • Connecting to Spark
  • Reading Data
  • Inspecting Data

Data Science and Machine Learning in CDSW

  • Transforming Data
  • Using SQL Queries
  • Visualizing Data from Spark
  • Machine Learning with MLlib
  • Session History

Experiments and Models in CDSW

  • Machine Learning Workflow
  • Running Experiments
  • Using Packages in Experiments
  • Deploying Models
  • Calling Models
  • Using Packages in Models
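CDSW serves a deployed model by calling a designated Python function with JSON-style arguments. A hedged sketch of what such a scoring function can look like follows; the function name, input schema, and coefficients are all hypothetical, and no CDSW-specific API is used.

```python
# Hypothetical CDSW-style scoring function: takes and returns
# JSON-serializable values. The coefficients below are made up.
def predict(args):
    """Score one ride: args is a dict like {"distance_km": 5.2}."""
    intercept, rate = 2.0, 2.2  # hypothetical fitted coefficients
    fare = intercept + rate * float(args["distance_km"])
    return {"fare": round(fare, 2)}
```

Calling the deployed model then amounts to sending a JSON body such as `{"distance_km": 5.0}` to the model's endpoint and receiving the function's return value back as JSON.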

Teams and Collaboration in CDSW

  • Collaboration in CDSW
  • Teams in CDSW
  • Using Git for Collaboration
  • Conclusion

 

 
