
Overview

Course Description

This is a comprehensive Big Data Hadoop training course designed by industry experts around current job requirements to help you learn the Big Data Hadoop and Spark modules. It is an industry-recognized Big Data Hadoop certification training course that combines the training courses in Hadoop development, Hadoop administration, Hadoop testing, and analytics with Apache Spark. This Cloudera Hadoop and Spark training will prepare you to clear the Cloudera CCA175 Big Data certification exam.

Course Content

Module 01 - Hadoop Installation and Setup

1.1 The architecture of Hadoop cluster
1.2 What is High Availability and Federation?
1.3 How to set up a production cluster?
1.4 Various shell commands in Hadoop
1.5 Understanding configuration files in Hadoop
1.6 Installing a single node cluster with Cloudera Manager
1.7 Understanding Spark, Scala, Sqoop, Pig, and Flume

Module 02 - Introduction to Big Data Hadoop and Understanding HDFS and MapReduce

2.1 Introducing Big Data and Hadoop
2.2 What is Big Data and where does Hadoop fit in?
2.3 Two important Hadoop ecosystem components, namely, MapReduce and HDFS
2.4 In-depth Hadoop Distributed File System – Replications, Block Size, Secondary Name node, High Availability and in-depth YARN – resource manager and node manager
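
The storage arithmetic behind 2.4 is easy to sketch: HDFS splits every file into fixed-size blocks and stores each block on several DataNodes. A minimal Python sketch, assuming the stock defaults (128 MB blocks, replication factor 3):

```python
import math

BLOCK_SIZE = 128 * 1024 * 1024   # default HDFS block size: 128 MB
REPLICATION = 3                  # default replication factor

def hdfs_footprint(file_size_bytes):
    """Number of blocks a file occupies and the raw storage it consumes."""
    blocks = max(1, math.ceil(file_size_bytes / BLOCK_SIZE))
    raw_bytes = file_size_bytes * REPLICATION   # each block is stored 3 times
    return blocks, raw_bytes

# a 1 GB file -> 8 blocks, 3 GB of raw cluster storage
blocks, raw = hdfs_footprint(1024 * 1024 * 1024)
```

Both constants are configurable per cluster (dfs.blocksize, dfs.replication), which is why capacity planning always starts from these two numbers.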

Module 03 - Deep Dive in MapReduce

3.1 Learning the working mechanism of MapReduce
3.2 Understanding the mapping and reducing stages in MR
3.3 Various terminologies in MR like Input Format, Output Format, Partitioners, Combiners, Shuffle, and Sort
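
The map, shuffle-and-sort, and reduce stages above can be imitated with plain Python collections. This is a conceptual sketch of the classic word count, not Hadoop's actual Java API:

```python
from collections import defaultdict

def mapper(line):
    # map stage: emit a (word, 1) pair for every token
    return [(w.lower(), 1) for w in line.split()]

def shuffle(pairs):
    # shuffle & sort: group all values by key, as the framework
    # does between the map and reduce stages
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reducer(word, counts):
    # reduce stage: aggregate the grouped values for one key
    return word, sum(counts)

lines = ["big data big ideas", "big clusters"]
mapped = [p for line in lines for p in mapper(line)]
result = dict(reducer(k, v) for k, v in shuffle(mapped).items())
# result == {"big": 3, "data": 1, "ideas": 1, "clusters": 1}
```

A Combiner would simply run the reducer logic on each mapper's local output before the shuffle, cutting network traffic.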

Module 04 - Introduction to Hive

4.1 Introducing Hadoop Hive
4.2 Detailed architecture of Hive
4.3 Comparing Hive with Pig and RDBMS
4.4 Working with Hive Query Language
4.5 Creation of a database, table, group by and other clauses
4.6 Various types of Hive tables, HCatalog
4.7 Storing the Hive Results, Hive partitioning, and Buckets
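
Bucketing (4.7) assigns each row to one of a fixed number of files by hashing the clustering column. A rough Python sketch of the idea; Hive uses its own hash function, so Python's `hash()` here is only a stand-in:

```python
NUM_BUCKETS = 4   # as in: CLUSTERED BY (user_id) INTO 4 BUCKETS

def bucket_for(user_id):
    # Hive-style bucketing: hash the clustering column modulo the
    # bucket count (Hive's real hash differs from Python's hash()).
    return hash(user_id) % NUM_BUCKETS

rows = [(uid, f"event-{uid}") for uid in range(10)]   # invented sample rows
buckets = {b: [] for b in range(NUM_BUCKETS)}
for uid, payload in rows:
    buckets[bucket_for(uid)].append(payload)
```

Partitioning, by contrast, creates one directory per distinct column value; bucketing keeps the file count fixed regardless of cardinality.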

Module 05 - Advanced Hive and Impala

5.1 Indexing in Hive
5.2 The Map-Side Join in Hive
5.3 Working with complex data types
5.4 The Hive user-defined functions
5.5 Introduction to Impala
5.6 Comparing Hive with Impala
5.7 The detailed architecture of Impala
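
The map-side join covered in 5.2 avoids a shuffle by loading the small table into every mapper's memory and probing it record by record. A toy Python illustration (the table contents are invented):

```python
# Small dimension table, loaded fully into memory on every mapper
countries = {"IN": "India", "US": "United States", "DE": "Germany"}

# Large fact table, streamed record by record (no shuffle needed)
orders = [("o1", "IN"), ("o2", "US"), ("o3", "IN")]

def map_side_join(orders, small_table):
    # each record is joined locally against the in-memory hash table
    return [(oid, small_table.get(code, "UNKNOWN")) for oid, code in orders]

joined = map_side_join(orders, countries)
# joined == [("o1", "India"), ("o2", "United States"), ("o3", "India")]
```

This is exactly why the technique only applies when one side of the join fits in a single task's memory.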

Module 06 - Introduction to Pig

6.1 Apache Pig introduction and its various features
6.2 Various data types and schemas in Pig
6.3 The available functions in Pig, Pig Bags, Tuples, and Fields
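
Pig's data model, where a relation is a bag of tuples and a tuple's positional values are fields, can be mimicked with Python lists and tuples. A sketch of what `grouped = GROUP students BY dept;` produces (the `students` data is invented):

```python
from collections import defaultdict

# a relation: a bag of tuples, each tuple holding positional fields
students = [("alice", "cs", 85), ("bob", "cs", 72), ("cara", "math", 91)]

def group_by_dept(relation):
    # GROUP ... BY builds, for each key, a bag of the whole original tuples
    bags = defaultdict(list)
    for t in relation:
        bags[t[1]].append(t)      # field $1 is the department
    return dict(bags)

grouped = group_by_dept(students)
# grouped["cs"] is a bag holding two complete tuples
```

Note that, unlike SQL's GROUP BY, Pig's GROUP keeps the full tuples inside the bag; aggregation happens later in a FOREACH.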

Module 07 - Flume, Sqoop and HBase

7.1 Apache Sqoop introduction
7.2 Importing and exporting data
7.3 Performance improvement with Sqoop
7.4 Sqoop limitations
7.5 Introduction to Flume and understanding the architecture of Flume
7.6 What is HBase and the CAP theorem?
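
For the import performance topics (7.2, 7.3): Sqoop parallelizes an import by slicing the range between the `--split-by` column's minimum and maximum into one slice per mapper. A simplified Python sketch of that boundary arithmetic, not Sqoop's exact implementation:

```python
def sqoop_splits(min_id, max_id, num_mappers):
    """Rough sketch of how a numeric --split-by range becomes per-mapper slices."""
    size = (max_id - min_id + 1) / num_mappers
    splits = []
    for i in range(num_mappers):
        lo = min_id + round(i * size)
        hi = min_id + round((i + 1) * size) - 1
        splits.append((lo, hi))
    splits[-1] = (splits[-1][0], max_id)   # last slice absorbs any rounding
    return splits

# ids 1..1000 across 4 mappers -> (1,250), (251,500), (501,750), (751,1000)
parts = sqoop_splits(1, 1000, 4)
```

This also explains a Sqoop limitation from 7.4: a skewed or non-uniform split column produces unbalanced mappers.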

Module 08 - Writing Spark Applications Using Scala

8.1 Using Scala for writing Apache Spark applications
8.2 Detailed study of Scala
8.3 The need for Scala
8.4 The concept of object-oriented programming
8.5 Executing the Scala code
8.6 Various classes in Scala like getters, setters, constructors, abstract, extending objects, overriding methods
8.7 The Java and Scala interoperability
8.8 The concept of functional programming and anonymous functions
8.9 The Bobsrockets package example and comparing mutable and immutable collections
8.10 Scala REPL, Lazy Values, Control Structures in Scala, Directed Acyclic Graph (DAG), first Spark application using SBT/Eclipse, Spark Web UI, Spark in Hadoop ecosystem.

Module 09 - Spark framework

9.1 Detailed Apache Spark and its various features
9.2 Comparing with Hadoop
9.3 Various Spark components
9.4 Combining HDFS with Spark and Scalding
9.5 Introduction to Scala
9.6 Importance of Scala and RDD

Module 10 - RDD in Spark

10.1 Understanding the Spark RDD operations
10.2 Comparison of Spark with MapReduce
10.3 What is a Spark transformation?
10.4 Loading data in Spark
10.5 Types of RDD operations viz. transformation and action
10.6 What is a Key/Value pair?
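
The transformation-versus-action split (10.3, 10.5) comes down to laziness: a transformation only records a step in the lineage, and nothing runs until an action asks for results. A toy Python stand-in for an RDD that mimics this behavior (this is an illustration, not Spark's API):

```python
class MiniRDD:
    """Toy stand-in for a Spark RDD: transformations are lazy, actions compute."""
    def __init__(self, data, ops=()):
        self._data, self._ops = data, list(ops)

    # transformations: just record the step, nothing runs yet
    def map(self, f):
        return MiniRDD(self._data, self._ops + [("map", f)])

    def filter(self, f):
        return MiniRDD(self._data, self._ops + [("filter", f)])

    # action: only now is the recorded pipeline actually executed
    def collect(self):
        out = iter(self._data)
        for kind, f in self._ops:
            out = map(f, out) if kind == "map" else filter(f, out)
        return list(out)

rdd = MiniRDD(range(10)).map(lambda x: x * x).filter(lambda x: x % 2 == 0)
# nothing has been computed yet; collect() triggers evaluation
result = rdd.collect()   # [0, 4, 16, 36, 64]
```

The recorded list of ops plays the role of Spark's lineage graph, which is also what lets Spark recompute lost partitions instead of replicating them.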

Module 11 - Data Frames and Spark SQL

11.1 The detailed Spark SQL
11.2 The significance of SQL in Spark for working with structured data processing
11.3 Spark SQL JSON support
11.4 Working with XML data and parquet files
11.5 Creating Hive Context
11.6 Writing Data Frame to Hive
11.7 How to read a JDBC file?
11.8 Significance of a Spark data frame
11.9 How to create a data frame?
11.10 What is schema manual inferring?
11.11 Work with CSV files, JDBC table reading, data conversion from Data Frame to JDBC, Spark SQL user-defined functions, shared variable, and accumulators
11.12 How to query and transform data in Data Frames?
11.13 How Data Frames provide the benefits of both Spark RDD and Spark SQL
11.14 Deploying Hive on Spark as the execution engine
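
Schema inference (11.10) can be pictured as sampling each column and trying progressively wider types until one fits. A simplified Python sketch of the idea; Spark's CSV reader is far more elaborate:

```python
import csv, io

def infer_type(values):
    """Try the narrowest type first, widen on failure: int -> double -> string."""
    for cast, name in ((int, "int"), (float, "double")):
        try:
            for v in values:
                cast(v)
            return name
        except ValueError:
            continue
    return "string"

# invented sample data standing in for a CSV file on HDFS
raw = "id,price,city\n1,9.99,Pune\n2,12.50,Delhi\n"
rows = list(csv.DictReader(io.StringIO(raw)))
schema = {col: infer_type([r[col] for r in rows]) for col in rows[0]}
# schema == {"id": "int", "price": "double", "city": "string"}
```

When inference is too slow or too loose, the alternative covered in this module is declaring the schema manually before the read.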

Module 12 - Machine Learning Using Spark (MLlib)

12.1 Introduction to Spark MLlib
12.2 Understanding various algorithms
12.3 What are Spark iterative algorithms?
12.4 Spark graph processing analysis
12.5 Introducing Machine Learning
12.6 K-Means clustering
12.7 Spark variables like shared and broadcast variables
12.8 What are accumulators?
12.9 Various ML algorithms supported by MLlib
12.10 Linear regression, logistic regression, decision tree, random forest, and K-means clustering techniques
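
K-Means clustering (12.6, 12.10) alternates between assigning each point to its nearest center and moving each center to its cluster's mean. A minimal single-machine Python sketch on 1-D data with invented numbers, not MLlib's distributed implementation:

```python
def kmeans_1d(points, centers, iterations=10):
    """Plain k-means on 1-D data: assign to nearest center, then recenter."""
    for _ in range(iterations):
        clusters = {c: [] for c in centers}
        for p in points:                       # assignment step
            nearest = min(centers, key=lambda c: abs(c - p))
            clusters[nearest].append(p)
        centers = [                            # update step
            sum(pts) / len(pts) if pts else c
            for c, pts in clusters.items()
        ]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.0, 9.5, 10.0]
centers = kmeans_1d(data, centers=[0.0, 5.0])
# converges to roughly [1.0, 9.5]
```

The repeated passes over the same data are exactly why k-means is the module's canonical "iterative algorithm": Spark caches the dataset in memory instead of rereading it from disk each pass, as MapReduce would.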

Module 13 - Integrating Apache Flume and Apache Kafka

13.1 Why Kafka?
13.2 What is Kafka?
13.3 Kafka architecture
13.4 Kafka workflow
13.5 Configuring Kafka cluster
13.6 Basic operations
13.7 Kafka monitoring tools
13.8 Integrating Apache Flume and Apache Kafka
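
A key piece of the Kafka workflow (13.4) is how records are routed: the default partitioner hashes the record key, so all messages with the same key land in the same partition and keep their relative order. A toy Python sketch; Kafka really uses a murmur2 hash, so `hash()` is only a stand-in:

```python
NUM_PARTITIONS = 3

def partition_for(key):
    # default-partitioner idea: hash the record key so every message
    # with the same key lands in the same partition
    # (Kafka uses murmur2; Python's hash() stands in here)
    return hash(key) % NUM_PARTITIONS

log = {p: [] for p in range(NUM_PARTITIONS)}   # one append-only log per partition
for key, value in [("user-1", "login"), ("user-2", "click"), ("user-1", "logout")]:
    log[partition_for(key)].append((key, value))

# "login" is guaranteed to precede "logout" within user-1's partition
```

Ordering is only guaranteed within a partition, never across the whole topic, which is why the choice of key matters.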

Module 14 - Spark Streaming

14.1 Introduction to Spark streaming
14.2 The architecture of Spark streaming
14.3 Working with the Spark streaming program
14.4 Processing data using Spark streaming
14.5 Requesting count and DStream
14.6 Multi-batch and sliding window operations
14.7 Working with advanced data sources
14.8 Features of Spark streaming
14.9 Spark Streaming workflow
14.10 Initializing StreamingContext
14.11 Discretized Streams (DStreams)
14.12 Input DStreams and Receivers
14.13 Transformations on DStreams
14.14 Output Operations on DStreams
14.15 Windowed operators and its uses
14.16 Important Windowed operators and Stateful operators
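
The windowed operators (14.6, 14.15) aggregate over the last N micro-batches and recompute every slide interval. A plain-Python sketch of a sliding-window count over micro-batches (the batch contents are invented; real DStreams do this incrementally):

```python
def windowed_counts(batches, window_length, slide_interval):
    """Sliding-window aggregation over micro-batches, DStream-style."""
    results = []
    for end in range(slide_interval, len(batches) + 1, slide_interval):
        window = batches[max(0, end - window_length):end]
        results.append(sum(len(b) for b in window))   # count over the window
    return results

# one micro-batch per interval; events per batch: 1, 2, 0, 3, 1
batches = [["a"], ["b", "c"], [], ["d", "e", "f"], ["g"]]
# window = last 3 batches, sliding forward every 2 batches
counts = windowed_counts(batches, window_length=3, slide_interval=2)
# counts == [3, 5]
```

Both the window length and the slide interval must be multiples of the batch interval in real Spark Streaming.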

Module 15 - Hadoop Administration – Multi-node Cluster Setup Using Amazon EC2

15.1 Create a 4-node Hadoop cluster setup
15.2 Running the MapReduce Jobs on the Hadoop cluster
15.3 Successfully running the MapReduce code
15.4 Working with the Cloudera Manager setup

Module 16 - Hadoop Administration – Cluster Configuration

16.1 Overview of Hadoop configuration
16.2 The importance of Hadoop configuration files
16.3 The various parameters and values of configuration
16.4 The HDFS parameters and MapReduce parameters
16.5 Setting up the Hadoop environment
16.6 The Include and Exclude configuration files
16.7 The administration and maintenance of name node, data node directory structures, and files
16.8 What is a File system image?
16.9 Understanding Edit log
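
All the Hadoop configuration files discussed in this module (core-site.xml, hdfs-site.xml, and so on) share one `<configuration>`/`<property>` XML layout. A small Python sketch that parses it; the two property values shown are just the common defaults:

```python
import xml.etree.ElementTree as ET

# the layout shared by core-site.xml, hdfs-site.xml, mapred-site.xml, ...
HDFS_SITE = """
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value>
  </property>
</configuration>
"""

def load_conf(xml_text):
    # flatten the <property> entries into a simple name -> value mapping
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.findall("property")}

conf = load_conf(HDFS_SITE)
# conf["dfs.replication"] == "3"
```

Values are always strings in the file; Hadoop itself casts them to the expected type when the daemon reads its configuration.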

Module 17 - Hadoop Administration – Maintenance, Monitoring and Troubleshooting

17.1 Introduction to the checkpoint procedure, name node failure
17.2 Ensuring the recovery procedure, Safe Mode, metadata and data backup, various potential problems and their solutions, what to look for, and how to add and remove nodes
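
The checkpoint and recovery idea in 17.1 and 17.2 can be sketched simply: on restart, the NameNode loads the last fsimage snapshot and replays the edit log on top of it to rebuild the current namespace. A toy Python illustration with invented paths:

```python
# NameNode recovery sketch: last checkpoint (fsimage) + edit-log replay
fsimage = {"/data/a.txt", "/data/b.txt"}          # snapshot at checkpoint time
edit_log = [                                      # operations logged since then
    ("create", "/data/c.txt"),
    ("delete", "/data/a.txt"),
    ("create", "/logs/d.txt"),
]

def recover(fsimage, edit_log):
    namespace = set(fsimage)
    for op, path in edit_log:                     # replay edits in order
        if op == "create":
            namespace.add(path)
        elif op == "delete":
            namespace.discard(path)
    return namespace

namespace = recover(fsimage, edit_log)
# namespace == {"/data/b.txt", "/data/c.txt", "/logs/d.txt"}
```

The checkpoint procedure exists precisely to keep the edit log short, so this replay stays fast after a NameNode failure.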

Module 18 - ETL Connectivity with Hadoop Ecosystem 

18.1 How do ETL tools work in the Big Data industry?
18.2 Introduction to ETL and data warehousing
18.3 Working with prominent use cases of Big Data in ETL industry
18.4 End-to-end ETL PoC showing Big Data integration with ETL tool

Module 19 - Project Solution Discussion and Cloudera Certification Tips and Tricks

19.1 Working through the solution of the Hadoop project
19.2 Its problem statements and the possible solution outcomes
19.3 Preparing for the Cloudera certifications
19.4 Points to focus on for scoring the highest marks
19.5 Tips for cracking Hadoop interview questions

Module 20 - Hadoop Application Testing

20.1 Importance of testing
20.2 Unit testing, Integration testing, Performance testing, Diagnostics, Nightly QA test, Benchmark and end-to-end tests, Functional testing, Release certification testing, Security testing, Scalability testing, Commissioning and Decommissioning of data nodes testing, Reliability testing, and Release testing

Module 21 - Roles and Responsibilities of Hadoop Testing Professional

21.1 Understanding the Requirement
21.2 Preparation of the Testing Estimation
21.3 Test Cases, Test Data, Test Bed Creation, Test Execution, Defect Reporting, Defect Retest, Daily Status report delivery, and Test completion; ETL testing at every stage (HDFS, Hive, and HBase) while loading the input (logs, files, records, etc.) using Sqoop/Flume, which includes but is not limited to data verification, Reconciliation, and User Authorization and Authentication testing (Groups, Users, Privileges, etc.); reporting defects to the development team or manager and driving them to closure
21.4 Consolidating all the defects and creating defect reports
21.5 Validating new features and issues in Core Hadoop

Module 22 - Framework Called MRUnit for Testing of MapReduce Programs

22.1 Reporting defects to the development team or manager and driving them to closure
22.2 Consolidating all the defects and creating defect reports
22.3 Using the MRUnit framework for testing MapReduce programs
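
MRUnit itself is a Java library, but its pattern, driving the mapper or reducer with one input and asserting the exact output, translates directly. A hedged sketch of the same given/expect style using Python's unittest (the word-count functions are stand-ins for the units under test):

```python
import unittest

def wordcount_mapper(line):
    # the unit under test: emit a (word, 1) pair per token
    return [(w, 1) for w in line.split()]

def wordcount_reducer(key, values):
    # the unit under test: sum the grouped counts for one key
    return (key, sum(values))

class WordCountTest(unittest.TestCase):
    def test_mapper(self):
        # MRUnit MapDriver style: one input record, assert the exact emitted pairs
        self.assertEqual(wordcount_mapper("big data"),
                         [("big", 1), ("data", 1)])

    def test_reducer(self):
        # MRUnit ReduceDriver style: one key with its grouped values
        self.assertEqual(wordcount_reducer("big", [1, 1, 1]), ("big", 3))
```

Run with `python -m unittest`; the point, as with MRUnit, is testing map and reduce logic in isolation, without a running cluster.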

Module 23 - Unit Testing

23.1 Automation testing using Oozie
23.2 Data validation using the QuerySurge tool

Module 24 - Test Execution

24.1 Test plan for HDFS upgrade
24.2 Test automation and result

 

Student feedback

10 Reviews

  • 5 stars: 8
  • 4 stars: 1
  • 3 stars: 0
  • 2 stars: 0
  • 1 star: 0

Course Rating: 5 out of 5


Pawan Sharma

Well-Structured course

The course is well organized and simple to follow. The trainer is clear and concise in explaining all the technologies covered in this course. A go-to course for anyone stepping into Hadoop. Strongly recommend.



Akshay Kumawat

Excellent tutorials

So far, the course is very informative and is helping me validate what I already knew about Big Data with Hadoop while showing me several new (for me) features. Excellent course to begin with for anyone interested.



Khushi Suthar

Perfect Course

This was a great overview of the technologies and how they work together; you'll definitely need something more in-depth if you are after targeted training on specific technologies. As an overview, it was EXCELLENT! Thank you, SparkAcademy!!



Jayesh Parashar

Cleared All Concepts

The concepts covered in this course are very helpful. The course was very detailed, and the trainer made sure to explain all the concepts needed for Hadoop. Very nice explanation!! Glad I took this course.



Ritesh Wavhale

Best Course

THAT'S UNDOUBTEDLY THE BEST COURSE ON HADOOP YOU FIND ON THE INTERNET. GO FOR IT. THE CALM AND SOOTHING VOICE OF THE TRAINER IS THE MAIN THING THAT I LIKE THE MOST ABOUT THIS COURSE. AMAZING.



Manvi Sharma

Helpful Course

Course material is really helpful. The support team is awesome and very helpful in clarifying the doubts. Thank you SparkAcademy.



Saurabh Kumar

Great Training

It was an amazing session. Thanks to the trainer for sharing his knowledge.



Laxman Rathi

Well-constructed training

SparkAcademy has provided a valuable Big Data course, as it has allowed me to enhance my knowledge of Big Data and given me the opportunity to work with experienced industry professionals. I appreciate the tutor's in-depth knowledge and the help and support provided by SparkAcademy. After the certification, I was able to secure a role change.



Priyanka Kapoor

Best Course

One of the best trainings I have ever attended. Both the trainer's knowledge and patience are highly appreciated.



Shanaya Singh

Loved the training

My trainer has been the best trainer throughout the session. He took ample time to explain the course content and ensured that the class understands the concepts. He's undoubtedly one of the best in the industry. I'm delighted to have attended his sessions.



Course Features

  • Hadoop
  • MapReduce
  • Hive
  • Pig
  • Spark
  • Scala
  • Sqoop
  • HCatalog
  • AVRO
  • Scala REPL
  • SBT/Eclipse
  • Apache Kafka
  • Spark Streaming
  • Impala
  • Apache Flume