
Thursday, December 15, 2016

INTRODUCTION TO HIVE


The term ‘Big Data’ refers to collections of large datasets characterized by huge volume, high velocity, and a wide variety of data, and these collections grow day by day. It is difficult to process Big Data with traditional data management systems, so the Apache Software Foundation introduced a framework called Hadoop to solve the problems of Big Data management and processing.

Hadoop

Hadoop is an open-source framework for storing and processing Big Data in a distributed environment. It contains two core modules: MapReduce and the Hadoop Distributed File System (HDFS).
  • MapReduce: It is a parallel programming model for processing large amounts of structured, semi-structured, and unstructured data on large clusters of commodity hardware.
  • HDFS: The Hadoop Distributed File System is the part of the Hadoop framework used to store the datasets. It provides a fault-tolerant file system that runs on commodity hardware.
The Hadoop ecosystem contains different sub-projects (tools) such as Sqoop, Pig, and Hive that are used to help Hadoop modules.
  • Sqoop: It is used to import and export data between HDFS and relational databases (RDBMS).
  • Pig: It is a procedural language platform used to develop a script for MapReduce operations.
  • Hive: It is a platform used to develop SQL-like scripts that run as MapReduce operations.
Note: There are various ways to execute MapReduce operations:
  • The traditional approach using Java MapReduce program for structured, semi-structured, and unstructured data.
  • The scripting approach for MapReduce to process structured and semi-structured data using Pig.
  • The Hive Query Language (HiveQL or HQL) for MapReduce to process structured data using Hive.

What is Hive

Hive is a data warehouse infrastructure tool to process structured data in Hadoop. It resides on top of Hadoop to summarize Big Data, and makes querying and analyzing easy.
Hive was initially developed by Facebook; later, the Apache Software Foundation took it up and developed it further as open source under the name Apache Hive. It is used by many companies. For example, Amazon uses it in Amazon Elastic MapReduce.

Hive is not

  • A relational database
  • A design for OnLine Transaction Processing (OLTP)
  • A language for real-time queries and row-level updates

Features of Hive

  • It stores the schema in a database and the processed data in HDFS.
  • It is designed for OLAP.
  • It provides an SQL-like query language called HiveQL or HQL (a brief sketch follows this list).
  • It is familiar, fast, scalable, and extensible.
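For illustration, here is a minimal Java sketch (not from the original post) of creating a table and loading data through the HiveServer2 JDBC driver; the host and port, the user, the employee table, and the HDFS file path are all assumptions, and the hive-jdbc driver is assumed to be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HiveDdlExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        Connection con = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "hiveuser", "");
        Statement stmt = con.createStatement();

        // HiveQL looks like SQL: the schema goes into the metastore,
        // while the table data itself lives as files in HDFS.
        stmt.execute("CREATE TABLE IF NOT EXISTS employee "
                + "(id INT, name STRING, salary DOUBLE) "
                + "ROW FORMAT DELIMITED FIELDS TERMINATED BY ','");
        stmt.execute("LOAD DATA INPATH '/user/hive/employee.csv' "
                + "INTO TABLE employee");

        stmt.close();
        con.close();
    }
}

Note that LOAD DATA INPATH does not parse or transform the file; it simply moves it into the table's directory in HDFS, which is why Hive keeps the schema in the metastore and the data in HDFS.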

Architecture of Hive

The following component diagram depicts the architecture of Hive:
Hive Architecture
This component diagram contains different units. The following describes each unit:
  • User Interface: Hive is data warehouse infrastructure software that creates interaction between the user and HDFS. The user interfaces that Hive supports are the Hive Web UI, the Hive command line, and Hive HDInsight (on Windows Server).
  • Metastore: Hive uses a database server to store the schema or metadata of tables, databases, columns in a table, their data types, and the HDFS mapping.
  • HiveQL Process Engine: HiveQL is an SQL-like language for querying the schema information in the metastore. It is one replacement for the traditional approach of writing a MapReduce program in Java: instead, we write a HiveQL query for the MapReduce job and Hive processes it.
  • Execution Engine: The conjunction of the HiveQL process engine and MapReduce is the Hive execution engine. The execution engine processes the query and generates the same results MapReduce would, using MapReduce under the hood.
  • HDFS or HBase: The Hadoop Distributed File System or HBase is the data storage layer where the data itself resides.

Working of Hive

The following diagram depicts the workflow between Hive and Hadoop.
How Hive Works
The following steps define how Hive interacts with the Hadoop framework:
  1. Execute Query: The Hive interface (command line or Web UI) sends the query to the driver (over a database interface such as JDBC or ODBC) to execute (a client-side sketch follows these steps).
  2. Get Plan: The driver takes the help of the query compiler, which parses the query to check the syntax and build the query plan.
  3. Get Metadata: The compiler sends a metadata request to the metastore (any database).
  4. Send Metadata: The metastore sends the metadata to the compiler as a response.
  5. Send Plan: The compiler checks the requirements and resends the plan to the driver. Up to here, the parsing and compiling of the query is complete.
  6. Execute Plan: The driver sends the execution plan to the execution engine.
  7. Execute Job: Internally, the execution of the job is a MapReduce job. The execution engine sends the job to the JobTracker, which resides on the name node, and the JobTracker assigns the job to TaskTrackers, which reside on the data nodes. Here, the query executes as a MapReduce job.
  7.1 Metadata Ops: Meanwhile, during execution, the execution engine can perform metadata operations with the metastore.
  8. Fetch Result: The execution engine receives the results from the data nodes.
  9. Send Results: The execution engine sends those result values to the driver.
  10. Send Results: The driver sends the results to the Hive interfaces.
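A minimal client-side sketch of these steps, assuming a HiveServer2 instance listening on localhost:10000 and the hypothetical employee table from the earlier sketch: the query is handed to the driver over JDBC (step 1), Hive compiles and runs it as a MapReduce job (steps 2 to 7), and the rows come back to the client (steps 8 to 10).

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQueryExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        Connection con = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "hiveuser", "");
        Statement stmt = con.createStatement();

        // Hive compiles this query, runs it on the cluster as MapReduce,
        // and streams the result rows back to the client.
        ResultSet rs = stmt.executeQuery(
                "SELECT name, salary FROM employee WHERE salary > 50000");
        while (rs.next()) {
            System.out.println(rs.getString(1) + "\t" + rs.getDouble(2));
        }

        rs.close();
        stmt.close();
        con.close();
    }
}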

Sunday, December 4, 2016

INTRODUCTION TO APACHE PIG IN HADOOP



What is Apache Pig?

Apache Pig is an abstraction over MapReduce. It is a tool/platform used to analyze large sets of data by representing them as data flows. Pig is generally used with Hadoop; we can perform all the data manipulation operations in Hadoop using Apache Pig.

To write data analysis programs, Pig provides a high-level language known as Pig Latin. This language provides various operators using which programmers can develop their own functions for reading, writing, and processing data.

To analyze data using Apache Pig, programmers need to write scripts using Pig Latin language. All these scripts are internally converted to Map and Reduce tasks. Apache Pig has a component known as Pig Engine that accepts the Pig Latin scripts as input and converts those scripts into MapReduce jobs.
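As a rough illustration (not from the original post), a Pig Latin data flow can also be driven from Java through Pig's PigServer API; the input path, log layout, and output path below are assumptions.

import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

public class PigEngineExample {
    public static void main(String[] args) throws Exception {
        PigServer pig = new PigServer(ExecType.MAPREDUCE);

        // Each registered statement extends the data flow; the Pig Engine
        // only launches MapReduce jobs when a STORE (or DUMP) forces execution.
        pig.registerQuery("logs = LOAD '/data/access_log' USING PigStorage(' ') "
                + "AS (ip:chararray, ts:chararray, url:chararray);");
        pig.registerQuery("by_ip = GROUP logs BY ip;");
        pig.registerQuery("hits = FOREACH by_ip GENERATE group AS ip, COUNT(logs) AS n;");
        pig.store("hits", "/data/hits_per_ip");

        pig.shutdown();
    }
}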

Why Do We Need Apache Pig?

Programmers who are not fluent in Java often struggle to work with Hadoop, especially when writing MapReduce tasks. Apache Pig is a boon for such programmers.

Using Pig Latin, programmers can perform MapReduce tasks easily without having to type complex code in Java.

Apache Pig uses a multi-query approach, thereby reducing the length of code. For example, an operation that would require 200 lines of code (LoC) in Java can often be done in as few as 10 LoC in Apache Pig. Ultimately, Apache Pig can reduce development time by almost 16 times.

Pig Latin is an SQL-like language, and it is easy to learn Apache Pig when you are familiar with SQL.

Apache Pig provides many built-in operators to support data operations like joins, filters, ordering, etc. It also provides nested data types such as tuples, bags, and maps, which are missing from MapReduce.

Features of Pig:-

Apache Pig comes with the following features −

Rich set of operators − It provides many operators to perform operations like join, sort, filter, etc.

Ease of programming − Pig Latin is similar to SQL and it is easy to write a Pig script if you are good at SQL.

Optimization opportunities − The tasks in Apache Pig optimize their execution automatically, so programmers need to focus only on the semantics of the language.

Extensibility − Using the existing operators, users can develop their own functions to read, process, and write data.

UDFs − Pig provides the facility to create User-Defined Functions (UDFs) in other programming languages such as Java and to invoke or embed them in Pig scripts (a minimal sketch follows this list).

Handles all kinds of data − Apache Pig analyzes all kinds of data, both structured as well as unstructured. It stores the results in HDFS.
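As a hedged illustration of the UDF feature mentioned above, here is a minimal Java EvalFunc that upper-cases a chararray field; the class name is hypothetical, and in a real script it would be packaged into a jar, registered with REGISTER, and then invoked like a built-in function.

import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

public class UpperCase extends EvalFunc<String> {
    @Override
    public String exec(Tuple input) throws IOException {
        // Return null for empty or null input so that bad records
        // do not fail the whole job.
        if (input == null || input.size() == 0 || input.get(0) == null) {
            return null;
        }
        return ((String) input.get(0)).toUpperCase();
    }
}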

Apache Pig Vs MapReduce:-


Listed below are the major differences between Apache Pig and MapReduce.
  • Apache Pig is a data flow language. MapReduce is a data processing paradigm.
  • Apache Pig is a high-level language; MapReduce is low-level and rigid.
  • Performing a Join operation in Apache Pig is pretty simple. It is quite difficult in MapReduce to perform a Join operation between datasets.
  • Any novice programmer with a basic knowledge of SQL can work conveniently with Apache Pig. Exposure to Java is a must to work with MapReduce.
  • Apache Pig uses a multi-query approach, thereby reducing the length of the code to a great extent. MapReduce requires almost 20 times as many lines to perform the same task.
  • There is no need for compilation. On execution, every Apache Pig operator is converted internally into a MapReduce job. MapReduce jobs have a long compilation process.

Apache Pig Vs SQL:-

Listed below are the major differences between Apache Pig and SQL.
  • Pig Latin is a procedural language. SQL is a declarative language.
  • In Apache Pig, the schema is optional. We can store data without designing a schema (values are addressed positionally as $0, $1, etc.). The schema is mandatory in SQL.
  • The data model in Apache Pig is nested relational. The data model used in SQL is flat relational.
  • Apache Pig provides limited opportunity for query optimization. There is more opportunity for query optimization in SQL.


In addition to the above differences, Apache Pig Latin:-


  • Allows splits in the pipeline.
  • Allows developers to store data anywhere in the pipeline.
  • Declares execution plans.
  • Provides operators to perform ETL (Extract, Transform, and Load) functions.


Apache Pig Vs Hive:-

Both Apache Pig and Hive are used to create MapReduce jobs, and in some cases Hive operates on HDFS in a way similar to Apache Pig. Listed below are a few significant points that set Apache Pig apart from Hive.
  • Apache Pig uses a language called Pig Latin. It was originally created at Yahoo. Hive uses a language called HiveQL. It was originally created at Facebook.
  • Pig Latin is a data flow language. HiveQL is a query processing language.
  • Pig Latin is a procedural language and it fits in pipeline paradigm. HiveQL is a declarative language.
  • Apache Pig can handle structured, unstructured, and semi-structured data. Hive is mostly for structured data.

Apache Pig is generally used by data scientists for tasks involving ad-hoc processing and quick prototyping. For example, Apache Pig is used −
  • To process huge data sources such as web logs.
  • To perform data processing for search platforms.
  • To process time-sensitive data loads.

Sunday, November 27, 2016

Hadoop - HDFS Overview




The Hadoop Distributed File System (HDFS) was developed using a distributed file system design. It runs on commodity hardware. Unlike other distributed systems, HDFS is highly fault tolerant and designed to run on low-cost hardware.

HDFS holds very large amounts of data and provides easy access. To store such huge data, the files are stored across multiple machines. These files are stored in a redundant fashion to protect the system from possible data loss in case of failure. HDFS also makes applications available for parallel processing.

Features of HDFS:-

  • It is suitable for distributed storage and processing.
  • Hadoop provides a command interface to interact with HDFS.
  • The built-in servers of the name node and data node help users easily check the status of the cluster.
  • It provides streaming access to file system data.
  • HDFS provides file permissions and authentication.

HDFS Architecture:-

Given below is the architecture of a Hadoop File System.


HDFS follows a master-slave architecture and has the following elements.

Namenode:-

The namenode is commodity hardware that runs the GNU/Linux operating system and the namenode software. The system hosting the namenode acts as the master server, and it performs the following tasks:

  • Manages the file system namespace.
  • Regulates clients' access to files.
  • Executes file system operations such as renaming, closing, and opening files and directories.

Datanode:-

The datanode is commodity hardware running the GNU/Linux operating system and the datanode software. For every node (commodity hardware/system) in the cluster, there is a datanode. These nodes manage the data storage of their system.

  • Datanodes perform read-write operations on the file system, as per client requests (see the sketch below).
  • They also perform operations such as block creation, deletion, and replication according to the instructions of the namenode.
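A minimal Java sketch of such client read and write operations using Hadoop's FileSystem API; the paths are assumptions, and the Configuration object is assumed to pick up the cluster settings from core-site.xml and hdfs-site.xml.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Write: the namenode picks the datanodes, and the client
        // streams the bytes to them.
        FSDataOutputStream out = fs.create(new Path("/user/demo/hello.txt"));
        out.writeUTF("hello hdfs");
        out.close();

        // Read: the client asks the namenode for the block locations,
        // then reads the blocks directly from the datanodes.
        FSDataInputStream in = fs.open(new Path("/user/demo/hello.txt"));
        System.out.println(in.readUTF());
        in.close();

        fs.close();
    }
}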
Block:-

Generally, user data is stored in the files of HDFS. A file in the file system is divided into one or more segments, which are stored in individual data nodes. These file segments are called blocks. In other words, the minimum amount of data that HDFS can read or write is called a block. The default block size is 64 MB, but it can be increased as needed by changing the HDFS configuration.
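As a further sketch, the block size can be set cluster-wide through the dfs.blocksize property in hdfs-site.xml, or per file when the file is created through the FileSystem API; the 128 MB value, replication factor, and file path below are assumptions for illustration only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/user/demo/big_file.dat");
        long blockSize = 128L * 1024 * 1024;   // 128 MB instead of the default
        short replication = 3;                 // replication factor
        int bufferSize = 4096;                 // I/O buffer size in bytes

        // create(path, overwrite, bufferSize, replication, blockSize)
        FSDataOutputStream out = fs.create(file, true, bufferSize, replication, blockSize);
        out.close();

        System.out.println("Block size used: " + fs.getFileStatus(file).getBlockSize());
        fs.close();
    }
}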

Goals of HDFS:-

Fault detection and recovery: Since HDFS includes a large number of commodity hardware components, component failure is frequent. Therefore, HDFS should have mechanisms for quick, automatic fault detection and recovery.

Huge datasets: HDFS should scale to hundreds of nodes per cluster to manage applications with huge datasets.

Hardware at data: A requested task can be done efficiently when the computation takes place near the data. Especially where huge datasets are involved, this reduces network traffic and increases throughput.



Sunday, November 20, 2016

Introduction to Big Data Hadoop



Hadoop Architecture

The Hadoop framework includes the following four modules:
  • Hadoop Common: These are the Java libraries and utilities required by the other Hadoop modules. These libraries provide filesystem and OS-level abstractions and contain the necessary Java files and scripts required to start Hadoop.
  • Hadoop YARN: This is a framework for job scheduling and cluster resource management.
  • Hadoop Distributed File System (HDFS™): A distributed file system that provides high-throughput access to application data.
  • Hadoop MapReduce: This is a YARN-based system for parallel processing of large data sets.
We can use the following diagram to depict these four components of the Hadoop framework.
Hadoop Architecture
Since 2012, the term "Hadoop" often refers not just to the base modules mentioned above but also to the collection of additional software packages that can be installed on top of or alongside Hadoop, such as Apache Pig, Apache Hive, Apache HBase, Apache Spark etc.

MapReduce

Hadoop MapReduce is a software framework for easily writing applications that process large amounts of data in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner.
The term MapReduce actually refers to the following two different tasks that Hadoop programs perform:
  • The Map Task: This is the first task, which takes input data and converts it into a set of data, where individual elements are broken down into tuples (key/value pairs).
  • The Reduce Task: This task takes the output from a map task as input and combines those data tuples into a smaller set of tuples. The reduce task is always performed after the map task.
Typically both the input and the output are stored in a file-system. The framework takes care of scheduling tasks, monitoring them and re-executes the failed tasks.
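As a sketch of these two tasks, here is the classic word-count example (not taken from this post): the map task emits (word, 1) pairs and the reduce task sums them per word.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountTasks {

    // Map task: breaks each input line into (word, 1) tuples.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reduce task: combines the tuples for each word into a smaller set (word, total).
    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }
}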
The MapReduce framework consists of a single master JobTracker and one slave TaskTracker per cluster node. The master is responsible for resource management, tracking resource consumption and availability, scheduling the jobs' component tasks on the slaves, monitoring them, and re-executing failed tasks. The slave TaskTrackers execute the tasks as directed by the master and periodically report task status back to the master.
The JobTracker is a single point of failure for the Hadoop MapReduce service, which means that if the JobTracker goes down, all running jobs are halted.

How Does Hadoop Work?

Stage 1

A user/application can submit a job to Hadoop (via a Hadoop job client) for the required processing by specifying the following items:
  1. The location of the input and output files in the distributed file system.
  2. The Java classes, in the form of a jar file, containing the implementation of the map and reduce functions (see the driver sketch below).
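A minimal driver sketch of such a job submission, assuming the WordCountTasks mapper and reducer from the earlier sketch and hypothetical HDFS paths:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");

        // Item 2: the jar and the map/reduce implementations.
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountTasks.TokenizerMapper.class);
        job.setReducerClass(WordCountTasks.SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Item 1: the input and output locations in the distributed file system.
        FileInputFormat.addInputPath(job, new Path("/user/demo/input"));
        FileOutputFormat.setOutputPath(job, new Path("/user/demo/output"));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}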

Sunday, November 13, 2016

Career With Big Data Hadoop


Big Data Hadoop As A Career:-

A question that arises when choosing a career with Hadoop as a fresher concerns job scope, salary packages, and the skills required.

This blog gives a clear picture of Hadoop and a complete answer to all of these questions!

Job Scope:-


Here are some facts from IDC that favor the incredible growth of Hadoop and Big Data:
  • Research firm IDC is predicting a Big Data market that will grow revenue at 31.7 percent a year until it hits the $23.8 billion mark in 2016.
  • An IDC forecast shows that the Big Data technology and services market will grow at a 27% compound annual growth rate (CAGR) to $32.4 billion through 2017 – or at about six times the growth rate of the overall information and communication technology (ICT) market.
  • IDC sections its report and predictions into servers, storage, networking, software, and services, predicting that storage will see the biggest growth, at a 53.4% compound annual growth rate.
According to research by MarketsandMarkets, the worldwide Hadoop and Big Data analytics market is expected to grow to about $13.9 billion by 2017.

Skills Required:-