Introduction to Hive

Apache Hive is often described as a data warehouse infrastructure built on top of Apache Hadoop. Originally developed at Facebook to query the roughly 20 TB of data arriving each day, it is now widely used for ad-hoc querying and analysis over large data sets stored in file systems such as HDFS (Hadoop Distributed File System), without requiring any knowledge of MapReduce. The best part of Hive is that queries are implicitly converted into efficient chains of MapReduce jobs by the Hive engine.
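
As a quick illustration of that implicit conversion, Hive's EXPLAIN statement prints the stage plan (including the MapReduce stages) that a query would execute. A minimal sketch, using the transaction table introduced later in this article:

    -- Show the execution plan Hive generates for an aggregation query
    EXPLAIN SELECT item, SUM(sales) FROM transaction GROUP BY item;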

Features of Hive:

  • Supports different storage types such as plain text, CSV, Apache HBase, and others
  • Data modeling operations such as creating databases, tables, etc.
  • Easy to code; uses a SQL-like query language called HiveQL
  • ETL functionality: extracting, transforming, and loading data into tables, coupled with joins, partitions, etc.
  • Ships with built-in User Defined Functions (UDFs) for manipulating dates, strings, and other data types
  • Presents unstructured data in a tabular form, regardless of its underlying layout
  • Plug-in capabilities for custom mappers, reducers, and UDFs
  • Enhanced querying on Hadoop

Use Cases of Hive:

  • Text mining — overlaying a convenient structure on unstructured data and analyzing it with MapReduce
  • Document indexing — assigning tags to documents for easier retrieval
  • Business queries — querying large volumes of historic data to get actionable insights, e.g. transaction history, payment history, customer database, etc.
  • Log processing — processing various types of log files such as call logs, weblogs, machine logs, etc.

Coding in Hive

We will be using a table called “transaction” to look at how to query data in Hive. The transaction table contains attributes id, item, and sales.

DDL commands in Hive

DDL is short for Data Definition Language, which deals with database schemas and descriptions of how data should reside in the database. Some common examples are:

Create table

  • Creating a table — CREATE TABLE transaction(id INT, item STRING, sales FLOAT);
  • Storing a table in a particular location — CREATE TABLE transaction(id INT, item STRING, sales FLOAT) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001' STORED AS TEXTFILE LOCATION '<hdfs path>';
  • Partitioning a table — CREATE TABLE transaction(item STRING, sales FLOAT) PARTITIONED BY (id INT); (the partition column is declared separately from the regular columns; see the sketch below)
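
Partition columns are not stored in the data files themselves; each distinct value gets its own subdirectory under the table's location, and the column can still be referenced in queries like any other. A minimal sketch, assuming we instead partition the transaction table by a hypothetical txn_date string column:

    -- id, item, and sales live in the data files; txn_date is encoded in the directory layout
    CREATE TABLE transaction(id INT, item STRING, sales FLOAT)
    PARTITIONED BY (txn_date STRING);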

Drop table
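
For example, to remove the transaction table (IF EXISTS avoids an error if the table is absent):

  • DROP TABLE IF EXISTS transaction;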

Alter table

  • ALTER TABLE transaction RENAME TO transaction_front_of_stores;
  • To add a column — ALTER TABLE transaction ADD COLUMNS (customer_name STRING);

Show Table
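
For example, to list the tables in the current database (an optional LIKE pattern filters the names):

  • SHOW TABLES;
  • SHOW TABLES LIKE 'transaction*';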

Describe Table

  • DESCRIBE transaction;
  • DESCRIBE EXTENDED transaction;

DML commands in Hive

DML is short for Data Manipulation Language, which covers the most commonly used SQL statements such as SELECT, INSERT, UPDATE, DELETE, etc. It is primarily used to store, modify, retrieve, update, and delete data in a database.

Loading Data

  • Loading data from a local file — LOAD DATA LOCAL INPATH '<file path>' [OVERWRITE] INTO TABLE <table name>;
  • LOAD DATA LOCAL INPATH '/documents/datasets/transaction.csv' OVERWRITE INTO TABLE transaction;
  • Writing a dataset from a separate table — INSERT OVERWRITE TABLE transaction SELECT id, item, sales FROM transaction_updated; (OVERWRITE replaces existing data; see the note below)
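
A small but important distinction: INSERT OVERWRITE replaces the current contents of the target table, while INSERT INTO appends to them. A minimal sketch using the same tables:

    -- Replace whatever is currently stored in transaction with the query result
    INSERT OVERWRITE TABLE transaction SELECT id, item, sales FROM transaction_updated;

    -- Append the query result to the existing rows instead
    INSERT INTO TABLE transaction SELECT id, item, sales FROM transaction_updated;
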
Select Statement

The select statement is used to fetch data from a database table. It is primarily used for viewing records, selecting required fields, getting distinct values, and displaying the results of any filter, limit, or group by operation.

To get all records from the transaction table:

    SELECT * FROM transaction;

To get distinct transaction ids from the transaction table:

    SELECT DISTINCT id FROM transaction;

Limit Statement

Used along with the select statement to limit the number of rows returned. A transaction database typically contains a large volume of data, so selecting every row results in long processing times.

    SELECT * FROM transaction LIMIT 10;

Filter Statement

The WHERE clause filters records based on a condition. To get all transactions with sales greater than 100:

    SELECT * FROM transaction WHERE sales > 100;

Group by Statement

Group by statements are used for summarizing data at different levels. Think of a scenario where we want to calculate total sales by item:

    SELECT item, SUM(sales) AS sale FROM transaction GROUP BY item;

What if we want to keep only the items whose total sales exceed 1,000?

    SELECT item, SUM(sales) AS sale FROM transaction GROUP BY item HAVING sale > 1000;

Joins in Hive

To combine and retrieve records from multiple tables we use Hive joins. Hive supports inner, left outer, right outer, and full outer joins for two or more tables, and the syntax is similar to the one used in SQL. Before we look at the syntax, let's understand how the different joins work (a concrete example appears after the notes below).

Different joins in Hive:

    SELECT A.* FROM transaction A {LEFT|RIGHT|FULL} JOIN transaction_date B ON (A.id = B.id);

Notes:

  • Hive doesn't support IN/EXISTS subqueries
  • Hive only supports equality-based join conditions (equi-joins)
  • Multiple tables can be joined, but organize the tables so that the largest one appears last in the sequence
  • Hive converts joins over multiple tables into a single map/reduce job if every table uses the same column in its join clauses
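
For example, assuming the transaction_date table carries an id column and a hypothetical txn_date column, a left outer join that keeps every transaction and attaches its date where one exists could be written as:

    -- txn_date comes back NULL for transactions with no matching id in transaction_date
    SELECT A.id, A.item, A.sales, B.txn_date
    FROM transaction A
    LEFT OUTER JOIN transaction_date B ON (A.id = B.id);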

Optimizing queries in Hive

To optimize queries in Hive, here are a few rules of thumb you should know (a short sketch of the corresponding settings and hints follows the list):

  1. Group by, aggregation functions, and joins take place in the reducer by default, whereas filter operations happen in the mapper
  2. Use the hive.map.aggr=true option to perform first-level aggregation directly in the map task
  3. Set the number of mappers/reducers depending on the type of task being performed. For filter conditions use set mapred.map.tasks=X; for aggregating operations use set mapred.reduce.tasks=Y;
  4. In joins, the last table in the sequence is streamed through the reducers whereas the others are buffered, so organize tables such that the largest table appears last in the sequence
  5. The STREAMTABLE and MAPJOIN hints can be used to speed up join tasks
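
A rough sketch of how these settings and hints are applied, assuming transaction_date is the smaller of the two tables (the task counts are only illustrative placeholders):

    -- Enable map-side (first-level) aggregation
    SET hive.map.aggr=true;

    -- Suggest the number of map and reduce tasks (values are illustrative)
    SET mapred.map.tasks=10;
    SET mapred.reduce.tasks=5;

    -- Stream the large transaction table through the reducers instead of buffering it
    SELECT /*+ STREAMTABLE(A) */ A.id, A.item, A.sales
    FROM transaction A JOIN transaction_date B ON (A.id = B.id);

    -- Load the small transaction_date table into memory and perform the join map-side
    SELECT /*+ MAPJOIN(B) */ A.id, A.item, A.sales
    FROM transaction A JOIN transaction_date B ON (A.id = B.id);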
