Useful queries for the Hive metastore

The Hive metastore stores metadata about the objects within Hive. Usually this metastore sits in a relational database such as MySQL. Sometimes it’s useful to query the metastore directly to find out which databases, tables and views exist in Hive and how they’re defined. For example, say we want to expose a report to users…
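The kind of metastore query described can be sketched as follows. The `DBS` and `TBLS` table names match the standard metastore schema, but the in-memory SQLite database here is only a stand-in for the real MySQL backend, and the sample databases and tables are hypothetical:

```python
import sqlite3

# Minimal sketch: the real Hive metastore (typically MySQL) keeps databases
# in a DBS table and tables/views in a TBLS table; this in-memory SQLite
# mock only illustrates the shape of the join.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE DBS  (DB_ID INTEGER PRIMARY KEY, NAME TEXT);
    CREATE TABLE TBLS (TBL_ID INTEGER PRIMARY KEY, DB_ID INTEGER,
                       TBL_NAME TEXT, TBL_TYPE TEXT);
    INSERT INTO DBS  VALUES (1, 'sales');
    INSERT INTO TBLS VALUES (10, 1, 'orders',   'MANAGED_TABLE'),
                            (11, 1, 'orders_v', 'VIRTUAL_VIEW');
""")

# List every table and view per Hive database, much as you would
# against the real metastore schema.
rows = conn.execute("""
    SELECT d.NAME, t.TBL_NAME, t.TBL_TYPE
    FROM TBLS t JOIN DBS d ON t.DB_ID = d.DB_ID
    ORDER BY d.NAME, t.TBL_NAME
""").fetchall()
print(rows)
```

Against a real metastore you would point the same query at the MySQL instance backing Hive rather than at a local mock.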

Python + JDBC = Dynamic Hive scripting

Working with Hive can be challenging without the benefit of a procedural language (such as T-SQL or PL/SQL) to do things with data in between Hive statements or to run dynamic Hive statements in bulk. For example, we may want to do a row count of all tables in one of our Hive databases,…
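The dynamic-statement idea can be sketched like this. Generating the statements is shown standalone; in practice you would fetch the table names with `SHOW TABLES` over a JDBC connection and execute each generated statement the same way (the database and table names below are hypothetical):

```python
def rowcount_statements(database, tables):
    """Build one SELECT COUNT(*) statement per table in the given database.

    In the post's scenario these would then be executed over JDBC,
    collecting one row count per table.
    """
    return [
        f"SELECT '{t}' AS table_name, COUNT(*) AS row_count FROM {database}.{t}"
        for t in tables
    ]

# Hypothetical table list; normally the result of SHOW TABLES.
stmts = rowcount_statements("sales", ["orders", "customers"])
for s in stmts:
    print(s)
```

This is the essence of dynamic scripting: Python builds the HiveQL as data, so one loop covers however many tables the database happens to contain.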

Useful date formulas for Hive

Hive comes with some handy functions for transforming dates. These can be helpful when working with date dimension tables and performing time-based comparisons and aggregations. e.g. Convert a native Hive date to a formatted date string: date_format(myDate, 'dd-MM-yyyy') Return the week number (within the year) of a particular date – i.e. the first week of the year is 1,…
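For readers more at home in Python, the two Hive functions mentioned have close equivalents in the standard library; this sketch assumes Hive's `weekofyear` follows ISO week numbering, which matches `isocalendar()`:

```python
from datetime import date

d = date(2024, 1, 4)

# Hive: date_format(myDate, 'dd-MM-yyyy')
# Python's strftime uses %d-%m-%Y for the same day-month-year layout.
formatted = d.strftime("%d-%m-%Y")

# Hive: weekofyear(myDate) -- ISO week number within the year,
# so the first week of the year is 1.
week = d.isocalendar()[1]

print(formatted, week)  # 04-01-2024 1
```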

Finding a sequence of events in Hive using analytic functions

Hadoop Hive features several useful functions for efficiently performing analytics over ordered sets of rows; these are known as the windowing and analytics functions. For example, the lead and lag functions can be used to produce rolling averages or cumulative sums over a window of time (e.g. hourly averages for some metric over the preceding… Continue reading Finding a sequence of events in Hive using analytic functions
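To make the lag idea concrete, here is a small Python emulation of what `lag(event) OVER (PARTITION BY user ORDER BY ts)` does: each row is compared with the previous row in its own partition to spot a two-step sequence. The event stream below is hypothetical:

```python
# Hypothetical event stream: (user, timestamp, event).
events = [
    ("alice", 1, "login"),
    ("alice", 2, "delete"),
    ("bob",   1, "login"),
    ("bob",   2, "view"),
]

# Sort by (user, ts), as PARTITION BY user ORDER BY ts would, then compare
# each row with its predecessor -- the same pairing lag() exposes in Hive.
events.sort(key=lambda e: (e[0], e[1]))
matches = [
    cur[0]
    for prev, cur in zip(events, events[1:])
    if prev[0] == cur[0]            # same partition (same user)
    and prev[2] == "login"
    and cur[2] == "delete"
]
print(matches)  # users whose 'login' is immediately followed by 'delete'
```

In Hive the equivalent query filters on `lag(event)` equalling the first event of the sequence, letting the cluster scan the ordered rows in one pass.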

Running Spark on Yarn with Zeppelin and WASB storage

It’s increasingly said that “notebooks” are the new spreadsheets as a tool for exploratory data analysis. The Apache Zeppelin project (https://zeppelin.incubator.apache.org/) is certainly one such promising notebook-style interface for performing advanced interactive querying of Hadoop data (whether via Hive, Spark, shell or other scripting languages). At the time of writing Zeppelin is…

Selecting maximum of two columns – timestamp and sequence in Hive

Sometimes it is useful to sort data by two columns and get the maximum of both columns for each record (column A, then column B). An example is a table with a logical primary key plus a timestamp field and a sequence number field. This could be the case, for example, if data is…
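The two-column maximum can be sketched in Python with tuple comparison, which orders by timestamp first and sequence number second. This mirrors one common Hive approach, ranking rows with `ROW_NUMBER() OVER (PARTITION BY key ORDER BY ts DESC, seq DESC)` and keeping rank 1, though the post's exact method is truncated here; the sample rows are hypothetical:

```python
# Hypothetical rows: (logical key, timestamp, sequence number, payload).
rows = [
    ("k1", "2021-01-01", 1, "old"),
    ("k1", "2021-01-02", 1, "newer"),
    ("k1", "2021-01-02", 2, "newest"),
    ("k2", "2021-01-01", 1, "only"),
]

# For each key keep the record with the greatest (timestamp, sequence) pair.
# Tuples compare element-wise, so ties on timestamp fall through to sequence.
latest = {}
for key, ts, seq, payload in rows:
    if key not in latest or (ts, seq) > (latest[key][0], latest[key][1]):
        latest[key] = (ts, seq, payload)
print(latest)
```

The tuple comparison is the whole trick: it applies exactly the "column A, then column B" ordering described above without a separate sort pass.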