It’s easy in the digital age to amass tens of thousands of photos (or more!). Categorising these can be a challenging task, let alone searching through them to find that one happy snap from 10 years ago.
Significant advances in machine learning over the past decade have made it possible to automatically tag and categorise photos without user input (assuming a pre-trained machine learning model). Many social media and photo-sharing platforms offer this functionality to their users — for example, Flickr’s “Magic View”. But what if a user has a large number of files stored locally on a hard disk?
49,049 uncategorised digital images stored locally
No easy way to search (e.g. “red dress”, “mountain”, “cat on a mat”)
Working with Hive can be challenging without the benefit of a procedural language (such as T-SQL or PL/SQL) to do things with data in between Hive statements, or to run dynamic Hive statements in bulk. For example, we may want a row count of every table in one of our Hive databases, without having to hard-code a fixed list of tables in our Hive code.
We can compile Java code to run queries against Hive dynamically, but this can be overkill for smaller requirements. Scripting can be a better way to code more complex Hive tasks.
Python to the rescue
Python code can be used to execute dynamic Hive statements, which is useful in these sorts of scenarios:
Code branching depending on results of a Hive query – e.g. ensuring Hive query A successfully executes before running Hive query B
Using looked-up data to form a filter in a Hive query – e.g. selecting data from the latest partition in a Hive table without needing to perform a nested query to get the latest partition
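The two scenarios above can be sketched with a small helper. This is illustrative only: the table and `partition_date` column are hypothetical, and the cursor can be any DB-API cursor (which JayDeBeApi cursors are):

```python
def count_latest_partition(cursor, table):
    """Sketch of the two scenarios: branch on the result of query A,
    then use the looked-up value as a plain filter in query B, avoiding
    a nested query. `table` and `partition_date` are hypothetical names."""
    cursor.execute("select max(partition_date) from " + table)
    latest = cursor.fetchone()[0]
    if latest is None:
        return None  # query A found nothing, so skip query B entirely
    cursor.execute(
        "select count(*) from " + table + " where partition_date = ?",
        (latest,),
    )
    return cursor.fetchone()[0]
```

Because this only relies on the standard DB-API cursor interface, the same function works unchanged against Hive once a JDBC connection is open.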
There are several Python libraries available for connecting to Hive, such as PyHive and Pyhs2 (the latter unfortunately no longer maintained). Some major Hadoop vendors, however, decline to explicitly support this type of direct integration. They do still strongly support ODBC and JDBC interfaces.
Python + JDBC
We can, in fact, connect Python to sources including Hive and also the Hive metastore using the JayDeBeApi package. This is effectively a wrapper allowing Java JDBC drivers to be used in Python scripts.
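For example, a pair of connections might be opened like this. The driver class names are the standard Hive and MySQL JDBC drivers, but the hosts, ports, database names and credentials below are assumptions for illustration:

```python
def open_hive_cursors(hive_host, metastore_host):
    """Open a JDBC connection to HiveServer2 and one to the MySQL-backed
    metastore, returning a DB-API cursor for each. Hosts, ports and
    credentials here are placeholder assumptions."""
    import jaydebeapi  # pip install JayDeBeApi

    conn_hive = jaydebeapi.connect(
        "org.apache.hive.jdbc.HiveDriver",
        "jdbc:hive2://%s:10000/default" % hive_host,
        ["hive_user", "hive_password"],
    )
    conn_mysql = jaydebeapi.connect(
        "com.mysql.jdbc.Driver",
        "jdbc:mysql://%s:3306/metastore" % metastore_host,
        ["metastore_user", "metastore_password"],
    )
    return conn_hive.cursor(), conn_mysql.cursor()

# Usage (requires the JDBC driver jars on the classpath, set up below):
# curs_hive, curs_mysql = open_hive_cursors("hiveserver.example.com",
#                                           "metastore.example.com")
```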
The shell code (setting environment variables)
First, we need to set the classpath to include the library directories where Hive JDBC drivers can be found, and also where the Python JayDeBe API module can be found:
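For example (the jar and module locations below are assumptions; substitute the paths on your own system):

```shell
# Hive JDBC driver jars -- actual jar names and paths vary by distribution
export CLASSPATH=$CLASSPATH:/usr/lib/hive/lib/hive-jdbc-standalone.jar
export CLASSPATH=$CLASSPATH:/usr/lib/hadoop/hadoop-common.jar
# Location of the JayDeBeApi Python module
export PYTHONPATH=$PYTHONPATH:/usr/lib/python2.7/site-packages
```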
A metastore query can then be run to retrieve the names of matching tables in the default database into an array (mysql_query_output):

```python
# Query the metastore to get matching tables in the default database
mysql_query_string = """
    select t.TBL_NAME
    from TBLS t
    join DBS d on t.DB_ID = d.DB_ID
    where d.NAME = 'default'
      and t.TBL_NAME like '%mytable%'
"""
curs_mysql.execute(mysql_query_string)
# fetchall() returns a list of 1-tuples, e.g. [('mytable_one',), ('mytable_two',)]
mysql_query_output = curs_mysql.fetchall()
```
Hive queries can be dynamically generated and executed to retrieve row counts for all the tables found above:
```python
# Perform a row count of each Hive table found and output it to the screen
for (table_name,) in mysql_query_output:
    hive_query_string = (
        "select '" + table_name + "' as tabname, count(*) as cnt "
        "from default." + table_name
    )
    curs_hive.execute(hive_query_string)
    hive_query_output = curs_hive.fetchall()
    print(hive_query_output)
```
Done! Output from the Hive queries should now be printed to the screen.
Pros and cons of the solution
Pros:
Provides a nice way of scripting whilst using Hive data
Basic error handling is possible through Python after each HQL statement is executed
Connects to a wide variety of JDBC-compatible databases
Cons:
Relies on client memory to store query results – not suitable for big data volumes (Spark would be a better solution on this front, as all processing is done in parallel and not brought back to the client unless absolutely necessary)
Minimal control / visibility over a Hive query whilst it is running