The code is experimental and proof of concept only – it has not been fully tested
The code runs as a Linux service
It features a web UI
It checks home energy consumption and decides whether to turn the plug on or off based on a threshold
At each check interval, the code checks the current state of the plug and decides whether to turn it on, turn it off, or leave it unchanged.
Here’s a flowchart showing the decision-making process:
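In code form, the same decision logic looks roughly like this (a simplified sketch only: the helper functions, names and values are illustrative, and the real service also applies the buffer periods described below):

import time

CHECK_INTERVAL_SECS = 60     # illustrative check interval
MIN_POWER_THRESHOLD_W = 150  # illustrative acceptable grid draw

while True:
    net_w = get_net_household_power()  # hypothetical solar monitoring API call; positive = importing from grid
    plug_on = get_plug_state()         # hypothetical smart plug API call
    if not plug_on and net_w < MIN_POWER_THRESHOLD_W:
        turn_plug_on()   # enough excess solar: start the load
    elif plug_on and net_w > MIN_POWER_THRESHOLD_W:
        turn_plug_off()  # excess solar has dropped below the threshold: stop the load
    # otherwise leave the plug in its current state
    time.sleep(CHECK_INTERVAL_SECS)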
The Web UI
Ability to disable / enable automatic control
This is useful where the plug needs to be manually controlled via its physical button
Configurable minimum power threshold
This is useful where it’s acceptable to use some grid power as well as solar (e.g. partly cloudy weekends with cheaper electricity rates)
Minimum on / off buffer periods to reduce switching (e.g. for devices which do not benefit from being powered on and off continually)
Monitoring messages showing how many times the plug has been switched and its last state
Overall net power (W)
Useful for seeing current net household energy consumption
Automatic recovery if the plug, solar monitoring API or Wi-Fi network goes offline temporarily
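Behind the UI, the adjustable settings could be represented something like this (the names and values below are assumptions for illustration, not the actual configuration):

# Illustrative settings only - names and defaults are assumed
config = {
    "auto_control_enabled": True,  # disable to control the plug via its physical button
    "min_power_threshold_w": 150,  # grid draw (W) considered acceptable alongside solar
    "min_on_period_secs": 300,     # buffer: once on, stay on at least this long
    "min_off_period_secs": 300,    # buffer: once off, stay off at least this long
    "check_interval_secs": 60,
}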
So far this solution works great.
On a partly cloudy day, the plug automatically switches on and off as excess solar rises above or falls below the minimum power threshold. Similarly, the plug turns off when household consumption is high – for example, during the heating cycle of a washing machine or dishwasher, or when an electric kettle is used.
We got an interesting email from our electricity retailer after setting up this solution:
The message indicates we have successfully boosted our self-consumption – i.e. more solar energy is being self-consumed rather than being exported to the grid, giving the appearance to the retailer that the solar PV system is underperforming. Success!
This is not quite as good as having a home battery or a dedicated (and much more refined) device like the Zappi; however, it comes close. It is a great way to boost self-consumption of excess solar PV energy using software and a low-cost smart plug. With around a year of weekly charging, the savings in effective electricity cost can cover the price of the smart plug.
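As a rough illustration (every figure here is assumed rather than measured): if a weekly charge shifts about 2 kWh from grid power at roughly 30 c/kWh to self-consumed solar that would otherwise earn an 8 c/kWh feed-in tariff, each charge saves around 44 c, so a $20 smart plug pays for itself in roughly 45 weeks – consistent with the year-of-weekly-charging estimate above.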
It’s easy in the digital age to amass tens of thousands of photos (or more!). Categorising these can be a challenging task, let alone searching through them to find that one happy snap from 10 years ago.
Significant advances in machine learning over the past decade have made it possible to automatically tag and categorise photos without user input (assuming a machine learning model has been pre-trained). Many social media and photo sharing platforms make this functionality available to their users – for example, Flickr’s “Magic View”. But what if a user has a large number of files stored locally on a hard disk?
49,049 uncategorised digital images stored locally
No easy way to search (e.g. “red dress”, “mountain”, “cat on a mat”)
Working with Hive can be challenging without the benefit of a procedural language (such as T-SQL or PL/SQL) to work with data between Hive statements or to run dynamic Hive statements in bulk. For example, we may want a row count of every table in one of our Hive databases, without hard-coding a fixed list of tables in our Hive code.
We can compile Java code to run queries against Hive dynamically, but this can be overkill for smaller requirements. Scripting is often a better fit for these more involved Hive tasks.
Python to the rescue
Python code can be used to execute dynamic Hive statements, which is useful in these sorts of scenarios (both are sketched after the list):
Code branching depending on results of a Hive query – e.g. ensuring Hive query A successfully executes before running Hive query B
Using looked-up data to form a filter in a Hive query – e.g. selecting data from the latest partition in a Hive table without needing to perform a nested query to get the latest partition
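Both patterns can be sketched with a hypothetical run_hive_query() helper that executes a statement and returns its result rows (the table names below are made up; the JDBC plumbing behind such a helper is covered in the next section):

# Pattern 1: branch on the result of Hive query A before running Hive query B
if run_hive_query("select count(*) from default.staging_sales")[0][0] == 0:
    raise RuntimeError("staging_sales is empty - aborting the load")
run_hive_query("insert into table default.sales select * from default.staging_sales")

# Pattern 2: look up the latest partition, then use it as a literal filter
partitions = run_hive_query("show partitions default.sales")  # rows like ('dt=2016-01-31',)
latest_dt = max(row[0].split("=")[1] for row in partitions)
print(run_hive_query("select count(*) from default.sales where dt = '" + latest_dt + "'"))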
There are several Python libraries available for connecting to Hive, such as PyHive and Pyhs2 (the latter now unfortunately unmaintained). Some major Hadoop vendors, however, explicitly decline to support this type of direct integration, although they do still strongly support ODBC and JDBC interfaces.
Python + JDBC
We can, in fact, connect Python to sources including Hive and the Hive metastore using the JayDeBeApi package. This is effectively a wrapper that allows Java JDBC drivers to be used from Python scripts.
The shell code (setting environment variables)
First, we need to set the classpath to include the library directories where Hive JDBC drivers can be found, and also where the Python JayDeBe API module can be found:
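# Placeholder jar names and paths - substitute the locations used by your distribution
export CLASSPATH=$CLASSPATH:/usr/lib/hive/lib/hive-jdbc-standalone.jar:/usr/lib/hadoop/hadoop-common.jar:/usr/share/java/mysql-connector-java.jar
# Make the JayDeBeApi module importable if it is not installed system-wide
export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python/site-packages

With the environment in place, Python can open connections to both the metastore database and HiveServer2. A minimal sketch follows – the driver class names are the stock MySQL Connector/J and Hive JDBC classes, while the hosts, ports, database name and credentials are placeholders to adapt:

import jaydebeapi

# Connect to the MySQL database backing the Hive metastore
# (host, port, database name and credentials are placeholders)
conn_mysql = jaydebeapi.connect(
    "com.mysql.jdbc.Driver",
    "jdbc:mysql://metastorehost:3306/hivemetastore",
    ["metastore_user", "metastore_password"])
curs_mysql = conn_mysql.cursor()

# Connect to HiveServer2
conn_hive = jaydebeapi.connect(
    "org.apache.hive.jdbc.HiveDriver",
    "jdbc:hive2://hiveserver:10000/default",
    ["hive_user", "hive_password"])
curs_hive = conn_hive.cursor()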
A metastore query can be run to retrieve the names of matching tables in the default database into an array (mysql_query_output):
# Query the metastore to get all matching tables in the default database
mysql_query_string = ("select t.TBL_NAME "
                      "from TBLS t join DBS d "
                      "on t.DB_ID = d.DB_ID "
                      "where d.NAME = 'default' "
                      "and t.TBL_NAME like '%mytable%'")
curs_mysql.execute(mysql_query_string)
mysql_query_output = [row[0] for row in curs_mysql.fetchall()]  # flatten 1-tuples into table names
Hive queries can be dynamically generated and executed to retrieve row counts for all the tables found above:
# Perform a row count of each Hive table found and output it to the screen
for i in mysql_query_output:
    hive_query_string = ("select '" + i + "' as tabname, "
                         "count(*) as cnt "
                         "from default." + i)
    curs_hive.execute(hive_query_string)
    hive_query_output = curs_hive.fetchall()
    print(hive_query_output)
Done! The output from the Hive queries should now be printed to the screen.
Pros and cons of the solution
Provides a nice way of scripting whilst using Hive data
Basic error handling is possible in Python after each HQL statement executes
Connects to a wide variety of JDBC-compatible databases
Relies on client memory to store query results – not suitable for big data volumes (Spark would be a better solution on this front, as all processing is done in parallel and not brought back to the client unless absolutely necessary)
Minimal control / visibility over Hive query whilst running