Problem starting HBase master on Hadoop with Cloudera

After formatting the Hadoop HDFS NameNode and trying to restart the Hadoop cluster in Cloudera, I encountered this fatal error on the HBase master, preventing HBase from starting at all:

Unhandled exception. Starting shutdown.
org.apache.hadoop.hbase.TableExistsException: hbase:namespace
at org.apache.hadoop.hbase.master.handler.CreateTableHandler.prepare(CreateTableHandler.java:133)
at org.apache.hadoop.hbase.master.TableNamespaceManager.createNamespaceTable(TableNamespaceManager.java:232)
at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:86)
at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1069)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:942)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:613)
at java.lang.Thread.run(Thread.java:745)

After unsuccessfully trying to fix this error by removing the /hbase directory on HDFS, I stumbled across the solution: clearing the /hbase node via the ZooKeeper client:

azureuser@master:/$ zookeeper-client
Connecting to localhost:2181
2015-01-24 02:17:31,535 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.5-cdh5.3.0--1, built on 12/17/2014 02:46 GMT
2015-01-24 02:17:31,540 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=master.hadoopnet
[…]
2015-01-24 02:17:31,737 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1235] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x14b19b8aa4f000b, negotiated timeout = 30000
[zk: localhost:2181(CONNECTED) 0]
WATCHER::

WatchedEvent state:SyncConnected type:None path:null

[zk: localhost:2181(CONNECTED) 1] rmr /hbase
[zk: localhost:2181(CONNECTED) 2] quit
Quitting...
2015-01-24 02:17:56,579 [myid:] - INFO  [main:ZooKeeper@684] - Session: 0x14b19b8aa4f000b closed
2015-01-24 02:17:56,579 [myid:] - INFO  [main-EventThread:ClientCnxn$EventThread@512] - EventThread shut down

This fixed the issue: restarting HBase via Cloudera now succeeds without the above error.
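For future reference, the same cleanup can be done non-interactively. This is a hedged sketch, not something from the session above: it assumes ZooKeeper listens on localhost:2181 and that the CDH zookeeper-client wrapper (around zkCli.sh) is on the PATH.

```shell
# Non-interactive equivalent of the interactive rmr session.
# Assumption: ZooKeeper at localhost:2181, CDH zookeeper-client on PATH.
# WARNING: this deletes HBase's entire znode tree -- stop HBase first.
ZK_SERVER="localhost:2181"
if command -v zookeeper-client >/dev/null 2>&1; then
  zookeeper-client -server "$ZK_SERVER" rmr /hbase
  STATUS="removed /hbase znode tree from $ZK_SERVER"
else
  STATUS="zookeeper-client not found; run this on a cluster node"
fi
echo "$STATUS"
```

Note that newer ZooKeeper releases replace rmr with deleteall, so the exact command depends on the ZooKeeper version shipped with your distribution.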

It turns out ZooKeeper has a filesystem-like concept called znodes (see http://blog.cloudera.com/blog/2013/10/what-are-hbase-znodes/), which is not to be confused with HDFS in a Hadoop cluster: HBase keeps its coordination state in znodes, which is why deleting the /hbase directory on HDFS alone did not help.
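Znodes can be browsed with filesystem-style commands, which is handy for verifying that HBase has recreated its tree after the restart. A sketch, again assuming ZooKeeper at localhost:2181; the znode names under /hbase (e.g. /hbase/master, /hbase/rs) vary by HBase version.

```shell
# List the znodes HBase recreates under /hbase after a clean restart.
# Assumption: ZooKeeper at localhost:2181, zookeeper-client on PATH.
if command -v zookeeper-client >/dev/null 2>&1; then
  LISTING=$(zookeeper-client -server localhost:2181 ls /hbase)
else
  LISTING="zookeeper-client unavailable outside the cluster"
fi
echo "$LISTING"
```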