How we built a Customisable Cloud Hadoop using MapR and Brooklyn on EC2

In our recent blog post, "Customisable Cloud Hadoop: Automating MapR on EC2 using Brooklyn", we demonstrated using Brooklyn and MapR to create a powerful and flexible cloud Hadoop cluster.

In this post we reflect on how we went about writing it, including some of the issues encountered. This post will be particularly useful if you are working through the source code.


We were asked to automate the rollout of a MapR Hadoop cluster in a public cloud using Brooklyn and jclouds.

MapR M3 is a distribution of Hadoop Map-Reduce, with a handy GUI and a number of practical conveniences beyond Hadoop (including simplified setup). Brooklyn provides open-source application management. jclouds provides multi-cloud portability.

MapR M3 Dashboard

In this case we only used Brooklyn to automate MapR’s deployment. Brooklyn’s advanced autonomic policy-based management was not used.

The base instructions we followed were MapR's own installation documentation.


An initial manual install identified a few useful things that, while not surprising, are easy to overlook:

  • RAM >= 2.5GB, for the dd process and for MapR itself
  • The 64-bit OS requirement has to be explicit when requesting an instance from AWS.
  • Most of the commands must be run as root or via sudo
  • JDK must be installed
  • apt-get needs --allow-unauthenticated (as well as -y, of course)
  • Disk configuration has to go to /mnt on Amazon (that’s where the space is)
  • The MapR user and password have to be set up on the filesystem (echo "$password\n$password" | sudo passwd $user)
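The password step in that list is easy to get wrong: passwd reads the new password twice from stdin, so the two copies must be separated by a real newline, and plain echo prints a literal \n on some shells. A minimal sketch (the user and password values are illustrative):

```shell
# illustrative values
user=mapr
password='secret'

# printf guarantees a real newline between the two copies of the password;
# plain echo would print a literal \n on some shells
printf '%s\n%s\n' "$password" "$password"

# in the real setup this output is piped straight into: sudo passwd "$user"
```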

More interestingly we got familiar with the MapR architecture, which consists of three node types:

  • a number of Workers which run the MapR processes (mapr-warden)
  • two Zookeeper Workers which run the worker processes and Zookeeper
  • one Master which runs Zookeeper, the worker processes, and a few other processes; the Master has to be started and licensed before the MapR process (warden) is started on the other machines
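As a sketch of how these roles map onto installed software (the package names follow the MapR M3 install docs, but the exact set depends on your release, so treat this as illustrative rather than definitive):

```shell
# which MapR packages go on which node type; package names follow the MapR
# M3 install docs, but the exact set depends on your release, so this is an
# illustrative sketch rather than a definitive list
worker_pkgs="mapr-fileserver mapr-tasktracker"
zookeeper_worker_pkgs="$worker_pkgs mapr-zookeeper"
master_pkgs="$zookeeper_worker_pkgs mapr-cldb mapr-jobtracker mapr-webserver"

# print (rather than run) the install command a master node would use
echo "sudo apt-get install -y --allow-unauthenticated $master_pkgs"
```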

Brooklyn Structure

Brooklyn manages nodes in a hierarchy, and we wanted a simple hierarchy reflecting the MapR roles, with as much re-use as possible. After a bit of white-boarding we settled on:

  • M3 is the root entity, parenting the Master, two ZookeeperWorkers, and a cluster of Workers
  • M3NodeDriver contains much of the common work for starting nodes (apt-getting software, starting zookeeper if requested by the node), and delegating to the node for a “phase 2” of startup (as that varies)
  • AbstractM3Node is the parent class of the nodes, creating the driver and exposing abstract methods isZookeeper and runMaprPhase2
  • WorkerNode extends AbstractM3Node, implementing runMaprPhase2 which waits for the Master to be UP then starts the mapr-warden
  • ZookeeperWorkerNode extends WorkerNode, simply returning true for isZookeeper()
  • MasterNode extends AbstractM3Node, also returning true for isZookeeper(), and performs the additional steps needed in runMaprPhase2

We wrote runMaprPhase1 in the driver, delegating to descriptively named methods for the individual steps:

public void runMaprPhase1() {
   if (entity.isZookeeper()) startZookeeper();
   // ... the remaining common setup steps ...
}

We wrote runMaprPhase2 in the worker:

public void runMaprPhase2() {
   log.info("MapR node {} waiting for master", this);
   getConfig(M3.MASTER_UP);   // blocks until the master advertises it is up
   log.info("MapR node {} detected master up, proceeding to start warden", this);
   // ... start the warden ...
}

and runMaprPhase2 in the master:

public void runMaprPhase2() {
   setupAdminUser(user, password);
   // ... licensing and the other master-only steps ...
}

This left a handful of methods to implement, all of which boil down to a few lines of code – usually bash commands invoked remotely. See, for example, the step methods in M3NodeDriver.

The Machines

It is easy to point Brooklyn at a fixed pool of machines when you need bare-metal speed, but cloud resources are convenient for experiments.

Brooklyn uses jclouds to make it easy to request machines in clouds, in a portable way, including requesting machine specs. We use the following code in AbstractM3Node:

protected Map getProvisioningFlags(MachineProvisioningLocation location) {
   Map flags = super.getProvisioningFlags(location);
   flags.templateBuilder = new PortableTemplateBuilder().
       osFamily(OsFamily.UBUNTU).osVersionMatches("11.04").
       os64Bit(true).minRam(2560);   // 64-bit Ubuntu 11.04 with 2.5GB+ RAM (see below)
   flags.userName = "ubuntu";
   flags.inboundPorts = [ 22, 2048, 2888, 3888, 5660, 5181, 7221, 7222,
       8080, 8443, 9001, 9997, 9998, 50030, 50060, 60000 ];
   return flags;
}

However, this brings us to the first issue we encountered, or rather the first set of issues. We didn’t start with the code block above (which works). We started with just asking for Ubuntu with 2560 megs of RAM.

  • We forgot the need for 64-bit.
  • We were first given a krazy Karmic (Ubuntu 9.10) image, which was too old and missing all sorts of dependencies, so we locked down to the favourite, 11.04.
  • We discovered some errors around whether the default user is “root” or “ubuntu”, which we’ve since fixed in Brooklyn with a flexible strategy. It is usually better to specify the default user if you know it, as it isn’t obvious when you get the wrong user: the connection can hang (on older versions) or throw an invalid packet length exception (on newer versions). The best way to diagnose this is to note the IP address (in the logs, or in the Brooklyn console) and manually ssh in; you’ll get a friendly message saying to log in as “ubuntu” rather than “root”.
  • We forgot the security group, since everything was already open in the group where we had manually created our first batch of servers; fortunately MapR supply a list of ports, and jclouds makes it easy to set up the relevant firewall rules. (Although this is not the list we started with; see below.)

Getting the Processes Running

With the machines up, we were able to get apt-get to install software, and were rewarded with some more challenging and fun (for odd values of fun) snags:

  • Oracle JDK can’t be installed automatically from the apt repositories, for licensing reasons. Folks have moaned about this ad nauseam, and Oracle have sent some of the nicest cease-and-desist letters I’ve ever seen (apologising for how long it is taking them to move to a more permissive JDK binary licence), but it seems there are two answers: users can supply Oracle IDs to automate a download, or we can use OpenJDK. We took the latter route for now.
  • sudo echo > file_owned_by_root: To update the apt repositories, we need to write to a file owned by root, but we’re not root. sudo needs a little bit of syntactic sugar to do this (unsurprising, but ate through one cycle):
 sudo sh -c 'echo xxx > /root/file'
  • Backgrounding processes reliably: The scripted process to create the disks, using dd if=/dev/zero of=/mnt/storagefile bs=1G count=20, initially timed out on the Brooklyn side; not surprising, as it can take 15+ minutes. We switched to a more flexible remote-monitoring approach (avoiding a persistent ssh session) which jclouds provides; this claimed to fail immediately, although looking on the machine it appeared to have worked. This, and some related issues, led to some fun and games with nohup, sudo, pids, and process-creation syntax, some interesting lessons, and ultimately a patch to jclouds and an issue in Brooklyn. One neat discovery (for pedants) is that if you kill your ssh session after the command is backgrounded, but want to ensure your process has launched, you should write:
good% nohup bash -c 'myproc & echo $! > pid.txt'

# not
bad% nohup myproc & echo $! > pid.txt

# because if you kill the shell immediately afterwards, nohup might be
# killed before it launches myproc in no-hang-up mode
  • Zookeeper ports: Zookeeper didn’t start. The logs showed the processes starting up normally, then dying. Eventually we discovered it was listening on 3888, which wasn’t exposed in the security group. Adding this to the inboundPorts above made things go much better.
  • When and how to wait: When running the scripts manually, zookeeper will almost certainly come up before one can type the command to start the warden on the master. But when automating, this isn’t the case, and the ordering is required. Once we discovered this “happens-before” requirement, it was easy to express in Brooklyn using its dependent configuration, which blocks until sensors have a non-trivial/true value. This is shown below, along with the code to gather all the zookeeper hostnames when configuring MapR:
// in M3, collect the zookeeper hostnames and an indicator of when they are up
// (the config key names these feed are illustrative)

List zookeeperNodes = ownedChildren.findAll({
    (it in AbstractM3Node) && (it.isZookeeper()) });
setConfig(ZOOKEEPER_HOSTNAMES, DependentConfiguration.listAttributesWhenReady(
    AbstractM3Node.HOSTNAME, zookeeperNodes));
setConfig(ZOOKEEPERS_UP, DependentConfiguration.listAttributesWhenReady(
    AbstractM3Node.ZOOKEEPER_UP, zookeeperNodes));

// in M3NodeDriver.configureMapR, get the hostnames (waiting for them if they aren't yet known)

String masterHostname = entity.getConfig(M3.MASTER_HOSTNAME);
String zkHostnames = entity.getConfig(M3.ZOOKEEPER_HOSTNAMES).join(",");
exec([ "sudo /opt/mapr/server/configure.sh -C ${masterHostname} -Z ${zkHostnames}" ]);

// in M3NodeDriver.startZookeeper, advertise when zookeeper is up
entity.setAttribute(AbstractM3Node.ZOOKEEPER_UP, true);

// in M3NodeDriver.start, before launching phase 2, wait for zookeeper:
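As a footnote to the nohup discussion above, here is a quick self-contained check that the "good" form really does capture the backgrounded pid before the session goes away (sleep stands in for myproc, and the pid.txt path is illustrative):

```shell
# background a stand-in process inside a single nohup'd shell, capturing
# its pid before our own session could go away
nohup bash -c 'sleep 30 & echo $! > pid.txt' >/dev/null 2>&1

# the pid file exists and the backgrounded process is alive
kill -0 "$(cat pid.txt)" && echo "process alive"

# clean up the stand-in process and files
kill "$(cat pid.txt)"
rm -f pid.txt nohup.out
```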

The Prize

The good news is that none of these issues were that hard to resolve, although they all took a bit of time. All in all this was about a day’s work, and it gives a reliable, multi-cloud MapR M3 deployment, where:

  • jclouds gives multi-cloud capability for free
  • Brooklyn gives a high-level language way to manage tactical scripts (to easily see what goes wrong) and the resulting entities
  • Brooklyn gives us ongoing management features like cluster resize for free

Most of the issues resulted directly in improvements to products (Brooklyn and jclouds) – the virtue of open source – and documentation (MapR). And best of all, because the work is done (APL2’d), you can use it for free, and customize it to your requirements.