Wednesday, January 28, 2009

hadoop job visualization

Last week we had our first hack day at bizo.

We run a ton of Hadoop jobs here, many of them multi-stage and completely automated, including machine startup/shutdown, copying data to/from S3, etc.

I wanted to put something together that would help us visualize the job workflow. I was hoping it would give us some insight into how the jobs could be optimized in terms of time and cost.

Here's a generated graph for one of our daily jobs:

[workflow graph image]

The program runs through our historical job data and constructs a workflow based on job dependencies. It also calculates cost based on the machine type, number of machines, and runtime. Amazon's pricing is worth mentioning here: each instance type has a per-hour price, and there is no partial-hour billing. So, if you use a small machine for 1 hour, it's 10 cents; if you use it for 5 minutes, it's also 10 cents. If you use it for 61 minutes, that's 20 cents (2 billable hours).
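Since that rounding rule drives all of the cost numbers below, here's a minimal sketch of the calculation. The 10-cent small rate comes from the example above; the medium rate and the function itself are just illustrative assumptions, not our actual job code:

```python
import math

# Per-hour instance prices. The small rate matches the example above;
# the medium rate is an illustrative assumption.
HOURLY_RATE = {"small": 0.10, "medium": 0.20}

def step_cost(machine_type, num_machines, runtime_minutes):
    """Cost of one job step: each machine is billed in whole hours,
    rounded up -- there's no partial-hour billing."""
    billable_hours = math.ceil(runtime_minutes / 60.0)
    return billable_hours * num_machines * HOURLY_RATE[machine_type]

print(step_cost("small", 1, 5))    # $0.10 -- 5 minutes still bills a full hour
print(step_cost("small", 1, 60))   # $0.10
print(step_cost("small", 1, 61))   # $0.20 -- rolls over into a second hour
```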

As we can see above, a run of this entire workflow cost us $5.40. You can also see that there are a number of steps where we're using considerably less than a full billed hour, so if we don't care about total runtime, we can save money by reducing the machine size (here we're using all medium machines) or by running with fewer machines.

I think the workflow visualization is interesting because it shows that total machine runtime isn't really what you're waiting on. Since a number of the tasks can run in parallel, it's the runtime of the longest path through the graph that determines how long you wait for the job to complete. In this example, even though we're using 176 minutes of machine runtime, we're only actually waiting for 137 minutes.

This means that there are cases where we can spend less money and not affect the overall runtime. You can see that pretty clearly in the example above. "ta_dailyReport" only takes 16 minutes, but on the other branch we're spending 90 minutes on "ta_monthlyAgg" and "ta_monthlyReport." So, if we can spend less money on ta_dailyReport by using fewer machines or smaller machines, as long as we finish in less than 90 minutes we're not slowing down the process.
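In graph terms, the wait time is just the critical path of the workflow DAG. Here's a minimal sketch of that computation; ta_dailyReport's 16 minutes and the 90-minute monthly branch come from the workflow above, but the hypothetical "ingest" root step and the 60/30 split of the monthly branch are assumptions for illustration:

```python
from functools import lru_cache

# Hypothetical workflow: step -> (runtime in minutes, downstream steps).
# The ta_* names and the 16- and 90-minute figures come from the post;
# the "ingest" root and the 60/30 split are made up.
WORKFLOW = {
    "ingest":           (30, ["ta_dailyReport", "ta_monthlyAgg"]),
    "ta_dailyReport":   (16, []),
    "ta_monthlyAgg":    (60, ["ta_monthlyReport"]),
    "ta_monthlyReport": (30, []),
}

@lru_cache(maxsize=None)
def longest_path(step):
    """Minutes from the start of `step` until its slowest descendant finishes."""
    runtime, children = WORKFLOW[step]
    return runtime + max((longest_path(c) for c in children), default=0)

total_machine_minutes = sum(runtime for runtime, _ in WORKFLOW.values())
print("machine minutes:", total_machine_minutes)       # 136
print("wall-clock minutes:", longest_path("ingest"))   # 120 (30 + 60 + 30)
```

The gap between those two numbers is exactly the slack you can spend on cheaper configurations for steps that are off the critical path, like ta_dailyReport here.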

Yesterday, I decided to play around with the machine configurations based on what I learned from the above graph. Here are the results:

[updated workflow graph image]

For this job we don't care so much about total runtime, within reason, so I made a few small machine changes at a few different steps.

Even though we're spending an extra 45 minutes of machine time, the longest path is only 13 minutes longer. So, the entire job only takes an extra 13 minutes to run, and we save $1.00 (an 18% savings!).

As you can see, there are still a number of tasks where we're using less than a full billed hour, so we could cut costs even further if we didn't mind affecting the overall runtime a bit more.

All in all, a pretty successful hackday. My co-workers also worked on some pretty cool stuff, which hopefully they will post about soon.

A quick note on the graphs, for those who are interested. My graphing program simply generated Graphviz files as its output. Graphviz is a totally awesome graph-visualization tool, and its DOT format is a simple text language for describing graphs like these. For a really quick introduction, check out this article. Finally, if you're interested, you can check out my raw .gv files here and here.
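For a sense of what goes into those files, here's a sketch of emitting a DOT graph from Python. The edges, runtimes, and filename are invented for illustration and aren't taken from the real .gv output:

```python
# Emit a hypothetical workflow as Graphviz DOT; render it with:
#   dot -Tpng workflow.gv -o workflow.png
labels = {
    "ingest":           "ingest\\n30 min",
    "ta_dailyReport":   "ta_dailyReport\\n16 min",
    "ta_monthlyAgg":    "ta_monthlyAgg\\n60 min",
    "ta_monthlyReport": "ta_monthlyReport\\n30 min",
}
edges = [
    ("ingest", "ta_dailyReport"),
    ("ingest", "ta_monthlyAgg"),
    ("ta_monthlyAgg", "ta_monthlyReport"),
]

with open("workflow.gv", "w") as f:
    f.write("digraph workflow {\n")
    for node, label in labels.items():
        f.write('  "%s" [label="%s", shape=box];\n' % (node, label))
    for src, dst in edges:
        f.write('  "%s" -> "%s";\n' % (src, dst))
    f.write("}\n")
```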

Tuesday, January 13, 2009

Is a Square a Rectangle?

A really great breakdown of how to think about "is a square a rectangle" in the OO sense, and about designing with inheritance.
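As a quick sketch of the usual punchline (this example is mine, not from the linked article): if Square inherits from Rectangle, code written against Rectangle's contract breaks when handed a Square.

```python
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def set_width(self, width):
        self.width = width

    def set_height(self, height):
        self.height = height

    def area(self):
        return self.width * self.height


class Square(Rectangle):
    """A square must keep width == height, so its setters are coupled."""
    def __init__(self, side):
        super().__init__(side, side)

    def set_width(self, width):
        self.width = self.height = width

    def set_height(self, height):
        self.width = self.height = height


def stretch(rect):
    # Reasonable expectation for a Rectangle: setting the width
    # leaves the height alone, and vice versa.
    rect.set_width(4)
    rect.set_height(5)
    return rect.area()

print(stretch(Rectangle(2, 3)))  # 20, as expected
print(stretch(Square(2)))        # 25 -- the Square silently changed width too
```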