Extreme HR

Over the next few weeks I’m going to write more about how we hire and how we structure HR at QuickSchools. But first, it needs a name. A label makes it easier to talk about and identify.

I thought about the phrase “Extreme HR”. Not sure if that’s the best name for it. I guess it is a bit extreme. Some parts of it are reasonably common these days – 100% virtual arrangements, a results-oriented work environment, and hourly compensation, for example. But some parts are pretty damn extreme. Resume-blind hiring, for example: I get some pretty strange looks when I tell people I don’t look at resumes when deciding between candidates. Or offering 5 people the same job so I can do an apples-to-apples comparison. Yup, that gets a fair amount of debate.

But we’ve done this “Extreme HR” so many times that it feels totally natural by now. And totally logical, given recent trends in work structure. And of course the kicker – it totally works, and it takes so little time to find great people.

Well, I guess we’ll call it Extreme HR for now, till we come up with a better name!


My favorite web-app stack for medium-sized applications

I have an ongoing goal of creating many subscription web applications in 40 hours each. If 500 Startups’ goal is to fund many startups with micro funding, my goal is to create many small startups with little time.

I try to meet these criteria when creating those web applications:

  • It should reside on as much free infrastructure as possible.
  • It should be ready to scale in case it takes off.
  • It should be revenue generating from day 1.
  • It should be designed to support thousands of simultaneous users.
  • It should require minimum maintenance from me. Once it’s live, it should keep on going.

I have a very strong opinion on the stack. Here’s what I’d go for:

Heroku for the cloud server infrastructure

Absolutely Heroku. I wouldn’t go with anything else right now. It’s a joy to use once you figure it out. It has a free tier that performs very well for sites that aren’t busy yet. It can be scaled with one command. It has one-command add-ons, like Memcache, for when you need to scale. It uses Git for deployments. It monitors the app and brings up a new instance if one crashes.
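As a sketch, the whole lifecycle looks something like this (assuming the Heroku CLI of the era; the app name and add-on name are placeholders, and the commands have changed in newer toolbelt versions):

```shell
heroku create myapp          # provision a free app
git push heroku master       # deploy via Git
heroku ps:scale web=2        # scale with one command
heroku addons:add memcache   # bolt on caching when you need it
```

That’s essentially the entire ops story for a small app.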

Postgres on Heroku

No matter how awesome NoSQL is, I still need the ability to query and sort structured data. I try to use a combination of NoSQL (to store large data) and Postgres (only for light data that needs sorting and querying).

Postgres on Heroku is absolutely the way to go. It’s managed, so I don’t have to do upgrades, maintenance, etc. It has a bunch of common utilities built in, like snapshots, backups, and cloning. I don’t have to worry about the underlying physical hardware. Win.
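For example, taking and listing backups is a couple of commands (this assumes the pgbackups add-on and CLI of the time; the commands have since been replaced):

```shell
heroku addons:add pgbackups   # enable the backups add-on
heroku pgbackups:capture      # take a snapshot of the database
heroku pgbackups              # list existing backups
```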

And I have lots of experience with Postgres and I know it can handle millions of rows with no problems.

Beware of using one-schema-per-customer though. That is Postgres’ Achilles heel. It dies under the weight of hundreds of schemas. Backups and restores take 10 hours or more. Even Heroku says don’t do it!

Amazon DynamoDB

For the NoSQL layer, I like DynamoDB. It is very fast both from my local machine and from Heroku (Heroku runs on Amazon US East – Virginia, FYI). I trust Amazon’s cloud infrastructure: they use DynamoDB internally, and if it powers Amazon.com, it can surely power my little app.

HTML5 + JavaScript single page app

I like creating a single-page app. You completely remove all the headache of passing data back and forth between pages.

Yes you do have to deal with some History/back/forward button issues but only once. You also have to deal with dynamically loading resources if your app gets big. But after that it’s single-page nirvana.

It’s wonderful to be able to code all your front-end stuff purely in JavaScript.

Build server-side services as REST API and use them via your front-end

It’s no more difficult to build your server-side services as REST APIs, so you may as well. This way you can easily open up the API in the future. Plus it makes a lot of sense.
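For instance, a REST endpoint is just as usable from curl as from the front-end’s JavaScript (the app name, paths, and fields below are made up for illustration):

```shell
# the front-end would hit the same URLs via XMLHttpRequest
curl https://myapp.example.com/api/invoices/42
curl -X POST https://myapp.example.com/api/invoices \
     -H "Content-Type: application/json" \
     -d '{"customer": "acme", "amount": 100}'
```

Opening the API to third parties later is then just a matter of documentation and authentication, not re-architecture.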

Server-side language: Java

This is not a hard decision for my personal projects, since I’m very comfortable with Java. However, I can see that Node.js is attractive, because it’s easier to hire JavaScript developers than Java developers.

App Server: Tapestry

This is not the most ideal app server. I was actually looking for something far more lightweight than Tapestry, and for some reason I thought Tapestry was that lightweight solution. But it’s definitely workable for me. Key features: it uses Java; it has a nice injection framework; and its use of Maven and the Maven repository is wonderful. Adding new modules is so easy.


Maven as a build tool is great, especially with Tapestry. Heroku for Java kinda needs it too. I just really like how easy it is to add new modules and then deploy to Heroku.


I especially like how I can push to GitHub to commit and then push to Heroku to deploy.
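The two-remote setup is simple to reproduce. In the sketch below, two local bare repos stand in for GitHub and Heroku so it runs anywhere; in real life the remote URLs would be your GitHub and Heroku Git URLs:

```shell
# stand-ins for the two hosted remotes
git init --bare /tmp/github-standin.git
git init --bare /tmp/heroku-standin.git

# a toy app repo with a Heroku Procfile
mkdir -p /tmp/myapp && cd /tmp/myapp
git init
git config user.email "me@example.com" && git config user.name "Me"
echo "web: java -jar target/app.jar" > Procfile
git add Procfile && git commit -m "initial commit"

# one repo, two remotes: "origin" for history, "heroku" for deploys
git remote add origin /tmp/github-standin.git
git remote add heroku /tmp/heroku-standin.git

git push origin master   # back up history to "GitHub"
git push heroku master   # "deploy" to "Heroku"
```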


I really, really like the above web app stack. Cost effective, time effective, and ready to scale.

Tracking server configuration files with GitHub

I’m really digging Git and GitHub. It’s simple to set up, and GitHub provides a great hosted service.

So I wondered if it would be possible to commit configuration files from production and development servers onto GitHub. This way, we’d never get confused over changes, and if any of the servers were to die on us, we’d have the configuration files safe and sound. And if it were to work the way I thought it could, it would be as simple as adding any file, anywhere on the server, and committing it.

Alas, after a quick search on the web, the message I got was this: Git isn’t designed to track server configuration files, and I should use one of the specialized configuration management tools out there. Git isn’t appropriate because:

  • It tracks all the contents in a working directory. It’s not great at cherry picking files from around the file system.
  • When you do a pull, you have to pull all the contents at once.

But darn it, I don’t want to have to install and learn a configuration management tool. And I couldn’t find a hosted solution that was dead simple to use.

It took me a few days but finally the solution dawned on me. And, like most solutions that emerge only after your subconscious has a go at it, it IS dead simple. And yes, I can use Git. Score.

The idea is to create a central location where all your configuration files can be copied to, complete with directory structure. You then manage that central location as a normal git repository. The key is a script which will copy any configuration file to the git repo.

Here are the steps.

1. Create an empty repo on GitHub

I find it’s always easiest to create an empty repo on GitHub, and then to clone that.

2. Clone the empty repo to your server

You can do this on all your servers.

git clone {your github repo URL}

3. Create the script file, track.sh

I’m using Linux, so modify as appropriate.

#!/bin/sh
# Copy the given file into the git repo, preserving hostname and path.
mkdir -p /home/maestro/{your git repo name}/`hostname``pwd`/
cp "$1" /home/maestro/{your git repo name}/`hostname``pwd`/"$1"

This script file will copy whatever file you want to track to the git repo directory. It will insert the hostname so that all the files are separated by server.

4. Start tracking files!

To track a file, you first change to the directory of interest and then you run the script on the target file.

For example, let’s say I want to track my Postgres configuration file.

cd /var/lib/pgsql/data
track.sh postgresql.conf

Voila! Now the configuration file is in the git repo, and you can commit and push that as normal.
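To make the resulting layout concrete, here’s a local dry run of the script’s copy logic (the repo path, file, and contents are illustrative, with the destination in a variable instead of hard-coded):

```shell
# stand-in for the cloned git repo directory
REPO=/tmp/config-repo
mkdir -p "$REPO"

# a pretend config file in the directory of interest
cd /tmp
echo "max_connections = 100" > postgresql.conf

# same copy logic as track.sh: repo + hostname + current path
mkdir -p "$REPO/`hostname``pwd`"
cp postgresql.conf "$REPO/`hostname``pwd`/postgresql.conf"

ls "$REPO/`hostname`/tmp"   # → postgresql.conf
```

Each server’s files end up under its own hostname, so one repo can safely hold configs from every machine.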

Tracking server configuration files via GitHub makes me happy.

My experience with Tropo has been good

I’m happy to say we’ve gone live with using Tropo for our international, outbound SMS messages.

In the early days, all I heard was Twilio, with the occasional mention of Tropo. Now that Twilio doesn’t support international SMS, I had to reinvestigate SMS options.

At first I tried Nexmo, but it didn’t work immediately, and I gave up right away and tried Tropo. True, I may have done something wrong with Nexmo, so I can’t say much about it.

In fact, even with Tropo my first attempt didn’t go well, but I decided to contact support and they helped me out.

After that, the experience has been great. International SMS seems solid. They say they have years and years of experience doing this, so that makes sense.

Using Tropo to send SMS messages

(Special thanks to Mark Headd, developer evangelist from Tropo, for helping me out).

We’ve been using Twilio for a while now to make phone calls and send text messages. However, they removed their international SMS support, and I started looking around for another SMS provider.

So naturally I tried out Tropo, since I come across them once in a while. I did manage to get SMSes out, though there are some traps. Their API is also slightly confusing, but it works in the end.

Here are the steps, and some things to look out for, if you want to just get an SMS out.

Create a Tropo Scripting application. Associate it with a hosted JavaScript file that looks like this:

message(theMessage, {to:numberToDial, network:'sms'})

I accidentally copied a Groovy example from the Tropo blog rather than a JavaScript example, and that totally threw me off. In that example, the message parameter is called “message”, which of course interferes with the message() function. And the associative array is specified using square brackets. Yes, I know, not JavaScript, but when you’re in “I want to copy the example and make this work” mode, the brain turns off a little 🙂

Add a phone number to that Tropo application.

You can now send a message with a web browser:

http://api.tropo.com/1.0/sessions?action=create&token=<YOUR TOKEN>&theMessage=Hello&numberToDial=+1<YOUR NUMBER>

Note: The phone number must have NO spaces! It doesn’t work with spaces. Weird.

The Java library also works (again, no spaces!):

// Requires the Tropo WebAPI Java library (Tropo and TropoLaunchResult come from it).
import java.util.HashMap;
import java.util.Map;

String token = "<YOUR TOKEN>";

Tropo tropo = new Tropo();
Map<String, String> params = new HashMap<String, String>();

params.put("numberToDial", "<YOUR NUMBER NO SPACES>");
params.put("theMessage", "Test from Tropo, Java. v5");
TropoLaunchResult result = tropo.launchSession(token, params);

iMovie imports HDV but plays it “slowly”

This weekend, we have a big family function (a family concert, actually), and dad tested out his HDV camera to make sure everything was working correctly.

It wasn’t.

Footage he captured from his Sony HDV camera (A PAL camera) would get imported into iMovie incorrectly. During import, things appear correct. But once the import completes, the length of the clip displayed suddenly becomes longer by about 20%, and playing back the clip shows that the playback is slower by about 20% and the sound is of lower pitch (consistent with slower playback).

I was convinced this was caused by some kind of NTSC/PAL framerate issue, since the two differ by about 20%. I took the imported .mov clip to my Windows PC, ran GSpot on it, and sure enough, the frame rate was 25 fps. So why was iMovie playing the video incorrectly?

I attribute it to some bug in iMovie 6. Dad has refused to upgrade to iMovie 11, but I finally convinced him and he will do it next week.

Meanwhile, I managed to find a workaround for the problem: simply copy the clips out of iMovie into a folder, then copy them back… and voila, the clips have the proper length and playback speed.

Disappointing experience with s3rsync.com

I’ve just signed up for Amazon Web Services S3 cloud storage solution. I need a place to store my large backup files, and iBackup.com is costing me tons of money.

I wanted a drop-in solution for rsync, which is what’s working smoothly right now with iBackup.com. I searched online for solutions, and s3rsync.com seemed to be a good one.

The product does work… but it certainly was not a smooth experience, and ultimately I can’t achieve my end goal.

What I found:

  • I feel like I’m using a product from an amateurish company, or from a company that’s brand new and not yet mature.
  • For example: on the website, it says you can get up-to-date usage information. I expected a portal I could log in to from s3rsync.com. No such luck: first, I had to access a website on port 8080, which not every firewall allows through. Second, the login uses browser authentication rather than an HTML login page. As they say, that’s SO nineties. Finally, the “portal” turned out to be just an FTP site. Yikes.
  • All of which would have been fine, but alas, I discovered that the file in S3 appears as a TAR file named with the date of the transaction. It does NOT appear as your original directory structure, so I can’t download individual files etc. using the AWS console. To be fair, s3rsync did mention this in their FAQ, but I guess I had to see it to understand it.
  • Finally, they require that I pay USD 20 up front, before I even know whether it will work for me, i.e. ZERO ability to try the product. As it is, I knew right away after testing that it wouldn’t work, but I’m USD 20 down. Sucks.