Amazon Web Services

To say there was a lot of interest in the first Amazon Web Services (AWS) events here in Australia would be an understatement. How many vendors could put on an event without a glamorous new product launch, strategy update or market-specific announcement and still pull in close to 1,000 attendees in both Sydney and Melbourne? I guess that's the difference between a market maker and a market follower. There's no disputing AWS is the most powerful player in cloud computing, and when they talk, everyone wants to listen. I presented a customer case study at the AWS Melbourne event, giving an overview of our cloud strategy and the key role AWS plays in delivering it. I've attached my slides to this post; here's a summary to provide some context.

As a growth business we are scaling rapidly, and IT has grown with it: a 50% increase in our development headcount over the past 12 months. To be productive, these developers need to test their code in an environment that looks, feels and behaves like production, and we need a highly efficient deployment pipeline to move code from developers' brains to our sites as fast as possible. As an ops manager I want my resources (dollars and people) focused on optimising our production environments, not on dev/test.

To address this we migrated our dev/test environments from on-premises infrastructure to the AWS cloud and developed a deployment pipeline that enables push-button application deployment. Our deployment toolset leverages the Fog Ruby library, which lets you control a number of cloud services through a unified API, and it hooks into our Chef and Gitorious artefact repositories. Now any developer can push a button and get an end-to-end environment to test their code against, and, from a management perspective, I'm confident our deployment toolchain is cloud agnostic – if we decided to switch to Rackspace tomorrow we wouldn't need to redevelop the tools and processes.
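The cloud-agnostic idea can be sketched in a few lines of plain Ruby. This is an illustration of the design, not our actual toolchain, and the provider classes and method names here are invented for the example – in practice Fog's unified API plays this role:

```ruby
# Minimal sketch of a cloud-agnostic deployment layer, in the spirit of
# Fog's unified API. Provider classes and method names are illustrative.

class CloudProvider
  def launch_server(name)
    raise NotImplementedError
  end
end

class AwsProvider < CloudProvider
  def launch_server(name)
    # Real tooling would call the AWS API here (e.g. via Fog::Compute)
    "aws:#{name}"
  end
end

class RackspaceProvider < CloudProvider
  def launch_server(name)
    "rackspace:#{name}"
  end
end

PROVIDERS = { 'aws' => AwsProvider, 'rackspace' => RackspaceProvider }

# Deployment code depends only on the common interface, so switching
# clouds is a configuration change, not a rewrite.
def deploy(provider_name, app)
  provider = PROVIDERS.fetch(provider_name).new
  provider.launch_server("#{app}-test")
end

puts deploy('aws', 'myapp')        # => aws:myapp-test
puts deploy('rackspace', 'myapp')  # => rackspace:myapp-test
```

The point of the pattern is that everything above `deploy` is provider plumbing; the deployment pipeline itself only ever talks to the common interface.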

We recently invested in VMware's vSphere and vCloud products to build a 'private cloud' across our global DCs. We want to be able to deploy code to AWS or to our VMware environments using the same deployment processes. Unfortunately the Fog library didn't support vCloud, so we decided to add that support ourselves – check out our tech blog for details on how to use it.

With the recent establishment of an Australian sales presence and the AWS cloud tour events, there has been a lot of speculation that an Australian availability zone is imminent. The timing of an article published in The Australian on the day of the Melbourne cloud tour event added even more fuel to the fire. Whether there's any truth to the article or not, the general consensus within the industry is that it's a matter of when, not if, AWS will have a physical presence in Australia. From my perspective this will have a huge impact on the hosting industry here: AWS would become the first heavyweight global cloud provider to land on Aussie shores, taking away many of the risk and data-jurisdiction concerns that prevent large corporates and government from embracing cloud services today. It would also put significant cost pressure on some of the more established Australian hosting providers and drive a more rapid rate of innovation across the industry. The biggest opportunity I see, however, is as the catalyst for a new era of online entrepreneurship within Australia. There are plenty of Aussie success stories – think Atlassian – that underline the tech-savvy entrepreneurial culture in this country, yet getting ideas off the ground is a significant challenge in a high-cost, inflexible hosting market. Local players like Ninefold, Interactive and Telstra are all making moves in this space, but the arrival of AWS would turn the industry on its head.

Judging by the 2,000 people who attended the AWS cloud tour events in Australia, I'm not the only one who sees this as something big.

Splunk Live Australia – My presentation

I've been at the Splunk Live events in Melbourne and Sydney this week talking about how Splunk adds value to our business. The sessions have been well received, and it's been great to hear what Splunk have coming down the pipeline and how innovative IT shops are applying Splunk to a wide range of business and IT issues. Here are my slides from the session:

Splunk Live April 2011

Splunk Live and re-entering the blogosphere

Back in January I set myself a goal of blogging at least once a week, and I did a pretty good job until late Feb. Like many of us, I've been flat out in the day job and not finding the time, or energy, for reflection and discussion. Earlier this week over coffee I was discussing the importance of making time to stay abreast of the industry and to network with fellow professionals. My coffee date referenced a cartoon set in medieval times where a king stands at the gates of his castle fighting off an invading horde, swinging his sword at a never-ending army. Behind him is a man holding a Gatling gun saying "do you have a minute to talk?", and the king, not looking behind him, responds "I don't have time for a meeting". I've spent the last few months fighting a few battles, and it's left me wondering how many Gatling-gun-bearing meeting requestors I've declined.

Today I gave a presentation at Splunk Live in Melbourne (I'll also be at the Sydney event later this week) covering three use cases for Splunk at our company. The presentation reminded me of the importance of active community involvement, networking, and staying abreast of the market. Over the course of my presentation today I think I may have handed a few people their own Gatling guns, and I certainly saw some powerful weapons coming down the Splunk production line. Most importantly, though, it was awesome talking to fellow IT professionals: sharing war stories, hearing what they are working on, the problems they are facing, and the innovative solutions they have come up with. So tonight I'm inspired to get back into the blogosphere 🙂 I can't promise I'll stick to my weekly blogging aspirations, but I will do my best.

I'll post my slides from Splunk Live later this week.

Building the ops team brand

I've said it before and I'll say it again: I'm so excited about the ops team we are building. Our latest recruit joined us today and brings with him bags of technical knowledge and a collaborative, warm, fun personality. We're now at the point where technical skills are taken as a given; it's the passion for technology, combined with team-building and collaboration skills, that makes the difference. Finding individuals who possess both the technical and interpersonal skills we are looking for is a challenge, and we can't fill the roles we need through referrals and agencies alone.

One of my objectives this year is to build the ops team brand. By blogging about some of the amazing technologies and projects we are working with, by tweeting when we're having issues and how we fixed them, and by contributing to the open source community, we can hopefully reach talented engineers and make them as excited about our ops team as I am.

We have a first cut of our engineering blog ready for review, and I'm hoping the Twitter feed will follow shortly. Our approach is to use a technology like Planet to aggregate existing blogs rather than trying to manufacture something just for the 'external' world. We will publish some basic guidelines on usage and leave our teams to get on with it – we don't want to moderate content. As soon as it's ready to roll I'll share a link here, and I'd love to get some feedback. I should also add that one of our most talented developers has been pushing for an engineering blog for ages and was the first to post! He was also the first person to comment on my blog, so tip of the hat to Mujtaba Hussain – the IT world needs more people like him! Check out his personal site and blog.

Innovative products we use – work in progress! (edited Jan 12th)

I spent a number of years working for large corporates where the focus for IT is maintaining big enterprise applications like ERPs. There's nothing wrong with SAP, Oracle etc., but since joining my current company I've been blown away by the innovation and passion within the open source community and the ecosystem of companies that provide supporting tools. Here are a few of the tools we are using within IT Operations; I'll keep adding to this list, but here are a couple to get started:

  • New Relic – software-as-a-service (SaaS) application performance management tool. Easy to deploy, it enables us to drill into application performance across all levels of the stack (VM, app server, DB) and troubleshoot individual transactions, database calls etc. Supports Java, PHP and .NET. The team at New Relic are first class and super responsive. There's a free version if you want to try it out.
  • Splunk – Splunk indexes all the log data generated by your applications, servers and devices; you can then search, report and monitor on that data in real time. Incredibly flexible, with a fantastic reporting interface, it's a powerful tool. The licensing model is painful (based on the volume of log data indexed daily), but that aside it's a great product. Like New Relic, it has a free version if you want to try it out. Remora, the reseller we deal with in Australia, are first class.
  • Akamai – not really in keeping with the others on this list, but worthy of note. Their CDN services help us deliver property images super fast to our consumers, and they have some amazingly innovative products. Their Dynamic Site Acceleration (DSA) solution lets us direct all our traffic through the Akamai network, speeding up our site, especially for overseas users. In addition we benefit from SureRoute, which will automatically divert traffic between our DCs in the event of a failure. Awesome!
  • Soasta – cloud-based load and performance testing. Soasta generate large quantities of user-like traffic from public clouds such as EC2 and Azure. We create test profiles that simulate real users, crank up the load and sniff out bottlenecks in our environment. While we've had some issues with the service, it's been incredibly useful, helping us see performance issues that only occur under high load. We've been able to address these issues prior to go-live and avoid a certain sev 1.
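The core idea behind that kind of load testing can be sketched in a few lines of plain Ruby. This is an illustration of the concept only, not Soasta's tooling: run many concurrent "users", measure each simulated request's latency, and summarise the results to find where response times degrade. The request itself is injected as a block so the sketch stays self-contained.

```ruby
# Toy load generator: N threads each fire a number of simulated requests,
# recording per-request latency, then we summarise mean and p95.

def run_load(threads:, requests_per_thread:, &request)
  latencies = Queue.new
  workers = threads.times.map do
    Thread.new do
      requests_per_thread.times do
        started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
        request.call
        latencies << (Process.clock_gettime(Process::CLOCK_MONOTONIC) - started)
      end
    end
  end
  workers.each(&:join)

  samples = []
  samples << latencies.pop until latencies.empty?
  sorted = samples.sort
  {
    requests: samples.size,
    mean_s:   samples.sum / samples.size,
    p95_s:    sorted[(samples.size * 0.95).floor.clamp(0, samples.size - 1)]
  }
end

# Simulate 10 users each making 20 requests against a fake ~1ms endpoint;
# in a real test the block would issue an HTTP request instead of sleeping.
stats = run_load(threads: 10, requests_per_thread: 20) { sleep 0.001 }
puts "requests=#{stats[:requests]} mean=#{(stats[:mean_s] * 1000).round(1)}ms"
```

Commercial tools add the hard parts – realistic user journeys, traffic generated from many cloud regions, and correlation with server-side metrics – but the measurement loop is essentially this.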