
This is the first meetup I’ve attended in probably 18 months. The venue changed to The Founder’s Hub about a year ago, which, considering it’s one of those hipster startup ‘incubators’, isn’t a bad place. They’ve got plenty of retro ornaments from the 90s dotted around, including this baby here:


What normally happens at a Unified Diff meetup is two or three guys each do a 30-minute presentation on something related to DevOps. Sometimes we share clever programming tips, sometimes we learn about a new programming library, sometimes we look at a new programming language altogether, and sometimes we learn novel ways of solving engineering problems. Last night’s event was all about Amazon Web Services, with sessions from developers who already build and maintain things in Amazon’s cloud, showing the processes they’ve adopted through trial and error for running large-scale virtual systems.

I’m undecided whether to sign up for the AWS developer trial before my latest project (the case management and malware tracking system) is locally tested and a solid plan for migrating it to AWS is in place.
That said, I provisioned a couple of CentOS-based LAMP VMs for American clients (also on AWS) last summer. I’m also (fingers crossed) about to join a DevOps team that’s working on a similar (but much larger) project to mine, and I’m hoping to learn a considerable amount from them over the next 18 months.

The following are some notes I made, which might provide some handy pointers later on.

Ansible and AWS
The first talk was about setting up Virtual Private Cloud (VPC) systems using something called ‘Ansible’, plus another tool called ‘Vagrant’ for local VM development and provisioning. The point of this is that, if you’re going to be doing a lot of provisioning, you want a solid process that produces reproducible results – ideally automated. Eucalyptus was also suggested for local testing before deployment, and I know Linux boxes can be made to simulate the AWS environment pretty fully, so that’s another thing I could try.

Instead of manually assembling VM images when needed, the idea is to have a collection of modules do the work (listed in an ‘includes’ file), with an Ansible script running them in sequence to generate the image. The Ansible script here mainly defined the network setup of the VPC, and in last night’s demo the provisioning completed in less than five minutes.
Another advantage is that Ansible scripts can double as dynamic documentation, as they’re human-readable (see the Wikipedia entry).
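The playbook itself wasn’t shared, but the network setup it defined (VPC, subnet, internet gateway) maps onto a handful of EC2 API calls. Here’s a rough boto3 sketch of the same steps – the CIDR ranges, region and function names are my own placeholders, not anything from the talk:

```python
# Rough sketch of the VPC network setup the demo's Ansible script defined,
# expressed as direct boto3 calls. CIDR blocks and region are placeholders.

VPC_CIDR = "10.0.0.0/16"     # hypothetical address range for the VPC
SUBNET_CIDR = "10.0.1.0/24"  # hypothetical public subnet within it

def network_spec():
    """Declarative description of the network, akin to the playbook's variables."""
    return {
        "vpc": {"CidrBlock": VPC_CIDR},
        "subnet": {"CidrBlock": SUBNET_CIDR},
    }

def provision(region="eu-west-1"):
    """Create the VPC, a subnet and an internet gateway, then attach the gateway."""
    import boto3  # imported here so network_spec() is usable without the AWS SDK

    ec2 = boto3.client("ec2", region_name=region)
    spec = network_spec()
    vpc_id = ec2.create_vpc(**spec["vpc"])["Vpc"]["VpcId"]
    subnet_id = ec2.create_subnet(VpcId=vpc_id, **spec["subnet"])["Subnet"]["SubnetId"]
    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
    return vpc_id, subnet_id, igw_id
```

The Ansible version wins on the documentation front, of course – the playbook reads as a description of the network, whereas this is imperative code.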

The BBC VPC, Performance Monitoring and Admin Stuff
Next up were a couple of developers from the BBC, giving us an insight into how the corporation’s services are managed as a Virtual Private Cloud. Their solution was a chain of ‘microservices’ on AWS, processing data from multiple sources.
Most of this talk was about the developers’ experience using tools like CloudWatch, a dashboard, custom metrics and Zenoss for service monitoring. I’ve seen around 12 highly skilled network operations people working long shifts to handle incidents on a similar network, but the devs here managed to automate most of that and reduce the workload to something manageable between them.
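They didn’t go into which metrics they actually push, but getting a custom metric into CloudWatch from code is a single `put_metric_data` call with boto3. A sketch – the namespace, metric name and dimensions below are invented for illustration:

```python
# Sketch of publishing a custom CloudWatch metric with boto3.
# Namespace, metric name and dimensions are hypothetical examples.

def queue_depth_metric(depth, service="ingest-worker"):
    """Build the metric datum; kept separate so it can be inspected and tested."""
    return {
        "MetricName": "QueueDepth",
        "Dimensions": [{"Name": "Service", "Value": service}],
        "Value": float(depth),
        "Unit": "Count",
    }

def publish(depth, region="eu-west-1"):
    """Push one datum to CloudWatch under a made-up namespace."""
    import boto3  # imported here so the builder works without the AWS SDK

    cw = boto3.client("cloudwatch", region_name=region)
    cw.put_metric_data(Namespace="MicroServices", MetricData=[queue_depth_metric(depth)])
```

Once the datum is in CloudWatch you can hang alarms and dashboard graphs off it, which is presumably how they cut out most of the manual incident watching.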

Immutable infrastructure
The third talk presented yet another way of reducing the number of infrastructure-related bugs and the incident-handling workload. The idea is called ‘Immutable Infrastructure’, and it’s based on the theory that it’s generally better to create and scrap VMs than to modify them. Modifying VMs leaves a VPC in an unknown state, since the tendency is to apply hacks that remain undocumented and change behaviour unpredictably.
The concept originated with Chad Fowler, who seems to have taken the idea from unit and integration testing methods – software usually remains static and unchanging, so its state is always known under normal operating conditions. The same should be true for VMs with fixed configurations. Since the infrastructure is unchanging, automation becomes a simple matter of deciding whether to leave a VM untouched or delete it entirely.
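That keep-or-delete decision boils down to a tiny rule: compare each running VM’s image against the currently blessed one, and anything that doesn’t match gets scrapped and recreated rather than patched. A sketch, with the instance and image IDs made up:

```python
# Immutable-infrastructure reconciliation sketch: a VM is never modified in
# place; it either matches the desired image or it is scheduled for replacement.

def reconcile(instances, desired_image):
    """Partition running instances into those to keep and those to replace.

    `instances` is a list of (instance_id, image_id) pairs.
    """
    keep, replace = [], []
    for instance_id, image_id in instances:
        (keep if image_id == desired_image else replace).append(instance_id)
    return keep, replace

running = [("i-01", "ami-v2"), ("i-02", "ami-v1"), ("i-03", "ami-v2")]
keep, replace = reconcile(running, desired_image="ami-v2")
# i-02 runs the old image, so it gets terminated and relaunched, never patched
```

There’s no third category of “mostly right, just needs a quick fix” – which is exactly the category that puts a VPC into an unknown state.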

And this is the lifecycle used for applying that to VPCs:
* Local development: Using Ansible and building on a base image.
* Testing
* Demo
* Staging
* Production
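The point of the lifecycle above is that the same image moves through every stage unchanged – ‘promotion’ just records which environment an image ID has reached, and never rebuilds the artefact. A toy sketch (the image ID is a made-up placeholder):

```python
# Toy sketch of promoting one immutable image through the lifecycle stages.
# The image ID used below is a made-up placeholder.

STAGES = ["local", "testing", "demo", "staging", "production"]

def promote(deployments, image_id):
    """Advance image_id to the next stage; the image itself is never rebuilt."""
    current = deployments.get(image_id, -1)
    if current + 1 >= len(STAGES):
        raise ValueError("already in production")
    deployments[image_id] = current + 1
    return STAGES[current + 1]

deployments = {}
promote(deployments, "ami-example-1")  # first promotion lands in "local"
```

If a stage finds a problem, you don’t fix the image – you go back to local development, build a new one, and start the walk again.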