Today I randomly had an issue where none of my Vagrant machines were booting up. Going into the VirtualBox GUI, I tried to manually start the VM and got the ominous, but somewhat helpful, error code VERR_SUPLIB_OWNER_NOT_ROOT. Thinking back to the stupid things I might have done today, I realized that I had done:
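The original command didn't survive the copy-paste, but it was a recursive chown of the Applications directory to my own user, roughly this (username hypothetical):

```shell
# The ill-advised command: recursively re-own everything under /Applications,
# including VirtualBox's support binaries, which must stay owned by root.
sudo chown -R myuser:staff /Applications/
```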

I did this because there was an application installed as administrator that constantly popped up the “This was downloaded from the internet…” warning, and it was never going away. So I went in and chowned /Applications/ to root:admin and… no luck. Uninstalled and reinstalled VirtualBox… no luck. Finally I happened upon an old bug ticket from years ago that confirmed my suspicions. I simply had to run the following to get everything working again:
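The exact command was also lost, but VERR_SUPLIB_OWNER_NOT_ROOT means VirtualBox's hardened support binaries must be owned by root all the way down, so the fix was along these lines (treat the exact target as an assumption):

```shell
# Restore root ownership on the VirtualBox app bundle itself
sudo chown -R root:admin /Applications/VirtualBox.app
```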

After that, everything works like a charm and I realize that I probably shouldn’t have done the chown -R in the first place.

SQL Server Error User Group or Role Already Exists in the Current Database

This is an issue that I come across frequently enough that I know where in my notes to go look for it whenever it happens, but infrequently enough that I don’t have it memorized. After restoring a database backup and trying to log in, you’ll often see a SQL error like this:
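The error text itself didn't survive the migration, but it's the classic orphaned-user message (the user name will vary):

```
User, group, or role 'myuser' already exists in the current database.
(Microsoft SQL Server, Error: 15023)
```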

If there are ways to fix this using SQL Server Management Studio alone, I haven’t spent the time necessary to find them, but the following SQL statement, when run on the affected database, works to fix your orphaned user:
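The statement was eaten by a content migration; the classic fix is sp_change_users_login with Auto_Fix (database and user names hypothetical; on newer SQL Server versions, ALTER USER … WITH LOGIN is the recommended replacement):

```sql
USE YourRestoredDatabase;
GO

-- Re-link the orphaned database user to the server login of the same name
EXEC sp_change_users_login 'Auto_Fix', 'myuser';
GO
```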

The resulting output should look something like this:
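If the repair was done with sp_change_users_login 'Auto_Fix' (one common approach), the output is roughly:

```
The row for user 'myuser' will be fixed by updating its login link to a login already in existence.
The number of orphaned users fixed by updating users was 1.
The number of orphaned users fixed by adding new logins and then updating users was 0.
```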

Review: Ignite Woo Wishlist & Gift Registry Pro

Not being entirely familiar with all the trappings of the WooCommerce ecosystem, I set about finding a good registry plugin for a friend’s (soon to launch) eCommerce website. Some initial searching led me to the Wishlist & Gift Registry Pro plugin developed by IgniteWoo, a purveyor of many WordPress/WooCommerce plugins.

Limited Functionality

I really wish I had done my due diligence before paying them any money: the plugin provides almost no functionality that isn’t offered by the completely free YITH Wishlist plugin. On top of that, there is no registry functionality offered with this plugin either. It is all about wishlists, with no real customizable functionality.

No Support

After paying Ignite Woo money, I have been completely unable to reach anyone in support to answer a myriad of technical questions. I also had an issue with the license key they provided, and still no one has reached out to me. I’ve contacted them via email, online chat, and Twitter and have had absolutely nothing resolved.

Verdict: AVOID

I typically hate to trash other companies, as there is no guarantee that the experience I have had with them is indicative of how they treat others. However, I have found that the WordPress forums are full of people who have gotten nothing but radio silence from this company. I would highly suggest you avoid their Wishlist & Gift Registry Pro plugin, as it literally provides no additional functionality over the free alternatives. I would also recommend staying away from IgniteWoo in general: their product code is not very clean, their documentation is terrible, and their support is non-existent.

I openly invite one of their representatives to get ahold of me. I’d love to work more closely with them and recant my position. Unfortunately, we are two months in; they still have our money and have provided no worthwhile service to us.

Use Subqueries to Speed Up Count Distinct

This is an awesome post by some thoughtful folks on how you can think about optimizing your queries (in this case, COUNT DISTINCT) for maximum speed.
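The core trick, sketched with a hypothetical schema (not the article's exact example): collapse to distinct pairs in a subquery first, so the expensive distinct work happens over far fewer rows:

```sql
-- Slow: distinct-counting happens across every row of the big log table
SELECT d.name, COUNT(DISTINCT l.user_id)
FROM time_on_site_logs l
JOIN dashboards d ON d.id = l.dashboard_id
GROUP BY d.name;

-- Faster: reduce to distinct (dashboard_id, user_id) pairs first,
-- then the outer query only counts the already-deduplicated rows
SELECT d.name, COUNT(1) AS distinct_users
FROM dashboards d
JOIN (SELECT DISTINCT dashboard_id, user_id
      FROM time_on_site_logs) pairs
  ON d.id = pairs.dashboard_id
GROUP BY d.name;
```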

Read it here

DevOps for the Agency

Managing the Process and Tools of Multivariant Software Development

The Problem

DevOps at an agency or consultancy provides challenges not normally encountered at a software product company. Typical product and platform companies have a narrow band of operating systems and technologies they use, whereas agencies are often developing for every conceivable platform, language, and device. This can make the normal explanations of DevOps seem too server- or ‘IT’-focused, without much help on how to bring order to the chaos. The goal of this document is to explain some of the best practices available to your team(s), give examples of how to use them in real life, and point out some tools that can help.

A normal week in an agency’s development team can include deployments in multiple languages to Windows Server and Linux (of various flavors) running on hosted virtual machines, clustered in the cloud (typically Amazon Web Services, Azure, or Rackspace), or even to bare metal. Without any process this is utter chaos; with the bare minimum of process you can mostly avoid egg-on-the-face public failures. But are your servers secure? Can you deploy your projects to production reliably if your rock-star engineer hits it big with the next Angry Birds and decides to retire to his own personal island? Probably not, yet.

Applying a good process to your current chaos doesn’t have to be that hard or painful. Material such as Visible Ops and the Information Technology Infrastructure Library’s Change Management guidance applies more broadly to change management at an organizational level, including (but not limited to) configuration and hardware changes for servers and other broad topics. It doesn’t cover the intricacies involved in having different project types with vastly different technology stacks, and clients and projects with different management methodologies. Process starting with the Software Development and Quality Assurance teams can radiate outward and eventually encompass every team involved in software development projects at your company, even if it is only a heightened awareness of the process.

Crawl, Walk, Run

A word of note: bite off meaningful chunks of change and institute those before moving on to the next. These can be pushed from the software development team outward; depending on the organizational structure, this would grow to include technical/business analysts, QA, and project management. Start with the processes your team can affect and grow the process change to include more as buy-in grows. Don’t try to go from nothing to everything in a month. Your team will likely wind up creating process documentation and sample artifacts (like deployment checklists), and finally your team members and other teams will need to be trained and informed.

Version Control

If your team is not using a version control system (like Subversion or Git), then stop reading this right now and begin the process of integrating a version control system immediately. Without a version control system, no other steps in regulating code and configuration change are possible or meaningful.

The Environmental Landscape

For the purposes of this conversation we’ll assume there are some basic good practices being followed today, like having development, QA, and production environments. Because of the kind of projects I’ve been working on for the last half decade, we’ve typically had one more environment: staging. This is because production is often (at the very least a slight majority of the time) not under the control of our agency but of the client’s IT or development team. Staging becomes our in-house production and client acceptance environment.

Separation of Environments

It is exceedingly important that each environment (Development, QA, etc.) has its own resources. Environments should not share database tables, but could reasonably share database servers. For instance, Development and QA could use the same server but different databases or schemas, so that the data and content are separated. The same goes for file systems, load balancers, application servers, and the like.

The Development Environment as a Test Bed

The development environment should be the first environment deployed to. A continuous integration tool like Jenkins or Travis is fantastic for automatically building the project, running unit tests, and even deploying to the development environment at regular intervals or upon every version-control check-in. Development servers are for testing integration and server configuration and for flushing out the deployment process. In practice, you should assume that development servers are always running the absolute latest code and are full of in-progress bugs.

Every single change – code, configuration or otherwise – should be tested in this environment before ever moving up the chain to another environment.

Quality Assurance and Testing Environments

QA should be where completed bug fixes get tested by the quality assurance team and passed or sent back for further development. Deployments to QA should come at a much slower pace than to development environments, but could still be as frequent as multiple times a day during periods of rapid development and testing (such as the night before a project is due).

Deployments to QA should come with a build number, version control revision number and a deployment log of all the bug tickets that are being addressed as well as notes on any other material changes.

Material changes will always include server configuration changes, database schema modifications, and even changes to the deployment process itself. Technical leads, project managers, and the QA team should all be made aware and sign off before deployments to this environment. In the absence of a staging environment, the QA environment is also where client review and approval should happen.

Upon final approval all around, this is the time for your team to tag the revision in version control, assign a version number to the build artifact, and save it in ‘escrow’ somewhere. Any additional documentation, such as staging deployment checklists or other process documentation, should be filled out prior to deploying to staging.

Staging as a Mirror (and a Safety Net)

In the agency world, production is often not entirely under your team’s control, which can make it awfully difficult to troubleshoot. When production is a load-balanced, highly scalable, multi-tier environment and the development environment is a simple, single virtual machine hosting the application and database, troubleshooting development does not necessarily fix production. Even if production isn’t locked behind the client’s IT and your team has unfettered access, a bad deployment in production is as bad as it gets. The whole world (or at the very least, the client) will know that you’ve botched the job at the last minute. It’s a terrible feeling if you haven’t had the privilege yet.

Staging should be as close an approximation of production as is feasible. Instead of a 10-node cluster like production, maybe just two VMs will suffice, for example. If production will be in Amazon Web Services, staging should be up there as well. If the persistence layer will be on a separate device, separate it in staging too. Certain things like authentication and third-party services may require staging to be publicly visible, which is a decision your team will have to make. We’re looking for as close as feasible, not necessarily a direct copy. There may need to be some configuration that would have staging use mock services instead of the real thing; these are all things that need to be planned with the team while defining the feature that consumes the services.

Deploying to staging is the final practice before public failure and should be treated just like production, including restricted access and planned deployments and upgrades. It should require approval from the technical lead, QA, and the project manager to verify that all the bugs slated for the current build have been addressed and no regression was detected in the QA environment. This process should be tightly gated and auditable: who deployed it, what was deployed, and when it was deployed should all be recorded for posterity.

This will be the final pre-production deployment and is where client approval and sign off will happen if this environment exists in your process. Upon final sign off in this environment your team will generate the production deployment checklist and any other documentation that needs to be created in addition to the staging documentation that was created for this particular deployment.

Production – The Final Frontier

This is where client relationships can be destroyed in an otherwise healthy project. Nothing will shake your client’s confidence more than a lack of communication or bad production deploys. Bad production deploys and poor overall project delivery will always boil down to a simple fact: poor change management. On some projects the poor change management and communication are the fault of the client’s own IT department; other times they will be the fault of your team. It will always be your team’s problem, however. As much as 80%[1] of the time spent fixing production delivery problems is spent finding the root cause. That means that if proper change management procedures are being followed, your team can have production back up and running up to five times faster than it otherwise would have been.

When it comes time for production delivery, the team should have a well-laid-out battle plan that includes scheduled maintenance times; documentation and notes for every server configuration change, database script, and code change being deployed; and the issue ticket references (the QA release notes will often suffice). If the environment is complicated, with load balancers and multiple nodes, an order of operations should be established, documented, and followed to the letter.

Process Flow

An Auditable Series of Steps

The appropriate amount of undocumented change in any environment is zero. As dials and knobs get twisted in development environments, it is important to record the changes being made and why. Application server and database configuration changes can have fantastic and devastating effects and are just as important as the new code and content being pushed. The most important part of each deployment is that exactly what is being deployed is documented and that someone signs off on the deployment.

Deploying to QA

Going into QA should be a gated step that requires an okay from a technical lead and a QA lead (they may still be testing the last build!), along with some generic release notes. If your team uses a tool like Jira, Trac, or another ticket-tracking system, the release notes for this step may simply be a matter of moving all the fixed tickets to an “In QA” status or something similar.

Deploying to Staging

After QA has passed all the tickets and verified no regression has taken place, the technical lead and project manager should schedule a staging deployment. This should run just like a production deployment. Put things in maintenance mode, restart services, application pool restarts: the whole nine yards. This is your team’s final dress rehearsal and the last chance to validate the deployment process privately. If things don’t go well, document every issue and get a root cause. This will often result in going back to the development environment to test proposed changes and then proceeding through QA and back into staging. With an automated build and deployment system this is an almost painless experience, and the differences between destination environments are nearly immaterial.

After the deployment to staging, QA will want to do a final regression pass in that environment, and any client sign-off will need to happen. Prior to client review, your team will be looking for obvious issues like content and data differences between environments (development and QA may be full of lorem ipsum and fake content, after all), as well as testing that the proposed changes work in staging as they did in QA. This is where each technology and team has its own flavor of doing things. Given the level of QA effort already expended, it would be redundant to spend the same amount of human time in staging; otherwise, why have both staging AND QA environments? This is where things like remote unit and integration tests (like Arquillian for Java) can give a quick ‘green light’ to functionality in each new environment.

After that, version numbers need to be generated, version control source needs to be tagged or branched, and, if possible, the actual binary deployment should be compressed and held as an artifact somewhere. Build/continuous integration systems often have functionality that generates build numbers, tags source, and saves artifacts, all from the user interface. The process doesn’t need to be elaborate, just well documented and strictly enforced.

Deploying to Production

Deploying to production environments should follow the same (documented) steps as deploying to staging environments. If there are differences in configuration or environments they should be explicitly listed in a production deployment checklist that has been prepared with all the changes from the last production release.

Production Deployment Checklist

The production (and staging) deployment checklist should be a combination of two things: a standard deployment checklist that reflects all the steps that should be necessary for every production deployment (like where and how to copy the deployment artifacts), and any additional steps that are needed for this particular deployment based on the documentation from the QA and Staging environment deployments and release notes.

For example, database schema changes won’t necessarily occur every time, but when they do, they should be completely scripted and ready to run; the deployment checklist would say when to run the scripts, which scripts to run, and detail the expected results.

The creation of this document will force technical leads to stop and think about what differences may exist between the staging deployment and this one, as well as any other teams (like the client’s IT team) that will need to be wrapped into the plan. It allows for timing and order of operations and minimizes the chance of exceeding the scheduled maintenance window.

Useful Tools

Tools will vary based upon the core technologies of your company, but if your team is producing a number of projects in various languages, there are many tools that can be strung together to build and deploy everything from .NET and Java projects to Python and PHP. In a follow-up post I will discuss how I’ve been using these tools in real life: automatically spinning up local environments for developers; development, QA, staging, and production server environments; one-click build and deploy; artifact management and source tagging; and a whole host of other automated tasks that can keep the noise to a minimum.

You will want to check back regularly for edits, updates and links to follow ups!


Jenkins

Jenkins is an application that monitors executions of repeated jobs. It can continuously build and test software projects as well as store build artifacts, generate build numbers, and store test results. It also stores project health details and can notify team members when the build breaks.

Apache Ant

Ant is a Java library and command-line tool used for driving build processes that are defined in build files and targets. It is a very versatile tool that can compile software projects, run unit tests, copy (and move) files and do just about anything you would manually do when building a project.


Node

Node has a variety of cross-platform plugins (like SCP and SSH) that can be very useful for scripting automatic remote deployments, increasing repeatability and decreasing maintenance time.


Vagrant

Vagrant aids in creating and configuring lightweight, reproducible, and portable development environments. Get new developers spun up on a project quickly.

Chef and Puppet

Chef and Puppet are automation platforms for server/virtual environments. They allow your team to define the requirements of an environment and automatically set up and configure new machines without (much or any) user intervention. They can be used to document the target environments and manage any changes that need to be made. Both of these tools exist in the same space and are outstanding in their own right; your team will very likely choose either Chef or Puppet, not both.


Docker

Docker is a newer (as of 2013) entrant into the DevOps space that attempts to treat the environment, application code, and configuration as a single artifact (a container) to be deployed. It is being used for everything from automating packaging and deployments to the creation of PaaS environments.


1. Behr, K., Kim, G., and Spafford, G. The Visible Ops Handbook: Implementing ITIL in 4 Practical and Auditable Steps. Information Technology Process Institute.

Installing Tomcat 7 with Apache and mod_jk

Installing Java, Tomcat 7, Apache 2 and mod_jk in Ubuntu 12.04

Why would you need to install Apache and mod_jk to run a Java web application on your server? Well, you don’t have to, but there are a lot of reasons why you would want to: removing :8080 from all the requests, rewriting URLs, handling static assets, performance (load balancing/clustering/CDN/etc.), and a whole host of other reasons. This guide assumes you have a brand new server up and running with nothing really installed yet.

The following is an extremely vanilla setup that I would consider my normal baseline configuration: the bare minimum to get Tomcat and Apache talking successfully. This will mainly allow all your web requests to be funneled through port 80 (instead of the normal Tomcat 8080) and allow static assets that exist outside of your application’s WAR file to be served (think uploaded user images and such).

Install Java

First things first, without Java installed on your server, none of this works.

Go to the Oracle Java Download page and choose the proper JDK for your environment. Remember: A JRE is not enough for running Tomcat or other Java application/container servers.

Follow the instructions for unpacking and installing the JDK to your file system. This example assumes you installed the JDK to /usr/local.
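A minimal sketch of the unpack-and-move step (filenames and version numbers are illustrative; use whatever you downloaded):

```shell
# Unpack the downloaded JDK archive and move it into /usr/local
tar xzf jdk-7u45-linux-x64.tar.gz
sudo mv jdk1.7.0_45 /usr/local/
```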

Install Tomcat

Download the latest Tomcat version, for this post we’ll be using Tomcat 7.

Unpack Tomcat and move the contents to where you want them. For this post, we’ll be using /usr/share/tomcat7
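Roughly like so (the 7.0.x version number is illustrative; grab the current release from tomcat.apache.org):

```shell
# Download, unpack, and relocate Tomcat to /usr/share/tomcat7
wget http://archive.apache.org/dist/tomcat/tomcat-7/v7.0.47/bin/apache-tomcat-7.0.47.tar.gz
tar xzf apache-tomcat-7.0.47.tar.gz
sudo mv apache-tomcat-7.0.47 /usr/share/tomcat7
```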

Install Apache and mod_jk

For Ubuntu, we’ll simply use the apt package manager to install these:
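On Ubuntu 12.04, the relevant packages are apache2 and libapache2-mod-jk:

```shell
sudo apt-get update
sudo apt-get install apache2 libapache2-mod-jk
```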

Configure the default site to use the default AJP worker. For this example, we will be configuring all requests to go to Tomcat

Edit /etc/apache2/sites-available/default:
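A minimal sketch: Ubuntu’s libapache2-mod-jk package ships a default workers.properties that defines a worker named ajp13_worker pointing at localhost:8009, so the site config only needs a JkMount (paths are the 12.04 defaults):

```apache
<VirtualHost *:80>
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www

        # Forward every request to Tomcat via the default AJP worker
        JkMount /* ajp13_worker
</VirtualHost>
```

Make sure Tomcat’s AJP connector on port 8009 is enabled in conf/server.xml, then restart Apache with `sudo service apache2 restart`.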

Server Specific Configurations

These steps are not entirely necessary and can vary from OS to OS depending on how servers in your environment are normally set up.

Update alternatives to point to your Java install:
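Something along these lines (the JDK path and version are assumptions based on the /usr/local install above):

```shell
# Register the new JDK binaries with the alternatives system
sudo update-alternatives --install /usr/bin/java java /usr/local/jdk1.7.0_45/bin/java 100
sudo update-alternatives --install /usr/bin/javac javac /usr/local/jdk1.7.0_45/bin/javac 100
```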

Edit /etc/environment and add the following:
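Roughly the following (again, the JDK path is an assumption; match your actual install location):

```
JAVA_HOME="/usr/local/jdk1.7.0_45"
JRE_HOME="/usr/local/jdk1.7.0_45/jre"
```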

Create a tomcat user for running the server and assign ownership of $TOMCAT_HOME (this is necessary for the Tomcat init script in the next step):
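A sketch of that step; the user name here is an assumption, so match whatever user the init script expects:

```shell
# Create a system user with no login shell, then hand it the Tomcat tree
sudo useradd -r -s /bin/false tomcat7
sudo chown -R tomcat7:tomcat7 /usr/share/tomcat7
```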

Install this tomcat7 init.d startup script that I created for better startup/shutdown of your server to /etc/init.d/tomcat7.

Simulating Slow Connections in OS X or Linux

Simulating Slow and Laggy Connections


Do you want to simulate how it feels to load your site from a mobile connection (if it’s AT&T, just turn off your network for an accurate simulation; I kid, don’t sue. But seriously, AT&T, figure it out.) or from a laggy network? In OS X or Linux, you’ve got everything you need already installed: ipfw (the IP firewall and traffic shaper control utility).

Create a Pipe

Configure a pipe with the appropriate bandwidth (I’ve also added a 200 ms response delay in this example).
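The exact numbers didn’t survive the migration; a plausible configuration simulating a slowish link (the bandwidth value is illustrative):

```shell
# Create pipe 1: cap bandwidth at 300Kbit/s and add 200ms of latency
sudo ipfw pipe 1 config bw 300Kbit/s delay 200ms
```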

Attach the Pipe

In this example we’re going to use port 80, but you can also use port 443 or any other port that you may be testing communication on. Additionally, you can attach the pipe to multiple ports.
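A sketch of the attach step (the rule numbers 100 and 200 are arbitrary):

```shell
# Shape traffic to and from port 80 through pipe 1
sudo ipfw add 100 pipe 1 all from any to any dst-port 80
sudo ipfw add 200 pipe 1 all from any to any src-port 80
```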

That’s it!

Wait a second, you say! Now your network connection is completely throttled and everything is running terribly! You need to delete both ipfw entries and the pipe that you previously created.

Here’s how you undo what you’ve just done:
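Assuming rules were added as numbers 100 and 200 (adjust to whatever `sudo ipfw list` shows):

```shell
# Remove both shaping rules, then destroy the pipe itself
sudo ipfw delete 100
sudo ipfw delete 200
sudo ipfw pipe 1 delete
```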

Subversion: Merging a Branch to Trunk

Merging a Subversion Branch Back to Trunk

So, you’ve successfully branched code and have a great new feature set you need to get back to trunk, but how? Easy!

Get the latest revision number of the branch you want to merge

If you already have the branch checked out, cd to that directory and svn update it.
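With a local working copy (paths hypothetical), update it and then walk the log back to the branch’s creation:

```shell
cd ~/projects/myproject/branches/my-feature-branch
svn update
# --stop-on-copy halts the log at the revision where the branch was created
svn log --stop-on-copy
```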

Alternatively, if you don’t have (or don’t want to have) the branch checked out locally, you can do it remotely.
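The remote form, run against the repository URL directly (URL hypothetical):

```shell
svn log --stop-on-copy https://svn.example.com/repo/branches/my-feature-branch
```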

The last entry returned contains the first revision of that branch; it’ll look something like this:
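Something like this (revision number, author, and message are illustrative); r1234 here is the first number you’ll need:

```
------------------------------------------------------------------------
r1234 | developer | 2013-06-01 10:00:00 -0500 (Sat, 01 Jun 2013) | 1 line

Creating my-feature-branch from trunk
------------------------------------------------------------------------
```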

Check out trunk
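If you don’t have trunk locally yet (URL hypothetical):

```shell
svn checkout https://svn.example.com/repo/trunk trunk
```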

Alternatively, you can just update trunk if you have it checked out

Change to the trunk working directory and do the following:
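Roughly (path hypothetical):

```shell
cd ~/projects/myproject/trunk
svn update
```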

You should see output like this:
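Something like this (the revision number is illustrative):

```
At revision 5678.
```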

This will update your copy of trunk to the most recent revision and tell you the revision you are at. Make note of that number as well (it should say “At revision YYYY”, where YYYY is the second number you’ll need to remember).

Perform the merge

Now you’ve got the details you need: the revision at which the branch started and the most current revision.

Change directory to the trunk
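Using the two revision numbers gathered above (paths, URL, and revisions are all hypothetical):

```shell
cd ~/projects/myproject/trunk
# Merge everything the branch changed between its creation and now into trunk
svn merge -r 1234:5678 https://svn.example.com/repo/branches/my-feature-branch .
```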

Check in the results:
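For example (message and revision numbers illustrative):

```shell
svn commit -m "Merged my-feature-branch (r1234:5678) back into trunk"
```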

Final thoughts

Described above is how to merge a branch back to the trunk; there are, however, many things I didn’t cover. Before you perform this operation, you should review all the changes and potential conflicts, especially if the trunk was also receiving active development.

In a team environment, this would be a great time for a peer/mentor code review as well as an approach review for the bugs/features covered in the branch.

Deploying New Relic on AWS Elastic Beanstalk with Tomcat

It’s relatively straightforward, and New Relic’s docs show you how to do it just fine for Tomcat 6. First, go look over New Relic’s documentation; I’m not going to repeat all the details they show there.
The only difference is the following:
In the Container JVM Command Line Options field, instead of the -javaagent string New Relic recommends, put this in instead:
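The original string was lost in a content migration; the point was that the agent jar lives under the Tomcat 7 home rather than the Tomcat 6 path in New Relic’s docs, so it was along these lines (path assumed):

```
-javaagent:/usr/share/tomcat7/newrelic/newrelic.jar
```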

Setting the Logging Environment Name

You can also set the logging environment name on the same line. After the -javaagent string you just entered, add this:
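New Relic reads the newrelic.environment system property to select a section of newrelic.yml, so it would be something like this (the environment name is illustrative):

```
-Dnewrelic.environment=staging
```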

If you read through the newrelic.yml file, you can see all the different options that switching environments can give to you besides just a different name in the New Relic dashboard.

Encrypting a tar or gz (gzip) File with OpenSSL

When you have sensitive data that you need to transmit but want to make it easy to encrypt and decrypt it, use some standard tools to get the job done!

I recently had an issue where a client was using OS X laptops running an admin panel written in PHP on MAMP, in an environment that may or may not have an internet connection. The problem was that they needed to be able to dump their database data into an encrypted file so that they could send the data off whenever they got a connection (via email, upload, who knows). My initial response was to use gpg to encrypt the file and hand out the keys to the people who would eventually be reading the data.

Turns out, this was going to be a nightmare and I needed something ‘easier’. How about encrypting a tar file with OpenSSL? Bingo! This solution uses utilities that are already on the machine, and no installations need to be performed. This was such a big deal because the laptops running this software will be all over the world, in the hands of users with various levels of technical acumen, and it would be a nightmare to make sure every single laptop had been updated correctly.

Encrypting Your File

tar and gzip the file, then encrypt it using des3 and a secret key:
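A sketch of the pipeline (file names hypothetical); openssl will prompt for the passphrase, or you can script it with `-k <password>`:

```shell
# Tar + gzip the directory and pipe it straight into openssl for encryption
tar czvf - sensitive-data/ | openssl des3 -salt -out sensitive-data.tar.gz.des3
```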

That simple!

Decrypting Your File

Essentially, just call all the commands in the reverse order:
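The mirror image of the encryption pipeline (again, add `-k <password>` to avoid the interactive prompt):

```shell
# Decrypt, then untar/unzip in one pipeline
openssl des3 -d -salt -in sensitive-data.tar.gz.des3 | tar xzvf -
```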

Download the Utility Scripts

Download them!