Development environments are a big part of every team’s work. The first time a new person joins a team, they are (hopefully) pointed to a README listing everything they need to set up. This approach works for small teams or teams that do not grow often. It does not scale for teams undergoing rapid growth or when there is not enough time to handhold people through setup. It is also error prone because each person repeats the steps manually every time.

I faced this problem in my day job. Our team is polyglot, but we ship 90% of our work with Docker. We also have a complex internal toolchain that every engineer uses day to day. Our setup was a mix of unique snowflake machines configured by hand and a few automated bits. All the while the company was bringing in new engineers every couple of weeks, with the intent to grow the entire engineering department 3-4x. It was my responsibility to ensure all members of the engineering team had access to a standardized environment and that new team members could get going quickly.

I had already been working on something similar for Slashdeploy at a smaller scale. I knew the Slashdeploy setup would scale to meet the requirements at the day job. This post documents the solution I put together for Slashdeploy and shares the source.

vagrant & vagrant-workstation

My solution is Vagrant with a layer on top. Vagrant is a wonderful tool to keep in the toolbox. VMs are a great way to encapsulate complex systems, and they are easily built with automation tools. I opted for Ansible for configuration management inside the VM. Ansible runs inside the VM, so there is no need to maintain an Ansible toolchain on the host system.

Vagrant manages shared folders to make the user’s source code (and other relevant directories) available in the VM. Shared file systems are a nice bridge between the host and guest systems. This lets the VM encapsulate the toolchain required to build, test, and deploy the software, while the host system stays configured exactly how the user desires. This hits a sweet spot for many developers: they keep editing and working with their own editor, dotfiles, shell, and any other workflow optimizations they’ve created.

Vagrant provides an excellent abstraction for managing single projects (e.g. each project has its own Vagrantfile). This abstraction broke down for us because we use Docker for 90% of our work. We have many different code repos that all share the same general toolchain, so it’s infeasible to create a Vagrant VM for every single project. The solution is to invert the problem: provide a single VM that includes the toolchain and mount all projects via a shared file system.

vagrant-workstation solves this problem. It provides easy access to running project-specific commands like make or docker build inside the VM and inside the correct directory. Here’s an example. Assume you keep all code in ~/work on your host system and your current directory is ~/work/project-a. When you run vagrant-workstation run make test, vagrant-workstation sees that you’re in ~/work/project-a and runs make test in the matching directory inside the VM. Now you notice a problem in ~/work/project-b, so you change directory and run its test suite the same way. vagrant-workstation also provides vagrant-workstation exec for commands that are not project specific. This allows us to invoke our internal tools (which are not project dependent) from any directory on the host system.
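
In shell terms, the workflow looks roughly like this (project names are illustrative, and internal-tool stands in for any of our internal commands):

# Host system: all code lives under ~/work, which is shared into the VM
cd ~/work/project-a
vagrant-workstation run make test   # runs make test in project-a inside the VM

cd ~/work/project-b
vagrant-workstation run make test   # same command, now runs against project-b

# Not tied to any project, so it works from any directory:
vagrant-workstation exec internal-tool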

Practical Implementation

The Slashdeploy workstation git repo contains everything required to get going. A few important bits:

  1. vagrant-workstation committed as a submodule, so there is never any question about which version of vagrant-workstation the project requires (pinned as shown after this list).
  2. The slashdeploy-workstation command for all team-specific functionality.
  3. script/host-check and script/configure-guest. I’ll talk more about these later.
  4. The ansible playbooks to configure the guest VM.
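
Pinning vagrant-workstation as a submodule is a one-time step. Assuming an illustrative upstream URL, it looks like:

# Pin vagrant-workstation at a known commit (URL is illustrative)
git submodule add https://github.com/example/vagrant-workstation.git vagrant-workstation
git commit -m "Pin vagrant-workstation"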

The slashdeploy-workstation command wraps the necessary calls. Using a wrapper command provides a few key benefits (a sketch of the wrapper follows the list):

  • Sets the WORKSTATION_NAME environment variable so slashdeploy-workstation may be invoked from anywhere on the host system. vagrant-workstation supports multiple VMs, so this command always targets this specific one.
  • Allows layering team-specific requirements on top of vagrant-workstation/vagrant.
  • Provides all users the same shortcuts and conveniences without each person making their own shell-specific changes. Users are encouraged to alias the command to something short. I use sd.
  • Creates subcommands for exec'ing commands from anywhere.
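
Here is a minimal sketch of such a wrapper, assuming vagrant-workstation reads WORKSTATION_NAME to pick the target VM. The test shortcut is illustrative; see the real slashdeploy-workstation source for the full version.

#!/usr/bin/env bash
# Minimal wrapper sketch. WORKSTATION_NAME targets our specific VM;
# the "test" shortcut stands in for team-specific subcommands.

set -euo pipefail

export WORKSTATION_NAME="slashdeploy"

case "${1:-}" in
	test)
		# Team shortcut: run the current project's test suite in the VM
		vagrant-workstation run make test
		;;
	*)
		# Everything else passes straight through
		vagrant-workstation "$@"
		;;
esac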

script/host-check and script/configure-guest are the most interesting parts. script/host-check runs on the host system and verifies prerequisites (such as Vagrant being installed, the proper environment variables being set, and so on). script/configure-guest handles per-user configuration that needs to happen in the VM. slashdeploy-workstation provision reads all environment variables prefixed with SLASHDEPLOY. These values are written to a file accessible in the VM. script/configure-guest sources that file, then uses the values to do user-specific configuration.
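
For illustration, a host-check along these lines might look like the following sketch (the exact checks in the real script differ):

#!/usr/bin/env bash
# Sketch of a host prerequisite check. It fails fast with a hint instead
# of letting provisioning break halfway through. Checks are examples only.

set -euo pipefail

errors=0

if ! command -v vagrant > /dev/null 2>&1; then
	echo "vagrant not found on PATH; install it first" >&2
	errors=$((errors + 1))
fi

for var in SLASHDEPLOY_AWS_ACCESS_KEY_ID SLASHDEPLOY_AWS_SECRET_ACCESS_KEY; do
	if [ -z "${!var:-}" ]; then
		echo "${var} is not set; export it before provisioning" >&2
		errors=$((errors + 1))
	fi
done

# Exit non-zero if anything is missing
[ "${errors}" -eq 0 ]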

Here’s an example of how per-user AWS keys work. The user exports SLASHDEPLOY_AWS_ACCESS_KEY_ID and SLASHDEPLOY_AWS_SECRET_ACCESS_KEY on their host system, then runs slashdeploy-workstation provision, which runs script/configure-guest.
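
On the host, that flow looks like this (key values elided):

export SLASHDEPLOY_AWS_ACCESS_KEY_ID=...
export SLASHDEPLOY_AWS_SECRET_ACCESS_KEY=...
slashdeploy-workstation provision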

Here is the configure function called during the provision command. See the complete source for the full slashdeploy-workstation file.

# Capture all SLASHDEPLOY-prefixed environment variables in a file
# that will be accessible in the VM. Then run a script inside the VM
# to configure the guest. This speeds up the process drastically because
# there is no need to load vagrant for every single command.
configure() {
	local line key value

	rm -f "${workstation_root}/.host_env"

	while read -r line; do
		# Requote each environment variable. Use -f 2- so values
		# that themselves contain '=' are kept intact.
		key="$(echo "${line}" | cut -d '=' -f 1)"
		value="$(echo "${line}" | cut -d '=' -f 2-)"

		echo "export ${key}='${value}'" >> "${workstation_root}/.host_env"
	done < <(env | grep SLASHDEPLOY)

	"${workstation_bin}" exec "/vagrant/script/configure-guest /vagrant/.host_env"
	rm "${workstation_root}/.host_env"
}

Now for script/configure-guest:

# Store the per-user keys in a dedicated AWS CLI profile inside the VM,
# then make that profile the default for new shells.
configure_aws() {
	aws configure set "profile.slashdeploy-internal.aws_access_key_id" "${SLASHDEPLOY_AWS_ACCESS_KEY_ID}"
	aws configure set "profile.slashdeploy-internal.aws_secret_access_key" "${SLASHDEPLOY_AWS_SECRET_ACCESS_KEY}"

	echo "export AWS_PROFILE=slashdeploy-internal" > ~/.shell.d/aws_profile.sh
}
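
Once provisioned, one way to sanity-check the result from inside the VM (these are standard AWS CLI commands, not part of the workstation tooling):

# Inside the VM:
aws configure list --profile slashdeploy-internal   # shows the stored (redacted) keys
aws sts get-caller-identity                         # confirms the credentials work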

This approach has been working very well for Slashdeploy, and also for the engineering department at my full-time job. I encourage you to consider this approach for your team or organization.

The source code for the Slashdeploy workstation is public, so anyone can see how it works. Fork it and try it out at your company. Happy shipping!