The biggest theme to come out of Velocity was automation. It seems to be the essential ingredient for achieving and managing any substantial scale. Automation was at the center of discussions around everything from security testing to rapid environment building to monitoring and analytics. Any highly repeatable process should, and indeed must, be automated in order to scale. Handling scale is not the only reason to implement automation, though: it also delivers reliably repeatable results, which are essential in proper testing environments. Automation replaces the menial or repetitive tasks that humans are prone to get wrong, and it frees up engineers' brain cycles for more interesting problem-solving work.

Repeatable Environments

One of my favorite sessions of the conference was James Turnbull's (Docker) interactive tutorial, an Introduction to Docker, in which everyone was provided their own AWS instance prepared with Docker to experiment on. We were walked through the various processes involved in building and modifying Docker images, storing and retrieving them from Docker Hub, and understanding how Docker can be used to develop repeatable environments that can be easily shared. Virtualization has long been the de facto method for producing portable environments for both development and production, but it suffers from the significant overhead required to provide a full operating stack for each instance. Docker aims to reduce that overhead by leveraging Linux container functionality (not a new technology) to provide lightweight instances that share the underlying OS. This makes it much quicker to create entire infrastructures with far less overhead. Each Docker instance is designed to provide a single service that exists somewhat like a process but has the advantage of its own isolated OS environment in which to operate securely. I won't delve into the technology of Docker or Linux containers, as there's plenty of information available elsewhere on the Internet, but needless to say, the possibilities and promise of the technology are exciting.
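
As a rough illustration (not taken from the tutorial itself; the image and account names here are made up), a short Dockerfile plus a few commands is all it takes to build an image, store it on Docker Hub, and run it anywhere Docker is installed:

    # Dockerfile (sketch): a single-service image built on a shared base OS
    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y nginx
    EXPOSE 80
    CMD ["nginx", "-g", "daemon off;"]

    # build the image, push it to Docker Hub, then run it as an isolated container
    docker build -t exampleuser/webserver .
    docker push exampleuser/webserver
    docker run -d -p 80:80 exampleuser/webserver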

To take the Docker technology further, software like Vagrant and CoreOS provides mechanisms for building entire Docker environments in repeatable ways. The former, Vagrant (see https://docs.vagrantup.com/v2/docker/index.html), can build extensive development environments on a developer's workstation that match production, while CoreOS is aimed at providing a foundation for production Docker infrastructures. Through additional automation these infrastructures can be built, torn down and rebuilt with ease.
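
As a quick sketch of the Vagrant side (the image name and port mapping are just examples), the Docker provider lets a Vagrantfile describe a container-backed environment that any developer can bring up with vagrant up:

    # Vagrantfile (sketch): use the Docker provider instead of a full VM
    Vagrant.configure("2") do |config|
      config.vm.provider "docker" do |d|
        d.image = "nginx"          # pull an existing image from Docker Hub
        d.ports = ["8080:80"]      # map host port 8080 to port 80 in the container
      end
    end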

Security

James Wickett (Mentor Graphics) & Gareth Rushgrove (Government Digital Service) presented a number of techniques for using automated testing infrastructures to implement repeatable security tests as part of regular code testing in Battle-tested Code Without the Battle. The tutorial featured the use of GitHub and Travis CI to illustrate how specific tests for vulnerabilities can be run against code each time it is committed. By using this level of automation in a repeatable way, known vulnerabilities can be detected and fixed before they're released into production. A number of suites exist that can be plugged into testing infrastructures and ship with vulnerability definitions that can be updated regularly to stay abreast of new vulnerabilities. By coupling security testing with functionality testing, application code reliability can be greatly improved and the reputation of the business can grow stronger.
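
As a minimal sketch of the approach (not the presenters' exact setup; the scanning tool here is just one example for a Ruby project), a .travis.yml can run a dependency vulnerability check alongside the normal test suite on every push:

    # .travis.yml (sketch): fail the build if known-vulnerable gems are found
    language: ruby
    before_script:
      - gem install bundler-audit    # scanner backed by a regularly updated advisory database
    script:
      - bundle-audit update          # pull the latest vulnerability definitions
      - bundle-audit check           # scan Gemfile.lock against known advisories
      - bundle exec rake test        # regular functionality tests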

In another vein, Dale Hamel (Shopify) demonstrated in his talk Source of Truth how his company uses automated inventory tooling alongside its automated configuration management systems to maintain a full understanding of its infrastructure. By implementing these tools together, Shopify has awareness of the existing systems throughout its data centers and can provision new systems as needed in predictable, repeatable ways. Having this insight into your infrastructure also improves security, as no systems become forgotten or fall outside of proper management.

Orchestration

It was a privilege to be able to listen to Mark Burgess, creator of CFEngine, discuss orchestration in his session, Self-Repairing Deployment Pipelines, in which he explained what orchestration means, how it should be used to ensure configurations through promises, and how to approach the development of these configurations with simplicity in mind. He explained and demonstrated how CFEngine can be used (much like Puppet, Chef, or other configuration management tools) to define expected states of the infrastructure that are then applied in a way that fulfills the promises defined. He illustrated how, in an orchestrated system, every component knows the script and its individual part, while a conductor manages the harmony of all those parts to make a functional whole. When we consider configuration management systems in this way, it's easy to see how each piece operates together to form the whole system and how each part must perform as expected for the entire system, in turn, to perform as expected.
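
To give a flavour of what a promise looks like (a simplified sketch, not an example from the session), a CFEngine policy declares a desired state and the agent repeatedly converges the system toward it:

    # promises.cf (sketch): promise that sshd is running, and repair it if not
    body common control
    {
      bundlesequence => { "ssh_running" };
    }

    bundle agent ssh_running
    {
      processes:
        "sshd"
          restart_class => "restart_sshd";   # class is set only if the process is missing

      commands:
        restart_sshd::
          "/etc/init.d/ssh start";           # runs only when the promise was not kept
    }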

In all, automation was a very central theme throughout the conference. It is what drives the ability to innovate: known processes are automated and made repeatable, and engineers are freed to move on to solving unknown problems (which in turn can be automated once solved). Systems can grow in complexity with confidence in their reliability when automated testing and integration are performed regularly, with feedback loops that alert when an expected result is not met. Systems can be built to scale horizontally by using automation to repeat configuration and provisioning processes quickly and easily. Without automation it is impossible to scale, and certainly not to scale reliably.