Separation of Duties and DevOps

By Bob Fischer & Tim Reaves
May 4, 2018 2:18:08 PM


It is common for companies to stall or slow in their quest to improve speed, quality, reliability, or security through a DevOps approach.  It is not technology that slows them down, but their culture.  They have practices and beliefs that make some aspects of DevOps seem impossible:

“Sure, developers at small start-up XYZ can put their own code into production, but we're in a regulated industry, and it would never work.”

“We need separation of duties. What you’re suggesting is impossible.”

“Audit or compliance would never allow a developer to test code.”

In this blog post, we focus on separation of duties. Separation of duties is an important concept, and to some it might seem incompatible with a DevOps approach, but it isn't. In fact, in many cases separation of duties in the context of DevOps offers more assurance of quality, security, and auditability than traditional approaches.

The intent of separation of duties is to mitigate fraud and errors. Let’s say I write some code that makes it seem like a vendor has submitted an invoice and been paid, but instead, it deposits that money into an account I control. Separation of duties is intended to limit or prevent me from doing that, and this is how:
  • Someone other than me would test my code.
  • Someone other than me would review my code.
  • Several others would manually approve deploying my proposed change to production.
  • Someone other than me would deploy my code.
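The controls above can also be expressed as an automatable policy check. Here is a minimal sketch in Python (the `Change` record and all names are hypothetical, purely for illustration), verifying that review, testing, and approval each involve people other than the author:

```python
from dataclasses import dataclass, field

@dataclass
class Change:
    """A proposed production change (hypothetical record, for illustration)."""
    author: str
    reviewers: set = field(default_factory=set)
    testers: set = field(default_factory=set)
    approvers: set = field(default_factory=set)

def satisfies_separation_of_duties(change: Change, min_approvers: int = 2) -> bool:
    """True only when review, test, and approval all involve someone other than the author."""
    def independent(people: set) -> bool:
        return bool(people) and change.author not in people
    return (
        independent(change.reviewers)
        and independent(change.testers)
        and len(change.approvers - {change.author}) >= min_approvers
    )
```

A check like this is exactly what branch-protection and deployment-approval features in modern tooling enforce for you.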

The expectation is that my attempt at fraud would be detected, blocked, and I'd be arrested.

So how might this work in an organization implementing a DevOps approach in which the goal is few, or no, manual steps along the path to production?

Let's walk through an end-to-end example to see how the progression of code, from development to deployment, aligns with the goals of separation of duties in an organization using a DevOps approach.

  • The developer works on the new application feature in a feature branch of the source. When they are ready, they issue a pull request, which requires one or more people to review the code and approve the merge of the new code back into the mainline. This includes the feature itself and its automated tests.
  • Once the code is checked in, a build is run that creates the executable modules that will be deployed to production. This is done via scripts and configuration directives that have also been peer reviewed, as they are under source control as well. The build system puts the executable into an artifact repository. This delivery of the program is not done by a person, but by the build system. Individuals do not have permission to upload an artifact manually, helping to prevent tampering.
  • A tester — either a person other than the developer or a series of automated tests — confirms that the new code works as intended and triggers a deployment to the next environment.
  • Deployment from environment to environment can be triggered via a manual approval process or automatically based upon the prior step’s outcome. Either way, the deployment is also done by a script that has been peer-reviewed. At no time does a person manually move any artifacts or manually deploy code or configuration.
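One way to picture the flow above: the pipeline is just an ordered list of scripted stages, each of which must succeed before the next runs, and each of which is recorded. A minimal sketch in Python, with all stage names and stage functions hypothetical stand-ins for peer-reviewed scripts:

```python
# Hypothetical pipeline runner: every stage is a script, never a manual step.
# Each stage's outcome is recorded, forming the audit trail discussed below.

def run_pipeline(stages, artifact):
    """Run scripted stages in order; stop at the first failure and return the trail."""
    trail = []  # audit trail: (stage name, outcome)
    for name, step in stages:
        ok = step(artifact)
        trail.append((name, "passed" if ok else "failed"))
        if not ok:
            break
    return trail

# Illustrative stages -- in practice each would invoke a peer-reviewed script.
stages = [
    ("build",           lambda a: True),  # build system produces the artifact
    ("automated-tests", lambda a: True),  # tests confirm the code works as intended
    ("deploy-to-stage", lambda a: True),  # scripted deployment, no manual copy
    ("deploy-to-prod",  lambda a: True),  # triggered by approval or prior outcome
]
```

Real pipeline tools express the same idea in configuration rather than code, but the principle is identical: stages, gates, and a recorded outcome for each step.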

All of the steps along the pipeline leave a comprehensive audit trail of who did what, along with the outcome. This is far more reliable, secure, and auditable than any manual process in which a system administrator moves and installs things by hand. At the end of the pipeline, the code is in production.

The scenario presented here is just one example of how integrity, security, and quality can be maintained in a value delivery pipeline. No matter what the technology stack, the same rules apply:

  1. Nothing is done manually with the exception of approvals (and even they are automated from a workflow perspective). Scripts and configuration are responsible for all parts of the process. An "approval" is a manual step that also invokes a script.
  2. All scripts and configuration are peer-reviewed and go through a testing process before they are used. If there are manual testing steps, they are done by someone other than the person who wrote the code.

So, how can a developer deploy their own code to production? The answer is that they don't. A script does, and the script's integrity has been reviewed by an independent person or group of people.
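This point can be made concrete in a few lines. In the hypothetical sketch below, the "approval" is a human decision, but the work is done by a script; the approver never runs deployment commands by hand, and the author cannot approve their own change (function and names are illustrative, not a real tool's API):

```python
import subprocess

def approve_and_deploy(approver: str, change_author: str, deploy_script: list):
    """An 'approval' is a manual decision that merely triggers a scripted deployment."""
    if approver == change_author:
        raise PermissionError("approver must be independent of the author")
    # The approver never moves artifacts by hand; the peer-reviewed script does the work.
    return subprocess.run(deploy_script, capture_output=True, text=True)
```

The script itself lives in source control, so its integrity is protected by the same review process as the application code.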

Getting the OK to work this way can be a challenge if it goes against established beliefs and practices. Often, controls come to be enforced dogmatically in terms of their historical implementation rather than their intent. But to become a high-performing organization that rapidly delivers value, it is essential to move to a new way of working.

 

Want to see more of DevOps at work? Check out our blog post, "What I Learned from Disabling Thousands of Production Desktops in a Retail Call Center"! 
