DevOps and Separation of Duties

Despite the rapid growth of DevOps practices across industries, there still seems to be a fair amount of trepidation, particularly among security practitioners and auditors. One of the first concerns to pop up is a blurted-out “You can’t do DevOps here! It violates separation of duties!” Interestingly, this assertion is usually incorrect and stems from a misunderstanding of DevOps, automation, and the continuous integration/continuous deployment (CI/CD) pipeline.

What is “Separation of Duties”?

First, it may be helpful to understand what “separation of duties” (aka SoD or “segregation of duties”) is and what purpose it serves. You can read various write-ups defining SoD from Wikipedia, SANS, and the AICPA. SoD is an internal control intended to reduce the incidence of errors and fraud in a system. At its base, the belief is that having two or more people involved in creating and reviewing changes (whether to code or configs) is a net positive. I like the AICPA’s definition:

“The principle of SOD is based on shared responsibilities of a key process that disperses the critical functions of that process to more than one person or department. Without this separation in key processes, fraud and error risks are far less manageable.”

I don’t think anybody would argue that having an extra set of eyes on system changes is helpful and generally a Good Thing (TM). However, there is a glaring hole in SoD: it doesn’t (and cannot) account for collusion. So, in smaller team environments, it’s still possible for errors and fraud to survive to production despite the internal control being in place. This is readily acknowledged in the literature.

It’s important to understand the core value proposition represented by this internal control, because it will enable you to explain to auditors how you are still achieving these objectives in DevOps and CI/CD.

3 Myths of SoD vs DevOps

Myth 1: DevOps + CI/CD Means Pushing Straight to Production

First and foremost, if you drill into concerns about meeting SoD requirements in DevOps, you’ll often find that security and audit peeps are misinformed. There is a misconception that having a CI/CD pipeline in place means developers are pushing code straight from their IDE to production with no oversight, testing, etc. Ironically, nothing could be further from the truth. In fact, it’s still relatively uncommon today for the CI/CD pipeline to be fully automated end-to-end. Moreover, in most organizations it’s exceedingly rare for a single person to manage everything from dev to test to ops and deployment. All but the smallest startups will typically have at least one or two devs to write code and an ops person to handle environment management (and deployment).

Myth 2: SoD Is Effective At Stopping Fraud and Errors

One thing we know beyond a reasonable doubt is that errors in systems continue to occur, exist, and persist, no matter how much SoD is in place (or testing, or oversight, or QA time, etc.). As for fraud, I personally believe DevOps and CI/CD make it easier to detect, not to mention they greatly reduce the cost of rolling back changes (“fail fast, recover fast, learn faster”). As such, while having an extra set of eyes absolutely *is beneficial* for reducing errors, there will always be a point of diminishing returns. Moreover, humans cannot achieve the velocity necessary to keep up with the pace of business in this modern, cloudy, sometimes serverless world.

Myth 3: SoD and DevOps Are Incompatible

Few things are more galling than being told out of hand that, so sorry, but what you’re proposing to do (DevOps!) simply cannot be allowed because there’s no way to be compliant with our internal control requirements. UGH! Not only is this a gross misrepresentation of how to use internal controls within an organization, but it reflects a resistance to change that effectively quashes innovation, creates shadow IT, and drives people to work as far away from oversight as possible. If you want to see your business crash and burn through excessive fragmentation, then play the “jack-booted thug” role where you tell everyone “no” to anything that doesn’t conform to an antiquated, minuscule world view. There are absolutely, positively straightforward ways to meet SoD compliance requirements within DevOps practices. In fact, CI/CD pipelines provide ample opportunity to exceed legacy practices and actually reduce errors and fraud.

SoD Compliance in DevOps+CI/CD

Alright, at this point you’re probably either thinking “Yes! I knew it!” or “Ok, wise guy, so how exactly can I make this work?” I’m glad you asked! It’s actually quite straightforward, though it does require some engineering effort.

For starters, the “right way” to tackle this issue (idealized a bit) looks something like this in a CI/CD pipeline:

[Figure: sample CI/CD pipeline]

Within this process today you’ll still see a lot of manual intervention. However, going forward we *need* to see heavy use of automation throughout the entire CI/CD pipeline. The IDE should have integrated lint-like checks for code quality *and* code security. The repository should be scanned on a regular, recurring basis by both a static application security testing (SAST) tool *and* a software composition analysis (SCA) tool (SCA checks libraries and functions/methods for versions with known vulnerabilities). Later in the pipeline we also add dynamic application security testing (DAST) in addition to standard code quality testing.
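To make that a bit more concrete, here’s a minimal sketch (in Python, purely illustrative) of the kind of gate a build server might run: chain the quality and security stages together and refuse to promote the artifact if any stage fails. The stage commands are hypothetical placeholders, so substitute whatever linter, SAST, SCA, and DAST tooling your pipeline actually uses.

```python
#!/usr/bin/env python3
"""Minimal sketch of a CI gate chaining quality and security checks.

The commands below are placeholders (assumptions), not real tools; each
is expected to exit non-zero when it finds problems above your threshold.
"""
import subprocess
import sys

PIPELINE_STAGES = [
    ("lint", ["run-linter", "src/"]),                 # code quality
    ("sast", ["run-sast-scan", "src/"]),              # static analysis
    ("sca", ["run-sca-scan", "requirements.txt"]),    # known-vulnerable deps
    ("build", ["run-build"]),
    ("dast", ["run-dast-scan", "https://staging.example.com"]),  # dynamic testing
]


def run_stage(name, cmd):
    """Run one stage; return True only if the command exits zero."""
    print(f"--- stage: {name} ---")
    try:
        return subprocess.run(cmd).returncode == 0
    except FileNotFoundError:
        print(f"tool for stage '{name}' is not installed", file=sys.stderr)
        return False


def main():
    for name, cmd in PIPELINE_STAGES:
        if not run_stage(name, cmd):
            # The build server, not a human with production access,
            # enforces the gate -- and every result is logged for audit.
            print(f"stage '{name}' failed; blocking promotion", file=sys.stderr)
            sys.exit(1)
    print("all gates passed; artifact is eligible for deployment")


if __name__ == "__main__":
    main()
```

The point isn’t the script itself; it’s that the gate is enforced by the pipeline and produces a record of every check, which is exactly the kind of evidence auditors want to see.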

User acceptance testing (UAT) can also be heavily automated, especially when leveraging a test-driven development (TDD) methodology. Infrastructure configurations can likewise be automated and checked using tools like Terraform and kitchen-terraform. Additionally, images and containers should be pre-hardened, with appropriate security tools integrated into the images or the hosting environment (such as sidecars for containers).
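As an illustration of the infrastructure side, here’s a small sketch of an automated policy check written as a plain pytest test (kitchen-terraform offers a richer, Ruby-based way to do this; the Python version here just shows the shape of the idea). It assumes you’ve exported a plan with `terraform plan -out=tfplan` followed by `terraform show -json tfplan > plan.json`, and the specific rule (no security group open to 0.0.0.0/0) is only an example.

```python
"""Sketch: fail the pipeline if a Terraform plan opens an AWS security
group to the whole internet.

Assumes plan.json was produced with:
    terraform plan -out=tfplan && terraform show -json tfplan > plan.json
"""
import json


def load_resource_changes(path="plan.json"):
    """Return the resource_changes list from a Terraform JSON plan."""
    with open(path) as f:
        return json.load(f).get("resource_changes", [])


def test_no_security_group_open_to_world():
    """No aws_security_group ingress rule may allow 0.0.0.0/0."""
    for rc in load_resource_changes():
        if rc.get("type") != "aws_security_group":
            continue
        after = (rc.get("change") or {}).get("after") or {}
        for rule in after.get("ingress") or []:
            assert "0.0.0.0/0" not in (rule.get("cidr_blocks") or []), (
                f"{rc.get('address')} allows ingress from anywhere"
            )
```

Run it alongside the rest of your test suite and a bad configuration never gets past the pipeline, let alone to production.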

All of these tools and tests will generate output that must be fed into your issue tracker (e.g., JIRA, Pivotal Tracker) as natively as possible (for example, using ThreadFix to import SAST and DAST data). Automating dashboards and reporting is important for a few reasons. First, plumbing this information into a “work as usual” workflow for dev and ops helps ensure issues are addressed in a timely fashion. Second, these dashboards provide an efficient, effective way to keep management informed. Third, and most relevant to this post, capturing all of this information in a friendly, accessible format will help put auditors’ minds at ease about meeting internal controls such as SoD.
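For teams wiring this up by hand rather than through something like ThreadFix, the idea looks roughly like the sketch below: take a finding emitted by a scanner and file it through Jira’s REST issue-creation endpoint. The base URL, project key, credentials, and the shape of the `finding` dict are all illustrative assumptions, and a real integration should also de-duplicate findings so reruns don’t flood the tracker.

```python
"""Sketch of pushing one scanner finding into an issue tracker (Jira).

The instance URL, credentials, and project key are assumptions; swap in
your own, or better yet let a purpose-built importer do this for you.
"""
import requests

JIRA_URL = "https://jira.example.com"      # assumed instance
AUTH = ("ci-bot", "api-token-goes-here")   # assumed service account


def file_finding(finding: dict) -> str:
    """Create a Jira issue for a SAST/DAST finding and return its key."""
    payload = {
        "fields": {
            "project": {"key": "SEC"},          # assumed project key
            "issuetype": {"name": "Bug"},
            "summary": f"[{finding['tool']}] {finding['title']}",
            "description": (
                f"Severity: {finding['severity']}\n"
                f"Location: {finding['location']}\n\n"
                f"{finding['detail']}"
            ),
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]


if __name__ == "__main__":
    key = file_finding({
        "tool": "SAST",
        "title": "SQL injection in order lookup",
        "severity": "High",
        "location": "orders/views.py:42",
        "detail": "User input concatenated directly into a SQL query.",
    })
    print(f"filed {key}")
```

Once findings land in the same tracker as everything else, they get triaged and burned down like any other work item, and the tracker itself becomes the audit trail.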


In closing, I want to highlight that conflicts like the one described here between DevOps initiatives and security or audit can be reasonably addressed, but only if all parties are willing to have open, respectful, and mindful conversations. Our Lean Security approach puts a strong emphasis on creating these conditions, which helps elevate the level of professionalism across organizations while improving efficiency, effectiveness, and security. We’ll be writing a lot more about Lean Security in the coming weeks. We’re pretty excited about applying our insights on improving business management and organizational culture, drawing on lessons learned from Lean, DevOps, TDD, and more. Lean Security: It’s not just what we do; it’s who we are.


About the Author:

Ben Tomhave is a security architect with New Context, a lean security firm. He holds a Master of Science in Engineering Management from The George Washington University and is a CISSP. He has previously held positions with Gartner, AOL, Wells Fargo, ICSA Labs, LockPath, and Ernst & Young. He is former co-chair of the American Bar Association Information Security Committee, a senior member of ISSA, former board member at large for the Society of Information Risk Analysts, and former board member for the OWASP NoVA chapter. He is a published author and an experienced public speaker, including speaking engagements with the RSA Conference, MISTI, ISSA, Secure360, RVAsec and RMISC, as well as Gartner events.