By Paul Bradley, Vice President of Technology
Implementing DevOps is a journey. For most large organizations, it’s a multi-year process reflecting failures, misfires and successes. DevOps is especially challenging in federal agencies where regulatory compliance issues and acquisition restrictions abound. However, experience shows that using organizational change management methods, rather than technology-focused initiatives, to drive DevOps implementation yields the most successful outcomes.
Obvious DevOps success factors include creating awareness, providing education, delivering quick wins, establishing management support and assuring constant collaboration among key stakeholders. Most of these success factors have been written about extensively elsewhere. This article will discuss the critical success factors AbleVets has uncovered during our DevOps journey.
Less is more
It’s vital to create a DevOps playbook with preferred (and hardened) architectures, stacks, design patterns and preconfigured machine images to provide ample resources to developers while being mindful of security and avoiding unnecessary technology sprawl. To the maximum extent possible, product teams must be incentivized to leverage these preferred and hardened resources. It goes without saying that DevOps and cloud technologies will require new tools to keep pace, but limit additions through appropriate business justification, assessment and authorization processes. Harden any new tools quickly and promote them; retire old solutions with haste.
Promote resources with compelling use cases, financial justifications and self-service usage guides. For example, consistent with AWS best practice, strive to establish standardized, hardened machine images for cloud infrastructure and microservice containers with consistent configuration; security patching; and agents for logging, security and performance monitoring.
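One way to make the "preferred and hardened" policy enforceable rather than aspirational is a periodic inventory audit against an approved-image allowlist. The sketch below illustrates the idea; the image IDs, instance records and field names are hypothetical, and in practice the inventory would come from a cloud provider's API.

```python
# Sketch: flag workloads that were not launched from an approved,
# hardened machine image. All identifiers here are illustrative.

APPROVED_IMAGES = {
    "ami-hardened-base-v3",  # hypothetical hardened Linux base image
    "ami-hardened-java-v3",  # hypothetical hardened Java runtime image
}

def audit_instances(instances):
    """Return the instances not built from an approved image.

    `instances` is a list of dicts with 'id' and 'image_id' keys, as
    might be assembled from a cloud inventory query.
    """
    return [i for i in instances if i["image_id"] not in APPROVED_IMAGES]

inventory = [
    {"id": "web-01", "image_id": "ami-hardened-base-v3"},
    {"id": "legacy-07", "image_id": "ami-unmanaged-2019"},
]

for inst in audit_instances(inventory):
    print(f"{inst['id']} uses unapproved image {inst['image_id']}")
```

Run on a schedule, a report like this gives teams the data needed to retire old solutions with haste.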
Employing standardized configurations will reduce service fees, lower operating costs, improve security and yield higher reliability. More important, user experience will improve because design teams can focus their efforts on creating value rather than on recreating and hardening previously supplied resources.
Move security to the left
Just as Agile and test-driven development moved functional and nonfunctional testing from a post-development activity to an integral part of each development iteration, you must do the same with security. Security experts will quickly opine that their expert guidance and oversight cannot be automated. There is no question that advice, guidance and oversight will always be required. However, all common and repetitive tasks must be viewed as candidates for automation.
Engage and collaborate with security experts to explore the tools and techniques needed to automate and integrate security testing, including vulnerability analysis and penetration testing, within the continuous delivery process. Doing so will not only foster security-by-design techniques but also encourage outcome-focused collaboration among development, security and IT operations teams. In addition, it will elevate behaviors from a compliance focus to a holistic model that includes vulnerability and threat analysis as well as risk management.
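A common first step in automating security within continuous delivery is a pipeline gate that blocks a build when scan findings meet a severity threshold. The sketch below shows that pattern in miniature; the finding records and severity scheme are illustrative stand-ins for a real scanner's report.

```python
# Sketch: a CI/CD security gate. Fail the build when any finding is at
# or above a configured severity. Finding data is illustrative; in
# practice it would be parsed from a scanner's JSON output.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings, fail_at="high"):
    """Return (passed, blocking): blocking lists findings that meet or
    exceed the fail_at severity."""
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings
                if SEVERITY_RANK[f["severity"]] >= threshold]
    return (len(blocking) == 0, blocking)

scan_report = [
    {"id": "CVE-2021-0001", "severity": "medium"},
    {"id": "CVE-2021-0002", "severity": "critical"},
]

passed, blocking = gate(scan_report)
if not passed:
    print(f"Build blocked: {len(blocking)} finding(s) at or above threshold")
```

The threshold itself becomes a policy decision that security experts own, while the enforcement is fully automated, which is exactly the division of labor described above.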
Notably, across the industry, fewer than half of organizations practicing DevOps have sufficiently integrated security teams into the delivery process. As a result, technical (security) debt is expanding more rapidly than some organizations realize, and hackers are increasingly targeting DevOps environments. Recent attacks involving misconfigured AWS security settings, exposed S3 buckets, leaked API keys and exploitation of CI/CD tools serve as compelling examples.
Integrate monitoring and analysis
DevOps is about speed – delivering value and reducing risk quickly. Value includes functionality, quality, resiliency, security and user experience.
Methods and tools must include those offering real-time visibility into value delivery and risk. Application Performance Monitoring is a must, but more is required. Product and service managers must have real-time, automated access to security, reliability, resiliency, compliance and incident management performance data. Such data are necessary for continuous improvement because they allow teams to understand what happened (descriptive analytics), why it happened (diagnostic analytics), what is likely to happen next (predictive analytics) and what should be done about it (prescriptive analytics).
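The four analytics levels can be illustrated concretely on even a trivial data series. The sketch below applies them to a hypothetical daily error-count series; the data, the naive forecast and the alert threshold are all illustrative assumptions, not prescriptions.

```python
# Sketch: the four analytics levels applied to a daily error-count
# series. Data and thresholds are illustrative.

errors = [12, 14, 13, 15, 41, 44]  # hypothetical daily error counts

# Descriptive: what happened -- summarize the series
mean = sum(errors) / len(errors)

# Diagnostic: why -- locate the day the series jumped most
jump_day = max(range(1, len(errors)),
               key=lambda i: errors[i] - errors[i - 1])

# Predictive: naive next-day estimate from the last two observations
predicted_next = errors[-1] + (errors[-1] - errors[-2])

# Prescriptive: recommend an action when the forecast breaches a threshold
action = "open incident" if predicted_next > 2 * mean else "monitor"
```

Production systems would of course use far richer models, but the progression from "what happened" to "what should be done" is the same.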
Analytics data must be a core capability of your Continuous Delivery Platform, and your DevOps business architecture must support this level of ubiquitous visibility.
Measure DevOps maturity
While this recommendation could have been lumped into the previous section, its primary objective deserves separate treatment: provide an evidence-based method to assess the adoption and maturation of DevOps. Service-level agreements (SLAs) established for products and services can be the top-level measure of DevOps maturity. SLA key performance indicators can include mean-time-to-detect, mean-time-to-failure, mean-time-between-failures and mean-time-to-resolve. In addition, survey and system data can be used to measure DevOps maturity.
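The KPIs named above reduce to simple arithmetic over incident records, which is what makes them good candidates for automated collection. The sketch below computes three of them from hypothetical incident timestamps; the record layout and epoch-hour units are illustrative.

```python
# Sketch: computing SLA key performance indicators from incident
# records. Each record is (failure_time, detect_time, resolve_time)
# in illustrative epoch hours.

incidents = [
    (100, 101, 104),
    (150, 150, 152),
    (220, 223, 230),
]

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

mttd = mean(d - f for f, d, _ in incidents)  # mean time to detect
mttr = mean(r - f for f, _, r in incidents)  # mean time to resolve

failures = [f for f, _, _ in incidents]
mtbf = mean(b - a for a, b in zip(failures, failures[1:]))  # mean time between failures

print(f"MTTD={mttd:.1f}h MTTR={mttr:.1f}h MTBF={mtbf:.1f}h")
```

Feeding such calculations directly from incident-management system data, rather than from manual reports, is what makes the maturity measure evidence-based.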
Survey data provide qualitative and quantitative information on perceptions, attitudes, organizational impediments and human factors that enable or obstruct DevOps maturation. When designed properly, survey data can be a leading indicator of downstream performance issues.
System data offer a quantitative view into process maturity by assessing tool chains, test automation levels, code deployment methods, delivery rates, defect rates, user experience and overall quality. It’s vital to determine the most relevant metrics in light of the organization’s objectives and assure that your Continuous Delivery Platform can collect these metrics. Remember, what you measure is what you get. Choose wisely.
Continuously incentivize DevOps practitioners – and all involved in solution delivery – to increase automation. Automate system configurations, provisioning, security policy, incident responses, continuous delivery, functional and nonfunctional testing and security testing. Anywhere you can remove human error and human inefficiency from the process, do so. Automation does not displace human labor as much as it redirects it toward complementary activities that increase productivity.
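Configuration automation of the kind listed above is typically built on an idempotent "ensure desired state" step: compute only the changes needed, apply them, and a second run finds nothing to do. The sketch below shows that core pattern; the configuration keys are hypothetical.

```python
# Sketch: an idempotent desired-state step, the core pattern behind
# configuration-automation tools. Config keys are illustrative.

def plan_changes(current, desired):
    """Return only the settings that must change to reach desired state.
    Applying the plan, then planning again, yields an empty plan."""
    return {k: v for k, v in desired.items() if current.get(k) != v}

current = {"ssh_root_login": "yes", "log_forwarding": "on"}
desired = {"ssh_root_login": "no", "log_forwarding": "on", "auto_patch": "on"}

changes = plan_changes(current, desired)
current.update(changes)                       # apply the plan
assert plan_changes(current, desired) == {}   # second run: nothing to do
```

Because the step is idempotent, it can run continuously without human intervention, which is precisely where automation removes human error and inefficiency from the process.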
DevOps transformation can take many paths. Organizations can choose to be systematic about how they evolve, or not. Those that achieve the highest levels of DevOps evolution don’t do so by accident.