How did we get to DevOps?

Jan Gabriel

Apr 27, 2023

# The evolution toward DevOps

The first computer program dates back to 1843: an algorithm intended to calculate a sequence of Bernoulli numbers. It wasn't until 1949 that both computer programs and their data could be stored in computer memory. In other words, it took roughly a century for hardware technology to mature enough to effectively support computer programming. Computer hardware without software is just a collection of electronics, whereas software without hardware is text or binary data that cannot be interpreted. Each depends on the other to provide a usable product, and this interdependence is important to keep in mind when creating and running computer programs.

When a software developer writes a new program, it needs to run on a piece of hardware. At a small scale, the developer can also be the person maintaining that hardware, e.g. a hobbyist programmer writing code at home.

When things start to scale, say for a small business, the software developer can still keep the few on-premise servers updated and install new software versions whenever there is an update. Once the business grows, however, a single person can no longer keep the software up to date. Not only does the deployment process become increasingly complex, but during any kind of outage the developer can fall into the trap of debugging or implementing hotfixes directly on the production system. This makes deploying new changes and maintaining an operational computer system difficult, especially in a safe and repeatable way.

In larger organizations, IT operations teams were introduced to take over the deployment and maintenance aspects of the system. This gave software developers breathing room to focus entirely on writing quality software. Developers would hand new versions of the software over to IT operations, who would then deploy and maintain it on the organization's infrastructure.

The separation of development and IT operations was a good answer to the scalability and maintainability challenges of an organization's IT infrastructure. However, as these systems became more complex, the separation showed its limitations in providing both high throughput (new software) and stability (mean time to recover). Proposals to address this limitation by merging development and operations methodologies started appearing in the late 1980s and 1990s, and by 2009 the first DevOps conference was held.

# The DevOps methodology

The DevOps methodology aims to optimize the systems development life cycle by integrating software development (Dev) and IT operations (Ops) work.

Many enterprises that still keep IT operations and software development separate release new software only quarterly, yearly, or even once every two years, because of the significant manual effort and coordination such a release requires. Enterprises employing DevOps principles can deploy new features multiple times a day.

# The Amazon DevOps story

Let's use Amazon's continuous improvement and software automation story as a case study. Amazon realized that it took approximately 14 days to go from idea to production (already fast compared to other companies), of which only about 1 to 2 days were spent turning the idea into peer-reviewed code. They invested in improving their Continuous Integration / Continuous Delivery (CI/CD) pipeline and achieved a staggering 90% reduction in the time from check-in to production.

Adoption of this automated pipeline within Amazon grew as people saw the successes of the teams that used it first. Adoption across diverse teams also led to individual teams finding their own solutions to problems shared by other teams. Amazon learned that building knowledge messages into their deployment tools helped share best practices with the users of those tools, which in turn drove higher adoption of best practices.

Another big lesson learned with the increased adoption was that automating a release process to increase release frequency wasn't the only important performance metric. Stability mattered too, and mitigating the risk of introducing bugs into production required sufficient testing at all stages of the deployment. Their CI/CD pipeline incorporated testing through the following stages (a minimal sketch of this staged gating follows the list):

  • Unit testing on the build machine, including style checks, code coverage, etc.
  • Integration testing which includes automated testing with external interfaces e.g. with browsers, failure injections, security checks, etc.
  • Pre-production testing which tests the artifact in the operational environment. These are technical tests that verify that the service can connect to all production services, but at this point, the artifacts are not yet exposed to production inbound traffic.
  • Validation in production means that the new software is released to only a few customers and based on validation checks may then be rolled out to all customers or rolled back.

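To make these stages concrete, here is a minimal Python sketch of such staged gating. The stage functions are hypothetical placeholders, not Amazon's actual tooling; the point is only that an artifact must pass each gate before advancing to the next.

```python
# Minimal sketch of a staged CI/CD gate. The stage names mirror the four
# stages described above; the check functions are hypothetical placeholders
# rather than a real pipeline API.

def run_unit_tests(artifact: str) -> bool:
    """Build-machine checks: unit tests, style checks, code coverage."""
    return True  # placeholder: invoke your test runner here

def run_integration_tests(artifact: str) -> bool:
    """Automated tests against external interfaces, failure injection, security."""
    return True  # placeholder

def run_preproduction_tests(artifact: str) -> bool:
    """Verify the artifact can reach production services, without inbound traffic."""
    return True  # placeholder

def validate_in_production(artifact: str) -> bool:
    """Release to a few customers and run validation checks."""
    return True  # placeholder

STAGES = [
    ("unit", run_unit_tests),
    ("integration", run_integration_tests),
    ("pre-production", run_preproduction_tests),
    ("production validation", validate_in_production),
]

def promote(artifact: str) -> bool:
    """Advance the artifact through each stage; stop at the first failure."""
    for name, check in STAGES:
        if not check(artifact):
            print(f"{name} stage failed for {artifact}; halting release")
            return False
        print(f"{name} stage passed for {artifact}")
    return True

if __name__ == "__main__":
    promote("my-service-1.2.3")
```
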
These testing processes mitigate risk, but it is also important to have tight control over the actual release process. This is achieved through extensive monitoring of various metrics combined with control options, which together form a risk barrier between development and production. The controlled release is where the development cycle transitions into the operational cycle.
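
As a rough illustration of such a risk barrier, the sketch below gates a canary release on a monitored metric: the canary's error rate is compared against the baseline fleet, and the release is either promoted or rolled back. The metric, threshold, and `fetch_error_rate` helper are illustrative assumptions, not a specific vendor's API.

```python
# Sketch of a metric-gated release control (a "risk barrier"): the canary is
# promoted only if its error rate stays close to the baseline fleet's.
# fetch_error_rate() is a hypothetical stand-in for a monitoring query.

def fetch_error_rate(deployment: str) -> float:
    """Return the fraction of failed requests for a deployment (placeholder)."""
    return 0.01  # in practice, query your metrics backend here

def release_gate(canary: str, baseline: str, tolerance: float = 0.005) -> str:
    """Decide whether the canary may be rolled out to all customers."""
    canary_errors = fetch_error_rate(canary)
    baseline_errors = fetch_error_rate(baseline)
    if canary_errors <= baseline_errors + tolerance:
        return "promote"  # roll out to the full fleet
    return "rollback"     # pull the canary and keep the current version

if __name__ == "__main__":
    print(f"release decision: {release_gate('checkout-v2-canary', 'checkout-v1')}")
```

In practice the comparison would run over a time window with proper statistical checks, but the gate-then-promote-or-rollback structure is the core of the control.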

# The DevOps cycle

The DevOps "infinity" cycle integrates and visualizes the development and IT operations cycles. Referencing this cycle is a great way to think about this form of systems development lifecycle.

At a high level, the cycle works as follows:

  • Based on feedback from operations, one can plan new releases.
  • The plans or ideas are converted to code by a development team.
  • The new code is built on a build machine and creates an artifact. This is also where unit testing is done.
  • The artifact is then submitted for automated integration testing.
  • After successful testing, the artifact can be scheduled for release into the production environment using the appropriate "risk barrier" controls.
  • Once the artifact is approved for release into the production environment, it is deployed based on the system's infrastructure configuration.
  • The artifact is now operational and can deliver services to its users.
  • The operational service must be monitored. This monitoring provides IT operations with the input needed for feedback to the development team, and that feedback in turn feeds the planning of new features and/or bug fixes (a small sketch of this feedback step follows below).
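
To show how the loop closes, the fragment below turns monitoring alerts into planning input. The alert structure and the `create_backlog_item` helper are hypothetical, not a specific tool's API; the point is only that operational signals feed the next planning step.

```python
# Sketch of the Ops-to-Dev feedback step: monitoring alerts become backlog
# items for the next planning cycle. The alert format and create_backlog_item
# helper are illustrative assumptions, not a specific tool's API.

ALERTS = [
    {"service": "checkout", "symptom": "p99 latency above 2s", "severity": "high"},
    {"service": "search", "symptom": "elevated 5xx rate", "severity": "medium"},
]

def create_backlog_item(service: str, symptom: str, severity: str) -> dict:
    """Turn an operational finding into a planning item (placeholder)."""
    return {"title": f"[{severity}] {service}: {symptom}", "source": "monitoring"}

backlog = [create_backlog_item(**alert) for alert in ALERTS]
for item in backlog:
    print(item["title"])
```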

Various tools are made specifically for each step in the process, and some even cover the entire cycle.

You might be thinking: "I understand the DevOps process, but at our company we don't use most (or any) of these tools, so we can't go 'full DevOps'." That is where the DevOps mindset comes in.

# The DevOps mindset

There are so many companies and people using software that it is impossible to have a one-size-fits-all approach. Newer companies/products/projects can adopt DevOps practices from the start, but:

  • How do you take your company's legacy infrastructure and processes, capable of only an annual software release, to monthly, weekly, or even daily releases?
  • How do you ensure reproducibility and stability, and stay resilient against team changes?

It starts with the development of a DevOps culture, and this will certainly take time. It took Amazon almost a decade to get this right, and they are still refining the process today. A way to start building a DevOps culture is to demonstrate the benefits to the organization, start small, and lay down key principles such as the four employed by GitLab:

  1. Automation of the software development lifecycle
  2. Collaboration and communication
  3. Continuous improvement and minimization of waste
  4. Hyperfocus on user needs with short feedback loops

The benefit of laying down key principles is that they can guide development and IT operations teams in forming a DevOps practice that works within your organization and your context. It is important that architectural decisions for development and operations consider automation and prepare for a streamlined approach. In the end, it should be a team effort.

# Summary

Software needs to run on hardware, and this becomes complex at scale. The evolution of maintaining software and operational hardware systems led to the formation of separate development and IT operations teams. This dividing line between the operational systems and the development processes solved the scaling problem, but it struggled with faster delivery of new features and bug fixes and with recovery from failures. DevOps provided a solution by enabling scalability, faster delivery, and faster recovery.

The DevOps methodology aims to optimize the systems development life cycle by integrating software development (Dev) and IT operations (Ops) work. It focuses on:

  • automation of manual tasks,
  • mitigating risk through automated testing,
  • implementing release controls,
  • reproducible (configurable) infrastructure,
  • and feedback loops from Ops to Dev based on the monitoring results.

DevOps has significant advantages for an organization but ultimately needs to be adopted through the development of a DevOps mindset (culture).

Feel free to comment below or get in touch with me through the contact form.
