The first computer program dates back to 1843: an algorithm intended to calculate a sequence of Bernoulli numbers. It wasn't until 1949 that both computer programs and their data could be stored in computer memory — it took more than a century for hardware technology to provide a mature enough platform to effectively support computer programming. Computer hardware without software is just a bunch of electronics, whereas software without hardware is text or binary data that cannot be interpreted. Each depends on the other to provide a usable product, and this interdependence between hardware and software matters when it comes to creating and running computer programs.
When a software developer writes a new program, it needs to run on a piece of hardware. On a small scale, the developer can also be the person maintaining the hardware, e.g. a hobbyist programmer writing some code at home.
When things start to scale, say to a small business, the software developer can still keep the few on-premises servers updated and install each new software version as it is released. Once the business grows, however, a single person can no longer keep the software up to date. Not only does the deployment process become increasingly complex, but during any kind of outage the developer can fall into the trap of debugging or applying hotfixes directly on the production system. This makes deploying new changes and maintaining an operational computer system difficult to repeat safely and consistently.
In larger organizations, IT operations teams were introduced to take over the deployment and maintenance aspects of the system. This gave software developers breathing room to focus entirely on writing quality software: they would hand new versions over to IT Operations, who would then deploy and maintain them on the organization's infrastructure.
The separation of Development and IT Operations was a great answer to the scalability and maintainability challenges of an organization's IT infrastructure. As systems grew more complex, however, the separation showed its limits in delivering both high throughput (new software) and stability (mean time to recover). Proposals to address this limitation by merging development and operations methodologies date back to the late 1980s and 1990s, and in 2009 the first DevOps conference was held.
Many enterprises that still keep IT operations and software development separated only release new software quarterly, yearly, or even once every two years because of the significant manual effort and collaboration such a release process requires. Enterprises employing DevOps principles can deploy new features multiple times a day.
Let's use Amazon's continuous improvement and software automation story as a case study. Amazon realized that it took approximately 14 days to go from idea to production (already fast compared to other companies), of which only about 1 to 2 days were spent turning the idea into peer-reviewed code. They invested in improving their Continuous Integration / Continuous Delivery (CI/CD) pipeline and achieved a staggering 90% reduction in time from check-in to production.
Adoption of this automated pipeline within Amazon grew as people saw the successes of the teams already using it. Adoption across diverse teams also led to individual teams finding their own solutions to problems shared by other teams. Amazon learned that building knowledge messages into their deployment tools helped share best practices with the users of these tools, which in turn drove higher adoption of those best practices.
Another big lesson from the increased adoption was that release frequency wasn't the only performance metric that mattered. Stability was equally important, and mitigating the risk of introducing bugs into production required sufficient testing at every stage of the deployment. Their CI/CD pipeline incorporated testing through the following stages:
These testing processes mitigate risk, but it is also important to keep tight control of the actual release process. This is done through extensive monitoring of various metrics combined with control options that act as a risk barrier between development and production. The controlled release is where the development cycle transitions into the operational cycle.
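To make the idea of a monitored, controlled release concrete, here is a minimal sketch in Python. It is purely illustrative — the stage names, the `release` gate, and the 1% error-rate threshold are assumptions for this example, not any specific vendor's pipeline: staged tests run first, and the canary metrics then decide between promotion and rollback.

```python
# Illustrative release gate: staged tests, then a monitored canary.
# The threshold and stage names are hypothetical, chosen for this sketch.
ERROR_RATE_THRESHOLD = 0.01  # promote only if canary error rate stays below 1%

def run_stage(name, check):
    """Run one pipeline stage; the release stops if its check fails."""
    passed = check()
    print(f"{name}: {'passed' if passed else 'failed'}")
    return passed

def canary_error_rate(samples):
    """Fraction of failed requests observed on the canary fleet."""
    return sum(1 for ok in samples if not ok) / len(samples)

def release(unit_ok, integration_ok, canary_samples):
    """Gate a release: staged tests first, then a metric-driven canary."""
    if not run_stage("unit tests", lambda: unit_ok):
        return "aborted"
    if not run_stage("integration tests", lambda: integration_ok):
        return "aborted"
    rate = canary_error_rate(canary_samples)
    if rate > ERROR_RATE_THRESHOLD:
        print(f"canary error rate {rate:.2%} above threshold, rolling back")
        return "rolled back"
    return "promoted"

# 1 failed request out of 200 is a 0.5% error rate, below the 1% gate.
print(release(True, True, [True] * 199 + [False]))  # → promoted
```

The point of the sketch is the shape of the control flow, not the specifics: every stage is an automated barrier, and the final promotion decision is driven by observed metrics rather than by a person watching the deployment.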
The DevOps "infinity" cycle integrates and visualizes the Development and IT Operations cycles. Referencing this cycle is a great way to think about this systems development lifecycle.
The cycle is explained on a high level as follows:
Various tools are made specifically for each step in the process. Some even have the functionality for the entire cycle. Here is a very short list to give you an idea:
You might be thinking: "I understand the DevOps process, but at our company we don't use most (or any) of the tools above, so we can't go full DevOps." That is where the DevOps mindset comes in.
There are so many companies and people using software that a one-size-fits-all approach is impossible. Newer companies/products/projects can adopt DevOps practices from the start, but:
It starts with developing a DevOps culture, and that will certainly take time. It took Amazon almost a decade to get this right, and they are still refining the process today. One way to start building a DevOps culture is to show the benefits to the organization, start small, and lay down key principles such as the four key principles employed by GitLab:
The benefit of laying down key principles is that they can guide Development and IT Operations teams in forming a DevOps practice that works within your organization and your context. It is important that architectural decisions for development and operations consider automation and prepare for a streamlined approach. In the end, it should be a team effort.
Software needs to run on hardware, and that becomes complex at scale. The evolution of maintaining software and the operational hardware systems led to the formation of separate development and IT operations teams. This dividing line between operational systems and development processes solved the scaling problem, but struggled with fast delivery of new features and bug fixes and with recovery from failures. DevOps provided a solution by enabling scalability, faster delivery, and faster recovery.
The DevOps methodology aims to optimize the systems development life cycle by integrating software development (Dev) and IT operations (Ops) work. It focuses on:
DevOps has significant advantages for an organization but ultimately needs to be adopted through the development of a DevOps mindset (culture).
Feel free to comment below or get in touch with me through the contact form.