As a technical doctrine, DevOps is largely about compressing and streamlining the entire development chain, from initial concept through base coding and QA/test to final customer delivery. However, while the doctrine clearly makes sense as a single management discipline, at the production level its positive momentum to date has largely come on the back of a fortunate meld of thought leadership, human curiosity, and a host of innovative toolsets.

As a happy coincidence, just as DevOps began to emerge from its position as a niche doctrine toward greater engineering importance, new cloud-oriented tools appeared as well. Together, these confluent ‘chicken and egg’ events triggered a number of immediate enterprise software advances, as teams applied the DevOps rapid-production doctrine, its tendency to engage development communities at large, and a ready list of cloud-based tools.

Consequently, this intellectual momentum created more interest in ways to quickly produce, package, and replicate previously disassociated code and the processes around it. In turn, those precursor deliberations ultimately led to what we know today as the microservices sector, driven by its core element: the ‘container.’

In simple terms, a cloud container is a compartmentalized code package that is self-sustaining and can be chained with other containers as necessary to quickly create larger applications (see the short sketch after the list below). General development characteristics include:

– Reduced maintenance: Because containers are self-sustaining, external process dependencies, such as the need for recurrent updating or patch implementation, are highly limited.
– Marginal infrastructure requirements: Because containers carry what they need with them, they do not require external overhead, process, or utility support.
– Enhanced flexibility: Containers run on nearly all architectures, including virtual-machine, physical, and cloud-based platforms.
– Security-adept: Because containers are self-sustaining and compartmentalized, security threat vectors are significantly narrowed, though not eliminated entirely.
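
To make that self-sustaining quality concrete, here is a minimal Python sketch using the Docker SDK for Python (the docker package); the image name and command are illustrative assumptions rather than part of any particular deployment. The container ships its own interpreter and libraries, so nothing beyond a container engine needs to be installed on the host.

import docker

# Connect to the local Docker engine using environment defaults.
client = docker.from_env()

# Run a throwaway container; the interpreter, libraries, and OS userland
# all live inside the image rather than on the host machine.
output = client.containers.run(
    "python:3.12-slim",      # self-contained runtime image
    ["python", "-c", "print('hello from an isolated container')"],
    remove=True,             # clean up once the process exits
)
print(output.decode())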

From an enterprise DevOps perspective, then, the ‘container’ concept provides the following direct advantages:

– Enhanced up-front functionality: The ‘container’ construct allows DevOps teams to define, chain, and leverage expanded functionality from the outset of a development project, which in turn reduces time and resource costs downstream.
– Off-the-shelf toolsets: DevOps teams can leverage a wide range of tools that support the design, packaging, and automation of nested development processes, and that provide for continuous integration (CI) and continuous delivery (CD). These characteristics apply throughout the lifecycle, whether implementing or maintaining an on-premises or a cloud-based system. Notable ‘container’ engines and orchestrators include Docker, Kubernetes (Google Container Engine), and Apache Mesos; a brief CI-style sketch follows this list.
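
As an example of what such tooling looks like in practice, the sketch below uses the Docker SDK for Python to build a project image and run its test suite inside it, the way a simple CI stage might; the build path, image tag, and test command are assumptions made for illustration.

import docker

client = docker.from_env()

# Build an image from the current project checkout (path and tag are examples).
image, build_logs = client.images.build(path=".", tag="myapp:ci")

# Run the test suite inside the freshly built image, as a CI stage would;
# a non-zero exit raises docker.errors.ContainerError, failing the stage.
result = client.containers.run("myapp:ci", ["pytest", "-q"], remove=True)
print(result.decode())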

Together, these elements create two primary values for the enterprise operator:

– The ‘container’ concept allows for end-to-end product management by creating a ‘single environment’ encompassing development, test/QA, staging, and production within one operational envelope.
– In a more granular sense, this means that all elements can be created and delivered anywhere, on any system, as desired, across all network types, including public cloud, private cloud, and on-premises networks (illustrated in the sketch below).
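
One way to picture that portability: the same versioned image is promoted unchanged from environment to environment, with only the injected configuration differing. In this Python sketch the image tag, environment names, and variable names are all hypothetical.

import docker

client = docker.from_env()

# Promote one immutable image across environments; only the injected
# configuration changes (names and URLs below are illustrative).
for env_name, db_url in [
    ("staging", "postgres://staging-db/app"),
    ("production", "postgres://prod-db/app"),
]:
    client.containers.run(
        "myapp:1.4.2",
        detach=True,
        name=f"myapp-{env_name}",
        environment={"APP_ENV": env_name, "DATABASE_URL": db_url},
    )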

Practically speaking, therefore, there are a number of ways that containers play a central role in today’s systems development. Some of these include:

Microservices: As previously mentioned, the entire premise of the microservices sector is driven by the ‘container’ construct. A user can execute and operate multiple ‘containers’ simultaneously, such as web and/or application servers, active message management, or parallel utility processes, without any of these operations disturbing the others. Consequently, this approach is readily scalable: not only can the process chain expand or contract on demand, but each ‘container’ can represent a discrete and isolated set of operational values. For example, Docker offers complete isolation, creating an ability to spin up differently scaled processes in concert without any concern about an individual ‘container’s’ languages or libraries. A minimal sketch follows.
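
For instance, the Python sketch below, again using the Docker SDK, starts a web tier and a worker tier side by side and then scales the workers by adding replicas; the image names and the sleep-based worker are stand-ins for real services.

import docker

client = docker.from_env()

# Start a web server and a background worker side by side; each container
# is isolated, so the two can use entirely different languages and libraries.
web = client.containers.run("nginx:alpine", detach=True, name="web",
                            ports={"80/tcp": 8080})
worker = client.containers.run(
    "python:3.12-slim", detach=True, name="worker",
    command=["python", "-c", "import time; time.sleep(3600)"],  # stand-in workload
)

# Scale the worker tier on demand by simply launching more replicas.
replicas = [
    client.containers.run("python:3.12-slim", detach=True,
                          command=["python", "-c", "import time; time.sleep(3600)"])
    for _ in range(2)
]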

ETL and batch processing: ‘Containers’ are particularly useful for batch or ETL jobs, since each job can be packaged individually and then launched within a clustered systems matrix. The ‘container’ approach can even run multiple versions of a job in isolation, side by side. This also allows the user to share cluster resources in real time so that cycle loads are balanced appropriately, creating more efficient overall systems utilization while reducing overall costs, as in the sketch below.
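
The sketch below runs two versions of a hypothetical ETL image in parallel and waits for both to finish; the image name, version tags, and environment variable are assumptions made for illustration.

import docker

client = docker.from_env()

# Launch the current and candidate versions of the same ETL job in
# parallel; isolation means the two pipelines cannot interfere.
jobs = [
    client.containers.run(f"etl-job:{version}", detach=True,
                          environment={"BATCH_DATE": "2024-01-31"})
    for version in ("1.9", "2.0-rc1")
]

# Block until each batch run exits, then report its status code.
for job in jobs:
    status = job.wait()
    print(job.image.tags, status["StatusCode"])
    job.remove()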

Maximum CI/CD operations: Most ‘container’ management systems, Docker among them, provide some form of image versioning. In this scenario, users can execute revised builds by starting from an existing code inventory, replicating the elements located there, and layering change elements on top to create a new image. That result, in turn, can be immediately tested and pushed forward to the user’s production servers, as sketched below.
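
In Python with the Docker SDK, that build-tag-push cycle might look like the following; the registry address and tags are hypothetical placeholders.

import docker

client = docker.from_env()

# Build a revised image; the Dockerfile's FROM line supplies the base
# layers being replicated, and the changed code is layered on top.
image, build_logs = client.images.build(path=".",
                                        tag="registry.example.com/myapp:2.1")

# After tests pass, push the versioned image toward production.
client.images.push("registry.example.com/myapp", tag="2.1")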

