Given how pervasive today’s business technologies have become, it is interesting to watch large businesses trying to respond like small ones. After all, large-scale enterprises tend to foster internal bureaucracies that constrain operational responsiveness everywhere, from “vanilla” administrivia all the way down to the warehouse loading dock.

However, emerging systems platforms and supporting methodologies have begun to change that legacy picture. Two of the more useful elements are Amazon Web Services (AWS) as a platform, and DevOps as a parallel process-management methodology.

As a cloud platform, AWS represents one of the easiest and most flexible ways for an enterprise to launch operations at any scale. Its reach extends across 12 international regions, and beyond raw infrastructure it offers an extended catalog of services ranging from compute, static/active storage, networking, database management, and data analytics to mobility and a clutch of Internet of Things (IoT) toolsets.
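
For a concrete sense of that reach, the short Python sketch below uses the boto3 SDK to enumerate the regions and service endpoints the SDK knows about. It assumes only that boto3 is installed; the region and service counts it prints will reflect whatever SDK version is in use, not the figures quoted above.

    import boto3

    # Create a default boto3 session; no API calls are made yet.
    session = boto3.session.Session()

    # Regions currently available for EC2 in the standard AWS partition.
    regions = session.get_available_regions("ec2")
    print(f"EC2 regions ({len(regions)}): {', '.join(regions)}")

    # The catalog of services the SDK can talk to (compute, storage,
    # databases, analytics, IoT, and so on).
    services = session.get_available_services()
    print(f"Available service endpoints: {len(services)}")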

On top of these core services, AWS also provides systems deployment and management services, plus a comprehensive list of developer tools. Consequently, AWS’ business value is largely driven by the potential for rapid, on-demand systems deployment on top of a homogeneous network and services infrastructure.

However, as one might expect, the AWS cloud doesn’t do any of the heavy intellectual lifting – people do, which is, of course, where DevOps consulting and DevOps consultants come in. Any enterprise that needs to produce highly scalable applications quickly should consider committing to the discipline.

Since DevOps is driven by the premise of streamlining the end-to-end production chain, it pairs naturally with AWS’ rapid-deployment model. Together, the two are well suited to “on the go” app development, while ensuring that quality assurance is built in rather than “tacked on.”

To gain a sense of just how quickly a DevOps installation can evolve, here’s a case scenario in which a DevOps team executes a project leveraging a number of AWS toolsets. Pay attention to the overall speed of completion for each project step.

Case study – online shopping network:

The business scenario involves scaling up an existing online shopping portal. Various toolsets are engaged, including an Ansible Tower instance hosted on Amazon’s Elastic Compute Cloud (EC2), a load-balanced Jenkins layer, a CloudFormation layer, and a series of automated Packer scripts.
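
To illustrate how the Packer piece might be scripted, here is a minimal Python wrapper that validates and then builds a hypothetical template. The template file name is a placeholder, not an artifact from the case study, and the sketch assumes the Packer CLI is installed and on the PATH with AWS credentials configured in the usual way.

    import subprocess

    # Hypothetical Packer template describing a golden AMI; the file name
    # is illustrative only.
    TEMPLATE = "ami-template.json"

    def run(cmd):
        """Run a Packer CLI command and fail loudly if it errors."""
        print("running:", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Validate the template before spending time (and money) on a build.
    run(["packer", "validate", TEMPLATE])

    # Build the image; Packer reads AWS credentials from its usual sources.
    run(["packer", "build", TEMPLATE])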

These toolsets are managed by professional DevOps consultants, who begin with an overall project feasibility assessment, followed by a detailed resources investigation, the development of a formal transition plan, and a preparatory phase referred to as the “Project initiation.”

Here’s how the scenario’s pro forma transition schedule plays out:

Project initiation – This phase allows the DevOps team to establish a baseline set of resources for use throughout the project. Steps in this task include the following (a minimal provisioning sketch follows the list):

  • Initiate AWS production, QA, and development environments
  • Define AWS environments using best practices clustered around all VPCs
  • Define and initiate three private subnets and multiple public subnets across multiple regions
  • Establish proper routing and gateways
  • Establish a central framework for VPN connectivity
  • Establish SSH bastion boxes
  • Establish firewalls
  • Define and establish IAM policies for the necessary VPC segmentation
  • Define and establish a framework for VPC Flow Log capture within each VPC
  • Establish a load-balancing framework for internal- and external-facing sites
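
The Python sketch below illustrates a small slice of these steps with the boto3 SDK: one VPC, one private and one public subnet, an internet gateway with routing, and VPC Flow Logs. The region, CIDR ranges, log group, and IAM role ARN are illustrative assumptions, not values taken from the case study.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

    # 1. Create the VPC that will host one environment (e.g. QA).
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

    # 2. One private and one public subnet (the case study calls for more).
    private = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]
    public = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24")["Subnet"]

    # 3. Internet gateway and a route table for the public subnet.
    igw = ec2.create_internet_gateway()["InternetGateway"]
    ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc_id)

    rtb = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]
    ec2.create_route(
        RouteTableId=rtb["RouteTableId"],
        DestinationCidrBlock="0.0.0.0/0",
        GatewayId=igw["InternetGatewayId"],
    )
    ec2.associate_route_table(RouteTableId=rtb["RouteTableId"], SubnetId=public["SubnetId"])

    # 4. Turn on VPC Flow Logs; the log group and IAM role are placeholders.
    ec2.create_flow_logs(
        ResourceIds=[vpc_id],
        ResourceType="VPC",
        TrafficType="ALL",
        LogGroupName="qa-vpc-flow-logs",
        DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
    )

    print("Provisioned VPC", vpc_id)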

VPN – Client VPCs are integrated with the client’s core router, creating a central point of connectivity into the Amazon layer. Jenkins operates on a private IP address, ensuring that public load balancers and/or public IP addresses are not engaged unless absolutely necessary. The model also yields an intrinsically secure infrastructure with minimal security-group overhead.
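
A hedged sketch of how such a site-to-site link might be wired up with boto3 follows; the public IP, BGP ASN, and VPC ID for the client’s side are placeholders rather than case-study values.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

    # Represent the client's on-premises core router (placeholder IP/ASN).
    cgw = ec2.create_customer_gateway(
        Type="ipsec.1",
        PublicIp="203.0.113.10",
        BgpAsn=65000,
    )["CustomerGateway"]

    # Virtual private gateway attached to the client VPC (placeholder VPC ID).
    vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
    ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId="vpc-0123456789abcdef0")

    # Site-to-site VPN connection binding the two together.
    vpn = ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=cgw["CustomerGatewayId"],
        VpnGatewayId=vgw["VpnGatewayId"],
        Options={"StaticRoutesOnly": True},
    )["VpnConnection"]

    print("VPN connection:", vpn["VpnConnectionId"])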

Ansible Tower – This element provides a solid foundation for automating common tasks across the client’s existing infrastructure. It also directly supports scalable frameworks, engaging multiple availability zones as necessary. The goal is to trigger resource-testing branches and allow the DevOps team to initiate and stage the proper toolsets while maintaining the necessary infrastructure insight.
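
By way of illustration, Ansible Tower exposes a REST API for launching job templates; the minimal Python sketch below drives it with the requests library. The Tower host, token, template ID, and extra variables are hypothetical values, not details from the engagement.

    import requests

    # Hypothetical Tower endpoint, OAuth token, and job template ID.
    TOWER_HOST = "https://tower.example.internal"
    TOKEN = "REPLACE_WITH_TOKEN"
    JOB_TEMPLATE_ID = 42

    headers = {"Authorization": f"Bearer {TOKEN}"}

    # Launch the job template that runs a common automation task
    # (for example, patching or a configuration drift check).
    resp = requests.post(
        f"{TOWER_HOST}/api/v2/job_templates/{JOB_TEMPLATE_ID}/launch/",
        headers=headers,
        json={"extra_vars": {"target_env": "qa"}},  # illustrative variables
        timeout=30,
    )
    resp.raise_for_status()
    print("Launched Tower job:", resp.json()["job"])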

Deep Dive – This research effort validates each element and resource that the external team identifies. The approach drives engagement between DevOps project leaders, client developers, and domain owners, aimed at building a complete understanding of the client’s structure.

Jenkins – This task calls for the placement of a Jenkins cluster within multiple VPCs. The cluster is used throughout the project to create the jobs necessary to provision and deploy previously staged resources.
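
For illustration, provisioning jobs like these are typically triggered through Jenkins’ remote API. The Python sketch below is minimal and hedged: the Jenkins URL, credentials, job name, and parameter are hypothetical placeholders.

    import requests

    # Hypothetical Jenkins master reachable over the private VPN address space.
    JENKINS_URL = "http://10.0.1.50:8080"
    AUTH = ("devops-bot", "REPLACE_WITH_API_TOKEN")
    JOB = "provision-staged-resources"

    # Trigger a parameterized build; Jenkins queues the job and responds
    # with a Location header pointing at the queue item.
    resp = requests.post(
        f"{JENKINS_URL}/job/{JOB}/buildWithParameters",
        auth=AUTH,
        params={"TARGET_VPC": "qa"},  # illustrative parameter
        timeout=30,
    )
    resp.raise_for_status()
    print("Job queued at:", resp.headers.get("Location"))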

CloudFormation – As a central operational framework, this stack-management layer serves as the structural backbone, the primary/subordinate job scheduler, and a comprehensive test environment. The DevOps team uses the toolset to effect and validate the project’s overall operational requirements.
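
A hedged example of how the team might drive that layer from Python with boto3 follows; the stack name, template file, parameter, and region are placeholders, and the template itself is assumed to exist separately.

    import boto3

    cfn = boto3.client("cloudformation", region_name="us-east-1")  # region is an assumption

    # Read a template describing one slice of the shopping portal stack;
    # the file name is a placeholder.
    with open("shopping-portal.yaml") as f:
        template_body = f.read()

    # Create the stack and wait for CloudFormation to report success.
    cfn.create_stack(
        StackName="shopping-portal-qa",  # placeholder name
        TemplateBody=template_body,
        Parameters=[{"ParameterKey": "Environment", "ParameterValue": "qa"}],
        Capabilities=["CAPABILITY_NAMED_IAM"],  # needed only if the template creates IAM resources
    )

    waiter = cfn.get_waiter("stack_create_complete")
    waiter.wait(StackName="shopping-portal-qa")
    print("Stack created")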

Transition – This management task is driven by the DevOps team leader, who serves as the client’s point of contact during final implementation and project launch.

Based on accepted DevOps project assumptions, these task elements resolve to a total of 5.1 working months, versus 20.4 working months under a legacy development schedule (the 5.1-month figure applies the roughly 4:1 ratio described in the note below to the 20.4-month baseline).

This metric offers a clear sense of the significance of the “AWS/DevOps value,” and why this particular technical pairing is driving a seminal change in how globally relevant business is conducted.

** Based on our expertise as a top DevOps consulting firm, and on discussions with a legacy-development expert with 40 years of experience, we estimate that removing the bureaucratic blocks between Ops, Dev, and QA alone saves time and motion on the order of 4:1. A typical 4-month legacy project would therefore shrink to roughly 1 month.

