Consumer Products Company Moves to Native AWS DevOps
This large multinational technology services firm was developing and hosting hundreds of applications deployed to half a dozen Kubernetes clusters managed with Helm. These clusters and the related Jenkins pipelines had grown significantly over the years. The IT department was eager to move from Jenkins/Kubernetes deployments to cloud-native deployment pipelines and managed, scalable hosting solutions from AWS. They felt that the Kubernetes learning curve was too steep, and they no longer wanted to maintain the infrastructure required to support their self-managed DevOps architecture. At the same time, they needed help designing a secure, scalable, and repeatable cloud architecture for their applications. Enter Winmill as a trusted advisor and implementer for this transition to the brave new world of cloud-native, serverless, managed services in a native AWS DevOps architecture.
- Winmill was tasked with leveraging native AWS DevOps capabilities in an architecture that minimized rebuilds and redeployments during code changes. The architecture had to allow for native cloud development and iterative deployment while maintaining a completely locked-down and secure hosting environment. The pilot application for this effort used a microservice-based architecture; it was the client's largest and most complex microservice architecture to date.
- The transition had to be smooth and the existing Kubernetes-based DevOps architecture needed to be maintained while the new pipelines were created.
- The new DevOps architecture had to include pipelines to build and deploy each microservice individually. The source code for all the microservices had to be housed in a single Git repository due to shared libraries and interdependent service build definitions, with each microservice's codebase in its own directory within the repository. However, AWS native DevOps tools can't trigger individual build pipelines based on a change to a directory in a repository. The delivered solution had to overcome this limitation using native AWS tools while minimizing costs and maximizing reliability.
- Winmill’s solution had to be reliable, re-usable and simple to maintain.
- Winmill was able to design and implement an architecture that met all the client's requirements by combining our extensive development, DevOps, and cloud architecture expertise. Our solution extends AWS's native capabilities with a serverless AWS Lambda function that is triggered by a JSON webhook POST from GitLab. The Lambda function parses the JSON payload to determine in which directory a commit was made, and therefore which microservice build and deployment pipeline needs to be triggered. The Lambda function is externally configurable via a DynamoDB table as well as a set of runtime environment variables. This approach allows the code running in the Lambda function to be reused for any future project with a microservice backend, without ever having to be rebuilt.
- The build and deployment pipelines are built with AWS CodePipeline and deploy each microservice to scalable Elastic Beanstalk environments, which run on a private network behind a DMZ within the cloud architecture. The microservices are accessible from the outside world only via the AWS API Gateway managed service, and the connections from the gateway to the services are made via VPC link to avoid exposing the microservice deployments to the internet.
- Winmill was able to implement this new architecture while simultaneously maintaining the legacy Jenkins/Helm/Kubernetes pipelines and production clusters.
- The client has found the new AWS Cloud native architecture significantly easier to maintain. They are able to devote far less time to troubleshooting Helm charts and digging into obscure Kubernetes or Docker configuration issues.
- This gentler learning curve has led to higher developer adoption and engagement, since developers can more easily see how the pipelines work and how the hosting environment is configured. The AWS Console is far more user-friendly than Jenkins configuration files, Helm charts, and Kubernetes cluster CLIs.
- The client has also found the new architecture to be more reliable than their home-grown Kubernetes clusters and Jenkins pipelines. They went from multiple mysterious Jenkins outages and deployment failures to smooth builds with reliable deployments. Much of this improvement in reliability is due to the fully serverless nature of the DevOps support infrastructure. The only servers to worry about in the entire DevOps architecture are the deployment instances being managed by AWS Beanstalk.
- This simplified, serverless DevOps architecture is also more secure, since the entire hosting infrastructure is now behind a DMZ on a private network, protected by the API Gateway and AWS WAFs, with no public IPs required for any server running a deployed microservice. The DevOps support infrastructure is no longer a potential attack vector, since there are no standing servers left to attack.
- The Elastic Beanstalk deployment environments are now effectively infinitely scalable, bound only by the sizing and instance-count restrictions the client wishes to impose. Previously, the cluster size for any given service was bound by the fixed capacity of the Kubernetes cluster.
- Because all the architecture components are native to AWS, the entire deployment is integrated into the AWS service ecosystem. Logs and alerting are handled by AWS CloudWatch and streamed to the Amazon Elasticsearch Service. Load balancers are now native to AWS and can be linked directly to other AWS services, without being given public IPs or exposed beyond the DMZ. And the client will be ready to take advantage of new services and integration opportunities as they are made available by AWS.
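The directory-to-pipeline routing described above can be sketched in a few lines of Python. This is an illustrative, simplified version, not the delivered code: the environment variable name (`PIPELINE_MAP_TABLE`), the DynamoDB key schema (`directory` → `pipeline`), and the payload shapes are all assumptions made for the example. The GitLab push event lists the files each commit added, modified, or removed; the handler reduces those paths to top-level directories, looks up the matching pipeline in DynamoDB, and starts it via CodePipeline.

```python
# Hypothetical sketch of a GitLab-webhook-to-CodePipeline router.
# Table name, env vars, and key names are illustrative assumptions.
import json
import os


def changed_directories(push_event: dict) -> set:
    """Collect the top-level directories touched by any commit in a
    GitLab push event payload (files at the repo root are ignored)."""
    dirs = set()
    for commit in push_event.get("commits", []):
        changed = (
            commit.get("added", [])
            + commit.get("modified", [])
            + commit.get("removed", [])
        )
        for path in changed:
            if "/" in path:
                dirs.add(path.split("/", 1)[0])
    return dirs


def handler(event, context):
    """Lambda entry point: map changed directories to pipelines and start them."""
    # boto3 clients are created inside the handler to keep this sketch's
    # parsing logic testable; production Lambda code would typically
    # create them once at module scope.
    import boto3

    table = boto3.resource("dynamodb").Table(os.environ["PIPELINE_MAP_TABLE"])
    codepipeline = boto3.client("codepipeline")

    body = json.loads(event["body"])  # API Gateway proxy integration payload
    started = []
    for directory in changed_directories(body):
        item = table.get_item(Key={"directory": directory}).get("Item")
        if item:  # unmapped directories are simply ignored
            codepipeline.start_pipeline_execution(name=item["pipeline"])
            started.append(item["pipeline"])
    return {"statusCode": 200, "body": json.dumps({"started": started})}
```

Because the mapping lives in DynamoDB rather than in the code, onboarding a new microservice (or an entirely new project) is a matter of adding a table item, which is what lets the same deployed function serve every future microservice repository without a rebuild.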
The client was so happy with the first pilot architecture that they directed every new project with a microservice backend to use the new DevOps architecture. Eventually 90% of all legacy Jenkins/Kubernetes projects were either transitioned to this new architecture or were retired entirely. We were able to drastically reduce the number and size of their Kubernetes clusters.