Context.
Manatal is a leading AI Recruitment platform that helps recruiters source and hire candidates faster. Manatal is used by 10,000+ recruitment teams across 135+ countries and is trusted by many Fortune 500 companies around the globe.
Manatal's tech stack consists of multiple single-page applications, Python-based backend services hosted on AWS Elastic Kubernetes Service (recently migrated from the Heroku Platform as a Service), and relational databases hosted on AWS RDS, among other AWS services.
Problem Statement.
Set up Heroku PaaS-style, ephemeral Preview environments for all frontend and backend services on Kubernetes
Allow a Preview service to use another service in Preview mode as a dependency
Configure the latest migrations and seed data in Preview environments, reducing the time to test a Pull Request
Allow automated deletion of the Preview environment if it’s not actively used
Outcome/Impact.
Automated Preview environments based on GitHub Pull Requests with a specific label
Use existing CI/CD process instead of introducing new tools to achieve faster developer adoption
Save costs by cleaning up the Preview environment when it’s not actively being used
Integrate with GitHub Deployments and Jira to provide engineering velocity reports
Solution.
The Manatal team had recently migrated from Heroku to AWS Elastic Kubernetes Service (EKS) when they approached One2N. In the Kubernetes setup, developers were missing a key Heroku feature: "Review Apps based on Pull Requests".
Review Apps run the code in any GitHub Pull Request in a complete, disposable Heroku app. Review Apps each have a unique URL you can share, making them a great way to propose, test, and merge changes to your code base.
—Heroku Documentation
The Manatal team wanted the same experience on Kubernetes so that developers would not face major changes in their workflow. Since Manatal was already using ArgoCD for GitOps, we decided to use ArgoCD for Preview environments too; that way, the team would not need to learn yet another Kubernetes tool. We used ArgoCD's ApplicationSet controller with its PullRequest generator to create the ephemeral Preview environments.
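To make this concrete, below is a minimal sketch of an ApplicationSet using the PullRequest generator. The organization, repository, chart path, and label are illustrative placeholders, not Manatal's actual configuration.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: preview-example-service
  namespace: argocd
spec:
  generators:
    # Polls GitHub for open PRs that carry the "preview" label
    - pullRequest:
        github:
          owner: example-org          # hypothetical org
          repo: example-service       # hypothetical repo
          labels:
            - preview
        requeueAfterSeconds: 60
  template:
    metadata:
      # One Application per PR, e.g. preview-example-service-123
      name: 'preview-example-service-{{number}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example-org/example-service.git
        targetRevision: '{{head_sha}}'
        path: deploy/preview          # hypothetical chart path
        helm:
          values: |
            image:
              tag: '{{head_sha}}'
      destination:
        server: https://kubernetes.default.svc
        # Each PR gets its own namespace
        namespace: 'preview-example-service-{{number}}'
      syncPolicy:
        automated:
          prune: true
        syncOptions:
          - CreateNamespace=true
```

When a PR is closed or loses the label, the generator stops emitting the corresponding Application, and the ApplicationSet controller deletes it along with its resources.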
We implemented the workflow shown below.
Workflow for Ephemeral Preview Environments on Kubernetes
Here are some details about the workflow.
The developer creates a Pull Request (PR) and attaches a "preview" label to it. This label is customizable.
PR creation triggers the Continuous Integration build process using GitHub Actions. A Docker image artifact is built and pushed to AWS Elastic Container Registry (ECR); a sketch of this CI job appears after these steps.
The ArgoCD ApplicationSet controller detects the PR and triggers the application deployment in a separate namespace.
In this namespace, ArgoCD provisions the service-specific Kubernetes resources, such as Ingress, Service, and Deployment.
The developers and QA team can access the Preview environment via the Ingress URL.
Once the PR is closed/merged, or the "preview" label is removed, ArgoCD automatically removes all the resources of the Preview environment.
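Here is an illustrative GitHub Actions job for the build-and-push step above. The workflow name, repository, IAM role, and region are hypothetical, and it assumes OIDC-based authentication to AWS.

```yaml
name: preview-build
on:
  pull_request:
    types: [opened, synchronize, labeled]
jobs:
  build-and-push:
    # Build only for PRs that carry the "preview" label
    if: contains(github.event.pull_request.labels.*.name, 'preview')
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # OIDC token for AWS authentication
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-actions-ecr  # hypothetical role
          aws-region: us-east-1
      - uses: aws-actions/amazon-ecr-login@v2
        id: ecr
      # Tag the image with the PR head SHA so the ApplicationSet template can reference it
      - run: |
          IMAGE="${{ steps.ecr.outputs.registry }}/example-service:${{ github.event.pull_request.head.sha }}"
          docker build -t "$IMAGE" .
          docker push "$IMAGE"
```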
ArgoCD provides a general framework for creating Preview environments. However, here are some challenges we had to solve when building Preview environments on top of ArgoCD.
Dependency Management for Services
Seed Data Management
Keeping costs low for Preview environments
Dependency Management for Services
For frontend services, setting up Preview environments is easy. For backend services, we also have to set up their dependent components, such as databases, queues, caches, and other backend services.
For this, we chose a hybrid approach: the database, queue, and cache were provisioned specifically for the Preview environment in the same namespace, while dependent backend services were used from the shared Staging environment. This let us save on resource costs for shared backend services while still keeping the database, queue, and cache isolated.
We introduced configuration overrides to solve an interesting scenario. Consider two services, Service A and Service B, where Service A depends on Service B. A developer working on a Pull Request of Service A can use Service B either from the Staging environment or from any Preview environment of Service B. We enabled developers to point to any version of Service B (either Staging or another Preview environment) by changing the configuration in ArgoCD.
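As a sketch, the dependency endpoint can be a single Helm value that defaults to Staging and is overridden per Preview Application. All names below are illustrative, not Manatal's actual configuration.

```yaml
# values.yaml for Service A: default to the shared Staging instance
serviceB:
  baseUrl: https://service-b.staging.example.com

# Override in the ArgoCD Application for a specific PR, pointing Service A
# at a Preview environment of Service B (in-cluster DNS name, hypothetical):
# serviceB:
#   baseUrl: http://service-b.preview-service-b-456.svc.cluster.local
```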
Seed Data Management
Each backend service in the Preview environment has its own newly created database. To test the service in a Preview environment, the team had to create and update database records, which was time-consuming. We solved this by building customized Docker images with pre-seeded data. We also made it easy for developers to update the seed data by replacing a dump file in a Git repository, which puts the seed data under version control as well. With this, the team does not have to start from scratch, reducing the time to test the Preview environment.
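One way to implement this, sketched below for PostgreSQL with hypothetical names: build the database image on top of the official postgres image, copying the version-controlled dump into /docker-entrypoint-initdb.d/, which the image restores automatically on first startup. CI rebuilds the image whenever the dump file changes in Git.

```yaml
# Per-Preview database Deployment (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: preview-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: preview-db
  template:
    metadata:
      labels:
        app: preview-db
    spec:
      containers:
        - name: postgres
          # Hypothetical pre-seeded image: official postgres plus the seed
          # dump in /docker-entrypoint-initdb.d/, loaded on first startup
          image: example.registry/postgres-seeded:latest
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: preview-db-credentials
                  key: password
          ports:
            - containerPort: 5432
```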
Keeping costs low for Preview environments
As Preview environments are created for every Pull Request with a specific label, the number of provisioned environments can grow quickly. To keep infrastructure costs in check, we suggested running the Preview environments on spot instances in Kubernetes. Spot instances are a good fit for non-critical workloads like Preview environments.
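For example, on EKS the preview pods can be pinned to spot capacity using the capacity-type label that managed node groups expose; the toleration below assumes the spot node group is tainted, which varies by cluster setup.

```yaml
# Pod spec fragment for preview Deployments (sketch)
spec:
  nodeSelector:
    eks.amazonaws.com/capacityType: SPOT   # EKS managed node group label
  tolerations:
    - key: "spot"            # assumed taint on the spot node group
      operator: "Exists"
      effect: "NoSchedule"
```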
If a Pull Request has no new activity (commits, comments, etc.) for a certain period of time, the "preview" label is automatically removed. This, in turn, deletes the Preview environment to save costs.
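A scheduled GitHub Actions job can perform this cleanup. The sketch below (with a hypothetical three-day inactivity window) removes the label using the GitHub CLI available on hosted runners; ArgoCD then prunes the environment as described above.

```yaml
name: cleanup-stale-previews
on:
  schedule:
    - cron: '0 * * * *'   # hourly
jobs:
  remove-stale-labels:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      - env:
          GH_TOKEN: ${{ github.token }}
        run: |
          # Find open PRs labeled "preview" with no updates in the last 3 days
          cutoff=$(date -u -d '3 days ago' +%Y-%m-%dT%H:%M:%SZ)
          gh pr list --repo "$GITHUB_REPOSITORY" --label preview \
            --json number,updatedAt \
            --jq ".[] | select(.updatedAt < \"$cutoff\") | .number" |
          while read -r pr; do
            gh pr edit "$pr" --repo "$GITHUB_REPOSITORY" --remove-label preview
          done
```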
In summary, we were able to roll out Preview environments for all of Manatal's services (both frontend and backend) on their existing tech stack. All this work was carried out by just one senior engineer in less than two months.