Staging should contain stable and well-tested features.

So, assuming the team is using Kubernetes, what would be a good practice to host these environments? So far we've considered two options:

1. Use one K8s cluster for each environment.
2. Use only one K8s cluster and keep the environments in different namespaces.

(1) seems the safest option, since it minimizes the risk of potential human mistakes and machine failures that could put the production environment in danger. However, it comes with the cost of more master machines and more infrastructure management.

(2) looks like it simplifies infrastructure and deployment management because there is one single cluster, but it raises a few questions:

- How does one make sure that a human mistake won't impact the production environment?
- How does one make sure that a high load in the staging environment won't cause a loss of performance in the production environment?

There might be some other concerns, so I'm reaching out to the K8s community on StackOverflow to get a better understanding of how people deal with these sorts of challenges.

Take a look at this blog post from Vadim Eisenberg (IBM / Istio): Checklist: pros and cons of using multiple Kubernetes clusters, and how to distribute workloads between them. I'd like to highlight some of the pros/cons:

Reasons to have multiple clusters:

- Separation of production/development/test: especially for testing a new version of Kubernetes, of a service mesh, or of other cluster software
- Compliance: according to some regulations, some applications must run in separate clusters/separate VPNs
- Cloud/on-prem: to split the load between on-premise services

Reasons to have a single cluster:

- Reduce setup, maintenance, and administration overhead

Considering a not too expensive environment, with average maintenance, and yet still ensuring security isolation for production applications, I would recommend:

- 1 cluster for DEV and STAGING (separated by namespaces, maybe even isolated using Network Policies, as in Calico; see the policy sketch after this answer)
- 1 cluster for PROD

It's a good practice to keep development, staging, and production as similar as possible:

> Differences between backing services mean that tiny incompatibilities crop up, causing code that worked and passed tests in development or staging to fail in production. These types of errors create friction that disincentivizes continuous deployment.

Combine a powerful CI/CD tool with helm. You can use the flexibility of helm values to set default configurations, overriding only the configs that differ from one environment to another; a sketch of that pattern follows.
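As a minimal sketch of that values-override pattern (the chart layout, file names, and keys here are hypothetical), shared defaults live in one values file and each environment overrides only what differs:

```yaml
# values.yaml -- shared defaults (hypothetical chart)
replicaCount: 1
image:
  repository: example/my-app
  tag: "1.4.0"
resources:
  requests:
    cpu: 100m
    memory: 128Mi
```

```yaml
# values-production.yaml -- only the keys that differ from the defaults
replicaCount: 3
resources:
  requests:
    cpu: 500m
    memory: 512Mi
```

```shell
# Later -f files override earlier ones, so each environment starts
# from the shared defaults and applies only its own overrides.
helm upgrade --install my-app ./chart -n staging -f values.yaml -f values-staging.yaml
helm upgrade --install my-app ./chart -n production -f values.yaml -f values-production.yaml
```

A CI/CD pipeline can then run the same `helm upgrade --install` command for every environment, varying only the values file and the target namespace (or the kubeconfig context, if PROD lives in its own cluster).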
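For the shared DEV/STAGING cluster, the namespace isolation mentioned above can be enforced with a standard NetworkPolicy, honored by a CNI such as Calico. A hedged sketch (namespace names are assumptions) of a policy that lets pods in `staging` talk only to each other:

```yaml
# Select every pod in the "staging" namespace and allow ingress
# only from pods in that same namespace, dropping traffic
# coming from "dev" or anywhere else in the cluster.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: staging
spec:
  podSelector: {}        # empty selector = all pods in this namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # only pods from the same namespace
```

A mirror-image policy in the `dev` namespace completes the two-way isolation.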
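The question's concern about staging load degrading production applies mainly to the single-cluster option. One standard mitigation, not mentioned in the post above and so offered only as an assumption, is a ResourceQuota that caps what a noisy namespace can consume:

```yaml
# Hypothetical quota: workloads in "staging" can never request or
# burst beyond these totals, so a load test there cannot starve
# pods running in other namespaces.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```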