
Kubernetes (K8s) & Docker services

See this video

NOTE:

  • Containers run inside pods and are not publicly routable
  • Rancher provides a good dashboard
  • Rancher is useful for operations
  • Helm is to k8s what npm (CLI) is to Node (a package manager)
  • A bundle of YAML files for deployment is called a Helm chart

HTTP Load balancer

The load balancer attempts to evenly distribute the workload among multiple instances (either stand-alone or clustered), thereby increasing the overall throughput of the system.
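
For example, here is a minimal sketch of running several instances behind such a load balancer, using a Deployment with multiple replicas (the image and the ecr-secret are the ones created later in these notes):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ellie-deployment
  labels:
    app: ellie
spec:
  replicas: 3                      # several instances for the load balancer to spread traffic across
  selector:
    matchLabels:
      app: ellie
  template:
    metadata:
      labels:
        app: ellie
    spec:
      containers:
      - name: ellie-container
        image: <your-subdomain>.dkr.ecr.eu-west-1.amazonaws.com/your/services-demo:ellie-service-1.0.0
        ports:
        - containerPort: 3000
          name: http-port
      imagePullSecrets:
      - name: ecr-secret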

Add config to your kubectl

Export kubeconfig env variable, get the file from cloud provider:

export KUBECONFIG=~/.kube/my-demo-cluster-kubeconfig.yaml
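
To check that kubectl is pointed at the right cluster:

kubectl config current-context
kubectl cluster-info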

Get pods:

kubectl get pods

Docker app

Create demo repo:

aws ecr create-repository --repository-name your/services-demo --image-scanning-configuration scanOnPush=true --region eu-west-1

Create a registry secret named ecr-secret so the cluster can pull images from ECR:

kubectl create secret docker-registry ecr-secret \
  --docker-server=<subdomain>.dkr.ecr.eu-west-1.amazonaws.com \
  --docker-username=AWS \
  --docker-password=$(aws ecr get-login-password --region eu-west-1)

Get secrets

kubectl get secrets

Describe secret

kubectl describe secret ecr-secret

Decode secret

kubectl get secret ecr-secret -o jsonpath='{.data}'
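
The jsonpath output above is still base64-encoded. As a sketch, the registry config itself can be decoded like this (.dockerconfigjson is the key kubectl uses for docker-registry secrets):

kubectl get secret ecr-secret -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode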

Edit secret

kubectl edit secrets NAME

Delete secret

kubectl delete secret ecr-secret

Add pod file

apiVersion: v1
kind: Pod
metadata:
  name: ellie-pod
  labels:
    app: ellie
spec:
  containers:
  - name: ellie-container
    image: <your-subdomain>.dkr.ecr.eu-west-1.amazonaws.com/your/services-demo:ellie-service-1.0.0
    ports:
    - containerPort: 3000
      name: http-port
  imagePullSecrets:
  - name: ecr-secret

Build the image

docker build . -t <your-subdomain>.dkr.ecr.eu-west-1.amazonaws.com/your/services-demo:ellie-service-1.0.0

Push the image

docker push <your-subdomain>.dkr.ecr.eu-west-1.amazonaws.com/your/services-demo:ellie-service-1.0.0

List images

aws ecr list-images --repository-name your/services-demo

Apply pod

kubectl apply -f Kubernetes/ellie-service/ellie-service-pod.yaml

Describe pod

kubectl describe pod PODNAME

Delete pod

kubectl delete pod PODNAME -n NAMESPACE

Apply load balancer

apiVersion: v1
kind: Service
metadata:
  name: ellie
  annotations:
    service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
  labels:
    app: ellie
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http-port
  selector:
    app: ellie
  sessionAffinity: None

Get service

kubectl get service

Namespaces

It is a good idea to keep applications separated in namespaces. For example, there could be the following namespaces:

  • servicename-development
  • servicename-staging
  • servicename-production

Each namespace should have its own resources. Stateful resources can be created per namespace so that the production environment doesn't get polluted.
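
A minimal sketch of setting this up (servicename is a placeholder):

kubectl create namespace servicename-development
kubectl create namespace servicename-staging
kubectl create namespace servicename-production

# deploy a manifest into a specific namespace
kubectl apply -f Kubernetes/ellie-service/ellie-service-pod.yaml -n servicename-development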

Exchange of data between services

When one service requires data from another, it should always communicate via the designated API. Accessing the database (or any stateful resource) is a no-go!

For example, the enozreppam service shouldn't use the SPV service's database directly; it should ask for the information through the SPV's API.
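
As a sketch (the service and namespace names are hypothetical), a call from inside the cluster would go through the SPV Service's internal DNS name instead of its database:

# from a pod inside the cluster, ask the SPV API for the data
curl http://spv.spv-production.svc.cluster.local/api/v1/some-resource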

Development

Should Developers Push to Production in Kubernetes?

YES?

Benefits

  1. DevOps and CI/CD: Promotes regular and reliable code deployments.

  2. Shift Left: By owning the code lifecycle, developers emphasize reliability from the outset.

  3. Safety First: Deployment risks are reduced using methods such as feature flags and canary releases.

  4. Feedback: Immediate deployments lead to faster feedback and iterative adjustments.

  5. Concerns: It's important to be aware of the risks, such as potential outages, and ensure proper safeguards are in place.

  6. Maturity Matters: Organizations with well-established DevOps and testing are best equipped for direct deployments.

Let's go one step deeper

Developers should be empowered to deploy to production in Kubernetes due to:

  • Incremental Changes: Deployments should be based on small, tested merge requests. There's no need to wait for "enough" features for a release. If a feature is ready, deploy it!

  • Feature Flags: Even if a feature isn't fully complete, it can be hidden using feature flags, allowing for separation between deployment and feature unveiling.

  • Automation: Deployments should be automated, and with time enhanced with strategies like canary or blue/green deployments.

  • Organizational Commitment: Adopting this approach requires involvement from both management and developers; the DORA project shows its advantages.

But keep in mind that the decision often depends on an organization's specific needs and its maturity, not just the Kubernetes capabilities.

Note: The development culture at the company needs to change. If this doesn't start now, achieving maturity and confidence in the CI/CD world may remain nothing but a "fugazi". We need to trust our developers and learn from any early mistakes.

Helm

Helm is to k8s what npm (CLI) is to Node: a package manager. It is widely used because it was among the first tools whose charts were publicly shared.

Helm charts

A Helm chart is a bundle of YAML files (configs). You can create your own Helm charts with Helm, push them to a Helm repository, and download and use existing ones.

Used for: database apps, monitoring apps, ...

You can create a private registry for storing helm charts and reuse them within the organization.

Use https://artifacthub.io/ to find them. The website is to k8s what the npm website is to Node: a collection of packages (Helm charts).
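
For example, to use a chart found on Artifact Hub you add its repository, then search and install it (the Bitnami repo is just an illustration):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo bitnami/postgresql
helm install my-postgres bitnami/postgresql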

Helm chart structure

mychart/
  Chart.yaml
  values.yaml
  charts/
  templates/
  ...

  • Top-level mychart folder - the name of the chart
  • Chart.yaml - meta info about the chart
  • values.yaml - values for the template files; default values can be overridden
  • charts/ - chart dependency folder
  • templates/ - the actual template files
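
A minimal Chart.yaml sketch for the structure above:

apiVersion: v2          # chart API version used by Helm 3
name: mychart
description: A demo Helm chart
version: 0.1.0          # version of the chart itself
appVersion: "1.0.0"     # version of the application being deployed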

To deploy the files to k8s use:

helm install RELEASE_NAME CHART_NAME

To override the default values in (values.yaml) use:

helm install --values=my-values.yaml RELEASE_NAME CHART_NAME

Or:

helm install --set version=2.0.0 RELEASE_NAME CHART_NAME

The template files will be filled with the values from values.yaml, producing valid k8s manifests.
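
You can render the templates locally, without installing anything, to inspect the generated manifests:

helm template RELEASE_NAME CHART_NAME --values=my-values.yaml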

NOTE: You can also add other files like Readme and LICENSE, ...

Template engine

Used for: defining a common blueprint when several configs are quite similar.

Placeholders in the templates are replaced by dynamic values.

Example values.yaml (default values):

name: my-app
container:
  name: my-app-cnt
  image: my-app-image
  port: 8080

Example my-values.yaml:

container:
  name: my-app-cnt
  image: my-app-image
  port: 8081

Example template.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.name }}
spec:
  containers:
    - name: {{ .Values.container.name }}
      image: {{ .Values.container.image }}
      ports:
        - containerPort: {{ .Values.container.port }}

NOTE:

  • You can also use the --set flag to pass individual parameters.
  • Useful for CI/CD because in your build you can replace values on the fly before deployment.
  • Inside the templates the values are available on the .Values object.
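
A sketch of how a CI/CD job might do that (the chart path, namespace, and the CI_COMMIT_SHORT_SHA variable are assumptions, e.g. a GitLab CI build variable):

# deploy the image that was just built, tagged with the commit SHA
helm upgrade --install my-app ./mychart \
  --namespace servicename-staging \
  --set container.image=my-app-image:${CI_COMMIT_SHORT_SHA}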

Other uses of helm

Same apps across different environments

development - staging - production: package your app as a chart, and you can then deploy it to each environment with a single command.
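
For example (the values files are hypothetical), the same chart deployed to each environment with its own values file:

helm install my-app ./mychart -f values-development.yaml -n servicename-development
helm install my-app ./mychart -f values-staging.yaml -n servicename-staging
helm install my-app ./mychart -f values-production.yaml -n servicename-production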

Release management

Helm version 2 vs 3:

  • Version 2 has Tiller (a server-side component in the cluster that manages releases)
  • Version 3 removed it because of the security risks!
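
Helm tracks every install as a release, so you can inspect and roll back deployments (release and namespace names are placeholders):

helm list -n servicename-production               # releases in a namespace
helm history my-app -n servicename-production     # revision history of a release
helm rollback my-app 1 -n servicename-production  # roll back to revision 1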

DNS and TLS

You can define a DNS record pointing at the service (e.g. http://ellie.domain.com/), but doing this per service, each with its own load balancer, is inefficient.

There is a way to use a Kubernetes Ingress with Let's Encrypt for certificates and keys.

One option to do that in Kubernetes is ingress-nginx.

Use helm to install ingress-nginx:

helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
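
If the chart repository isn't added yet, run helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx first. Below is a minimal Ingress sketch for ellie behind ingress-nginx with a Let's Encrypt certificate; it assumes cert-manager is installed and a ClusterIssuer named letsencrypt-prod exists:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ellie-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # assumes cert-manager with this ClusterIssuer
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - ellie.domain.com
    secretName: ellie-tls          # cert-manager stores the issued certificate here
  rules:
  - host: ellie.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ellie
            port:
              number: 80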