Kubernetes Engine

.https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/

.https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-usage-metering#create_the_cost_breakdown_table

.https://github.com/j143/solutions-gke-autoprovisioning

Creating roles with kubectl

kubectl create role pod-reader \
  --resource=pods --verb=watch --verb=get --verb=list
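
To make the role usable, bind it to a subject. A minimal sketch (the user name is illustrative):

kubectl create rolebinding pod-reader-binding \
  --role=pod-reader --user=bob@example.com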

Labs

| Name of the lab | Quest link |
| --- | --- |
| Deploy Kubernetes to cloud | |
| GKE best practices: security | |
| Google Cloud's operation suite on GKE | |

Custom scheduler

.https://banzaicloud.com/blog/k8s-custom-scheduler/
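
A pod opts into a custom scheduler via spec.schedulerName. A minimal sketch, assuming a custom scheduler is already deployed and registered as my-scheduler:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-custom-sched
spec:
  schedulerName: my-scheduler   # assumption: scheduler registered under this name
  containers:
  - name: nginx
    image: nginx
EOF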

Secrets

| Builtin Type | Usage |
| --- | --- |
| Opaque | arbitrary user-defined data |
| kubernetes.io/service-account-token | service account token |
| kubernetes.io/dockercfg | serialized ~/.dockercfg file |
| kubernetes.io/dockerconfigjson | serialized ~/.docker/config.json file |
| kubernetes.io/basic-auth | credentials for basic authentication |
| kubernetes.io/ssh-auth | credentials for SSH authentication |
| kubernetes.io/tls | data for a TLS client or server |
| bootstrap.kubernetes.io/token | bootstrap token data |
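
Creating an Opaque secret from literals (name and values illustrative):

kubectl create secret generic db-creds \
  --from-literal=username=admin \
  --from-literal=password='S3cr3t'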

Container Hub

.https://cloud.google.com/sdk/gcloud/reference/container/hub/memberships
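
Typical membership commands (membership name illustrative):

gcloud container hub memberships list
gcloud container hub memberships describe my-cluster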

kubectl config view

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://34.69.214.81
  name: gke_qwiklabs-gcp-03-da985698d629_us-central1-b_central
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://35.224.195.141
  name: remote.k8s.local
contexts:
- context:
    cluster: gke_qwiklabs-gcp-03-da985698d629_us-central1-b_central
    user: gke_qwiklabs-gcp-03-da985698d629_us-central1-b_central
  name: central
- context:
    cluster: remote.k8s.local
    user: remote.k8s.local
  name: remote
current-context: remote
kind: Config
preferences: {}
users:
- name: gke_qwiklabs-gcp-03-da985698d629_us-central1-b_central
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
- name: remote.k8s.local
  user: {}
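
Switch between the contexts shown above with:

kubectl config use-context central
kubectl config use-context remote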

Deployment docs

.https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

kubectl edit
kubectl rollout
kubectl create
kubectl get
kubectl describe deployments
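
A typical lifecycle using these commands (deployment name and image illustrative):

kubectl create deployment web --image=nginx
kubectl get deployments
kubectl edit deployment web            # tweak the spec in an editor
kubectl rollout status deployment/web
kubectl rollout undo deployment/web    # roll back the last change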

Create Cluster

CLUSTER_VERSION=$(gcloud container get-server-config --region us-west1 --format='value(validMasterVersions[0])')

export CLOUDSDK_CONTAINER_USE_V1_API_CLIENT=false
gcloud container clusters create repd \
  --cluster-version=${CLUSTER_VERSION} \
  --machine-type=n1-standard-4 \
  --region=us-west1 \
  --num-nodes=1 \
  --node-locations=us-west1-a,us-west1-b,us-west1-c

To create a basic zonal cluster:

gcloud container clusters create b-cluster --zone us-central1-a --node-locations us-central1-a --machine-type n1-standard-2 --num-nodes 2

For node pools https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create#--machine-type

gcloud container node-pools create optimized-pool --cluster=b-cluster --num-nodes=2  --machine-type=custom-2-3584

To drain the nodes in a node pool:

kubectl get nodes
# https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
kubectl drain <node name> --ignore-daemonsets
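
To drain an entire pool, iterate over its nodes via the GKE node-pool label; a sketch assuming the pool being emptied is named default-pool:

for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=default-pool -o jsonpath='{.items[*].metadata.name}'); do
  kubectl drain "$node" --ignore-daemonsets
done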

.https://cloud.google.com/kubernetes-engine/docs/add-on/config-sync/how-to/namespace-scoped-objects

Draining node-pool https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/

StorageClass creation

kubectl apply -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: repd-west1-a-b-c
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: regional-pd
  zones: us-west1-a, us-west1-b, us-west1-c
EOF

Persistent Volume Claim

kubectl apply -f - <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-wp-repd-mariadb-0
  namespace: default
  labels:
    app: mariadb
    component: master
    release: wp-repd
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 8Gi
  storageClassName: standard
EOF
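
Verify the claim binds:

kubectl get pvc data-wp-repd-mariadb-0
kubectl describe pvc data-wp-repd-mariadb-0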

Cluster credentials

.https://cloud.google.com/sdk/gcloud/reference/container/clusters/get-credentials
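
For the regional cluster created earlier:

gcloud container clusters get-credentials repd --region us-west1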

K8s docs

.Understanding and Combining GKE Autoscaling Strategies - https://www.qwiklabs.com/focuses/15636?parent=catalog
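
The quickest of those strategies to try is horizontal pod autoscaling; a minimal sketch (deployment name and thresholds illustrative):

kubectl autoscale deployment web --cpu-percent=50 --min=1 --max=10
kubectl get hpa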

Kubernetes Security

Grafeas - API spec for managing metadata about software resources such as container images, virtual machines, JAR files, and scripts.

Kritis - API for deploy-time enforcement: a deployment is blocked unless the artifact conforms to a central policy.

.https://google.qwiklabs.com/focuses/5154?parent=catalog Binary Authorization

.https://cloud.google.com/solutions/binary-auth-with-cloud-build-and-gke

.https://cloud.google.com/binary-authorization/docs/configuring-policy-cli

.https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/

# Turn on admission controller
kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger ...

Persistent volume claim resize

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-vol-default
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.10.100:8080"
  restuser: ""
  secretNamespace: ""
  secretName: ""
allowVolumeExpansion: true
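
With allowVolumeExpansion set, a bound claim can be resized by raising its request (claim name illustrative):

kubectl patch pvc my-claim -p '{"spec":{"resources":{"requests":{"storage":"16Gi"}}}}'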

Metadata endpoint protections

--metadata=disable-legacy-endpoints=true

gcloud container clusters create simplecluster --zone $MY_ZONE --num-nodes 2 --metadata=disable-legacy-endpoints=false
gcloud beta container node-pools create second-pool --cluster=simplecluster --zone=$MY_ZONE --num-nodes=1 --metadata=disable-legacy-endpoints=true --workload-metadata-from-node=SECURE
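
The v1 metadata endpoint requires the Metadata-Flavor header, while the legacy endpoints do not; that header requirement is what disable-legacy-endpoints=true enforces. To check from a pod or node:

curl -s "http://metadata.google.internal/computeMetadata/v1/instance/name" -H "Metadata-Flavor: Google"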

Federate multiple GKE clusters with Anthos Service Mesh

.https://github.com/GoogleCloudPlatform/professional-services/tree/main/examples/anthos-service-mesh-multicluster

Shared VPC

Anthos Service Mesh 1.8 can be used with a single shared VPC, even across multiple projects.

SSL/TLS termination

TLS termination for external requests is supported with Anthos Service Mesh 1.0. Doing so requires modifying the Anthos Service Mesh setup files.

Anthos Service Mesh can be set up with https://cloud.google.com/service-mesh/docs/scripted-install/gke-asm-onboard-1-7#install_asm. A custom istio-operator.yaml file can be used by running install_asm with the --custom_overlay option.

To make Istio (i.e., Anthos Service Mesh) block access to external services by default, change the outbound traffic policy to REGISTRY_ONLY; external services must then be allowed explicitly with ServiceEntry resources. https://istio.io/latest/docs/tasks/traffic-management/egress/egress-control/#change-to-the-blocking-by-default-policy
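
A ServiceEntry that re-allows one external host under REGISTRY_ONLY; a sketch, with the host as a placeholder:

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-db
spec:
  hosts:
  - sql.example.com          # assumption: your external service's host
  ports:
  - number: 5432
    name: tcp-postgres
    protocol: TCP
  resolution: DNS
  location: MESH_EXTERNAL
EOF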

Security

Anthos Service Mesh has inherent security features (and limitations).

ASM inherently implements Istio security best practices, such as namespace isolation and least-privilege service accounts. Workload Identity is an optional GKE-specific mechanism that binds a Kubernetes service account (scoped to a namespace) to a Google service account.

The Istio ingress gateway needs to be secured manually.

Container workload security

GKE cluster network policies let you define which workloads can reach which pods and namespaces; they are built on the Kubernetes NetworkPolicy API.
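
A minimal policy that only lets frontend pods reach backend pods (labels illustrative; assumes network policy enforcement is enabled on the cluster):

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend           # assumption: pods labeled app=backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend      # assumption: pods labeled app=frontend
EOF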

Securing container workloads in GKE involves a layered approach: node security, pod/container security contexts, and pod security policies.

Container runtime (Containerd)

Use the cos_containerd node image for GKE clusters running Anthos Service Mesh.
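
Selecting that runtime at cluster creation (cluster name illustrative):

gcloud container clusters create asm-cluster \
  --image-type=cos_containerd --zone us-central1-a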

External databases with Google Cloud SQL for PostgreSQL

Cloud SQL is external to GKE, so GKE must handle SSL termination for external services. With Anthos Service Mesh, you can use an Istio ingress gateway, which allows SSL passthrough so that the server certificates can reside in a container.

PostgreSQL uses application-level protocol negotiation for SSL connections. The Istio proxy currently uses TCP-level protocol negotiation. This causes the Istio proxy sidecar to error out during the SSL handshake, when it tries to auto-encrypt the connection with PostgreSQL.
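
One possible workaround (a sketch under assumptions, not the fix prescribed in these notes) is to stop the sidecar from negotiating TLS for that host, so PostgreSQL's application-level negotiation passes through untouched:

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: postgres-plain-tcp
spec:
  host: sql.example.com      # assumption: external PostgreSQL host
  trafficPolicy:
    tls:
      mode: DISABLE          # sidecar passes the connection through untouched
EOF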

Towards federated clusters

Anthos Service Mesh 1.8 can federate multiple GKE clusters. Treated as managed Istio in a single VPC, this container orchestration model takes GKE to its full potential and can be configured with tools like Terraform and shell scripts.

k8s

DaemonSet

.https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#writing-a-daemonset-spec

.https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/

.https://kubernetes.io/docs/concepts/overview/working-with-objects/object-management/
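
A minimal DaemonSet spec following the docs above (name and image illustrative); it runs one agent pod on every node:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      name: node-agent
  template:
    metadata:
      labels:
        name: node-agent
    spec:
      containers:
      - name: agent
        image: fluentd:latest   # assumption: stand-in for a real per-node agent
EOF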

Updating the host file system

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostpath
spec:
  containers:
  - name: hostpath
    image: google/cloud-sdk:latest
    command: ["/bin/bash"]
    args: ["-c", "tail -f /dev/null"]   # keep the container alive
    volumeMounts:
    - mountPath: /rootfs                # node's root filesystem appears here
      name: rootfs
  volumes:
  - name: rootfs
    hostPath:
      path: /                           # mount the node's root filesystem
EOF
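
Once running, the node's filesystem can be entered from the pod (a sketch; assumes the mount above):

kubectl exec -it hostpath -- chroot /rootfs /bin/bash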
