Introduction
Kubo was released during CloudFoundry Summit 2017. The name comes from combining Kubernetes and Bosh: with it, we can deploy Kubernetes on many different IaaSes using Bosh. It's the first step toward integrating Kubernetes into CloudFoundry.
In this post, we are going to deploy a Kubernetes instance on vSphere using Bosh.
Prerequisites
We assume you already have a Bosh Director running, with one public network and one private network ready on vSphere. Your cloud-config should look like this:
cloud-config.yml
azs:
- name: z1
  cloud_properties:
    datacenters:
    - name: Datacenter
      persistent_datastore_pattern: DATASTORE_NAME
      datastore_pattern: DATASTORE_NAME
      clusters:
      - CLUSTER_NAME:
          resource_pool: RESOURCE_POOL_NAME

vm_types:
- name: small
  cloud_properties:
    cpu: 1
    ram: 1024
    disk: 10_000
- name: medium
  cloud_properties:
    cpu: 2
    ram: 2048
    disk: 20_000

disk_types:
- name: small
  disk_size: 10_000
- name: medium
  disk_size: 20_000

networks:
- name: public
  subnets:
  - cloud_properties:
      name: PUBLIC_NETWORK_NAME
    dns: [ PUBLIC_DNS ]
    gateway: PUBLIC_GATEWAY
    reserved: [ 1.2.3.4-1.2.3.5 ]
    az: z1
    range: 1.2.3.0/24
    static: [ 1.2.3.6-1.2.3.255 ]
- name: private
  type: manual
  subnets:
  - range: 1.2.0.0/16
    reserved: [ 1.2.0.1-1.2.0.10 ]
    az: z1
    gateway: PRIVATE_GATEWAY
    dns: [ PRIVATE_DNS ]
    cloud_properties:
      name: PRIVATE_NETWORK_NAME
    static: [ 1.2.0.15-1.2.0.255 ]

compilation:
  workers: 10
  reuse_compilation_vms: true
  az: z1
  vm_type: medium
  network: private
All capitalized placeholders and IP fields should be replaced with the correct values for your vSphere environment.
We use our Bosh Director as the private network's gateway by setting up iptables forwarding on the Director, following these instructions, as sketched below.
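As a rough sketch of that gateway setup (the outbound interface name and the private CIDR below are assumptions taken from the cloud-config above; adjust them to your environment):

# on the Bosh Director: enable IPv4 forwarding
sudo sysctl -w net.ipv4.ip_forward=1
# masquerade traffic from the private subnet going out through the public-facing interface (eth0 is an assumption)
sudo iptables -t nat -A POSTROUTING -s 1.2.0.0/16 -o eth0 -j MASQUERADE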
Deploy
We are going to use kubo-release from the CloudFoundry community. More deployment instructions can be found here.
1. Download releases
We need to download three releases: kubo, etcd, and docker, and then upload them to the Bosh Director.
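A minimal sketch of that step, assuming the bosh CLI v2 syntax (with the older CLI the command is bosh upload release) and using placeholder file names for whichever release tarballs you downloaded:

# upload the three releases to the Bosh Director
# (the .tgz file names are placeholders for the versions you downloaded)
bosh upload-release kubo-release.tgz
bosh upload-release etcd-release.tgz
bosh upload-release docker-release.tgz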
2. Generate certificates
Kubernetes requires certificates for communication between the API server and the kubelets, as well as between clients and the API server. The following script will do the job for us. Replace API_PRIVATE_IP and API_PUBLIC_IP with the private and public IPs of the Kubernetes API server.
key-generator.sh
#!/bin/sh

set -e -x

path=certs

# certificate authority
certstrap --depot-path ${path} init --passphrase '' --common-name cert-authority

# server
server_cn=kubo
certstrap --depot-path ${path} request-cert --passphrase '' --common-name ${server_cn} --ip API_PRIVATE_IP,API_PUBLIC_IP
certstrap --depot-path ${path} sign ${server_cn} --CA cert-authority

# client
client_cn=kubelet
certstrap --depot-path ${path} request-cert --passphrase '' --common-name ${client_cn}
certstrap --depot-path ${path} sign ${client_cn} --CA cert-authority
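The script assumes certstrap is on your PATH. After it runs, the generated files land in the certs/ directory; with certstrap's usual naming the listing should look roughly like this (exact file names may vary by certstrap version):

./key-generator.sh
ls certs/
# cert-authority.crt  cert-authority.key
# kubo.csr  kubo.crt  kubo.key
# kubelet.csr  kubelet.crt  kubelet.key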
3. Fill bosh deployment manifest
Replace the capitalized placeholder fields with the correct values, and paste the contents of the certificate files generated above into the corresponding fields.
kubernetes.yml
name: kubo
director_uuid: DIRECTOR_UUID

releases:
- name: etcd
  version: 108+dev.2
- name: kubo
  version: latest
- name: docker
  version: 28.0.1

stemcells:
- alias: trusty
  os: ubuntu-trusty
  version: latest

instance_groups:
- name: etcd
  instances: 1
  networks:
  - name: private
  azs: [z1]
  jobs:
  - name: etcd
    release: etcd
    properties:
      etcd:
        require_ssl: false
        peer_require_ssl: false
  stemcell: trusty
  vm_type: small
  persistent_disk_type: small

- name: master
  instances: 1
  networks:
  - name: private
    static_ips: [API_PRIVATE_IP]
  - name: public
    default: [dns, gateway]
    static_ips: [API_PUBLIC_IP]
  azs: [z1]
  jobs:
  - name: kubernetes-api
    release: kubo
    properties:
      admin-username: ADMIN_USERNAME
      admin-password: ADMIN_PASSWORD
      kubelet-password: KUBELET_PASSWORD
      tls:
        kubernetes: &tls-kubernetes
          ca: |
            GENERATED_CA_CERT_CONTENT
          certificate: |
            GENERATED_KUBO_CERT_CONTENT
          private_key: |
            GENERATED_KUBO_KEY_CONTENT
  - name: kubeconfig
    release: kubo
    properties:
      kubernetes-api-url: &kubo_url "https://API_PRIVATE_IP:8443"
      kubelet-password: KUBELET_PASSWORD
      tls:
        kubernetes: *tls-kubernetes
  - name: kubernetes-controller-manager
    release: kubo
  - name: kubernetes-scheduler
    release: kubo
  - name: kubernetes-system-specs
    release: kubo
    properties:
      kubernetes-api-url: *kubo_url
  stemcell: trusty
  vm_type: medium

- name: worker
  instances: 2
  networks:
  - name: private
  azs: [z1]
  jobs:
  - name: flanneld
    release: kubo
  - name: docker
    release: docker
    properties:
      docker:
        flannel: true
        iptables: false
        ip_masq: false
        log_level: error
        storage_driver: overlay
      env: {}
  - name: kubeconfig
    release: kubo
    properties:
      kubernetes-api-url: *kubo_url
      kubelet-password: KUBELET_PASSWORD
      tls:
        kubernetes: *tls-kubernetes
  - name: kubelet
    release: kubo
    properties:
      kubernetes-api-url: *kubo_url
      tls:
        kubelet:
          ca: |
            GENERATED_CA_CERT_CONTENT
          certificate: |
            GENERATED_KUBELET_CERT_CONTENT
          private_key: |
            GENERATED_KUBELET_KEY_CONTENT
  - name: kubernetes-proxy
    release: kubo
    properties:
      kubernetes-api-url: *kubo_url
  stemcell: trusty
  vm_type: medium
  persistent_disk_type: medium

update:
  canaries: 1
  max_in_flight: 1
  serial: true
  canary_watch_time: 1000-30000
  update_watch_time: 1000-30000
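With the manifest filled in, the deployment itself is the usual Bosh workflow. A sketch, assuming you still need an ubuntu-trusty vSphere stemcell on the Director and that you are using the bosh CLI v1 (matching the director_uuid field above); with the v2 CLI the last two commands would be bosh -d kubo deploy kubernetes.yml:

# upload an ubuntu-trusty stemcell for vSphere if the Director does not have one yet
bosh upload stemcell PATH_TO_UBUNTU_TRUSTY_VSPHERE_STEMCELL
# point the CLI at the manifest and deploy
bosh deployment kubernetes.yml
bosh deploy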
In order to access the deployed Kubernetes instance, we need to create a kubectl config file:
~/.kube/config
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: PATH_TO_CA_CERT
    server: https://API_PUBLIC_IP:8443
  name: kubo
users:
- name: admin
  user:
    token: ADMIN_PASSWORD
contexts:
- context:
    cluster: kubo
    user: admin
  name: kubo-context
current-context: kubo-context
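kubectl reads ~/.kube/config by default, so once the file is in place you can confirm it is picked up (nothing vSphere-specific here):

kubectl config view              # should show the kubo cluster and admin user
kubectl config current-context   # should print kubo-context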
After your Bosh deployment is done, you should be able to run kubectl cluster-info and see this:
Kubernetes master is running at https://API_PUBLIC_IP:8443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Test
We can test our Kubernetes cluster by creating a simple Redis deployment using the following deployment file:
redis.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Running kubectl create --filename redis.yml will deploy Redis. If we then run kubectl describe pods redis-master, we should not see any errors.
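To go one step further, you can check that the pod actually serves Redis. A quick sketch (the generated pod name will differ on your cluster; take it from kubectl get pods, and redis-cli must be installed locally):

# the Deployment should report one ready replica
kubectl get deployment redis-master
# forward the Redis port from the pod and ping it locally
kubectl port-forward redis-master-XXXX 6379:6379 &
redis-cli -h 127.0.0.1 -p 6379 ping   # should reply PONG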
If you have any questions, leave a comment here or email xuebin.he@emc.com. Thank you!