
Xuebin He


Concourse Tutorial

Why we are here

If you are reading this blog, you probably already know the benefits of Continuous Integration and Continuous Delivery, and Concourse is one of the great options out there. You may not know where and how to start this new DevOps journey. This blog will try to address those concerns in the following chapters.

Chapter 1

What is Concourse

Concourse was started as a side project inside Pivotal. It quickly became popular in the Cloud Foundry community, within Pivotal, and now in the wider DevOps world, because it offers benefits like a YAML-defined pipeline, containerized build environments, and a well-defined API for supporting different resources.

Why Concourse

Let’s talk about some of these benefits a little.
The pipeline in Concourse is defined by a single YAML file, so it can be placed into the project repository and tracked by the VCS. That way the pipeline always stays aligned with the code and tests. It’s also a good opportunity to get Dev and Ops sitting together.
Having containerized environments for different tests ensures there is no pollution between tests and testing environments. Users have control over which Docker images are used in each test, and those Dockerfiles can also be tracked. If things go wrong during the tests, the user can hijack the failing container and debug it there.
Concourse supports a reasonably large number of plugins for different abilities, like building and pushing Docker images, or sending Slack messages to the team if a build fails... You get the idea.

How to deploy

You can always deploy Concourse using BOSH if you are familiar with it. If not, don’t worry: Concourse also provides a deployment using docker-compose.
Once it’s deployed, we can download Concourse’s CLI, named fly, from its home page and put it on our PATH.
First we need to login using:
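A minimal sketch of the login command, assuming the Concourse web UI is reachable at http://192.168.100.4:8080 and basic auth is enabled (the URL and credentials are placeholders):

    fly --target cool login \
      --concourse-url http://192.168.100.4:8080 \
      --username admin \
      --password very-secret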
This will create a yml file at ${HOME}/.flyrc containing the following lines:
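Roughly like the following; the exact fields vary by Concourse version, and the values here are placeholders:

    targets:
      cool:
        api: http://192.168.100.4:8080
        team: main
        token:
          type: Bearer
          value: some-long-token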
In the above file, cool is an alias for this Concourse instance. Using this, we can use fly without typing IP and password every time. For example, if Concourse is upgraded and we want to upgrade the local fly, we can use the following command:
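fly can download a matching version of itself from the target:

    fly --target cool sync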

Chapter 2

Baby Crawl

A pipeline normally consists of many different tasks. Concourse is able to run a task in a container and react to its result; tasks are the building blocks that give the pipeline its color. A task is defined in a YAML file. This file only defines what is needed to run the task; it doesn’t specify how the different dependencies get into the running container.
For instance, let’s say we have a project called cool-project, and this project needs to be run. This is done through a script, in this case a-cool-task, which is placed at cool-project/ci/tasks/a-cool-task.sh. This cool task is saying: “I need to run in a container from an Ubuntu image, and I expect a folder called cool-project in my working directory with the file cool-project/ci/tasks/a-cool-task.sh inside of it. Lastly, A_VERY_COOL_ENV is required for me to start.”
a-cool-task.yml
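A sketch of what that task definition might look like, based on the description above (the base image choice is an assumption):

    platform: linux

    image_resource:
      type: docker-image
      source: {repository: ubuntu}

    inputs:
      - name: cool-project

    params:
      A_VERY_COOL_ENV:   # value is provided when the task is run

    run:
      path: cool-project/ci/tasks/a-cool-task.sh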
Say we have this cool-project at ${HOME}/workspace/cool-project, and our task definition file is colocated with the task script. We can run this task using the fly command like this:
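Something like this, assuming we run it from ${HOME}/workspace (the env value is made up):

    cd ${HOME}/workspace
    A_VERY_COOL_ENV=awesome fly --target cool execute \
      --config cool-project/ci/tasks/a-cool-task.yml \
      --input cool-project=cool-project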
The execute command asks Concourse to run this task as a one-off, not as part of any pipeline. Concourse will create a new container from ubuntu and populate A_VERY_COOL_ENV into it. It will also copy the whole cool-project folder into the working directory, denoted by ${WORK_DIR} for now. Then it will run ${WORK_DIR}/cool-project/ci/tasks/a-cool-task.sh to start the task script.
Normally, execute will help us a lot during the development stage of the pipeline itself, because it provides a much faster feedback loop.

Baby steps

Now, let’s build a dummy pipeline and put it into cool-project/ci/pipeline.yml. We can keep all pipeline-related files in the ci folder.
pipeline.yml
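A minimal sketch matching the description below; the branch and variable names are assumptions:

    resources:
      - name: cool-project
        type: git
        source:
          uri: {{cool-project-git-uri}}
          branch: master
          private_key: {{cool-project-git-key}}

    jobs:
      - name: a-cool-job
        plan:
          - get: cool-project
            trigger: true
          - task: a-cool-task
            file: cool-project/ci/tasks/a-cool-task.yml
            params:
              A_VERY_COOL_ENV: {{a-very-cool-env}}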
A pipeline normally has two parts: resources and jobs. Resources are the definitions of dependencies, like a git repository such as cool-project, a Docker image, etc. Concourse downloads those resources and puts them into the containers where the jobs run.
Jobs are the core part of a pipeline; they define the whole workflow. For example, in the above pipeline we only have one job, a-cool-job, and it has two steps that run in the order written in the job definition. The first step is a get that downloads the cool-project repo and is triggered by every new commit. The second step runs the task that we defined earlier (a-cool-task).
The last part is the variables wrapped in double braces {{}}. Those variables are evaluated when we set up the pipeline. Their values can be stored in a separate YAML file that does not go into the repository, because it normally contains secrets, like the following one:
secrets.yml 
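For example (values are obviously placeholders):

    cool-project-git-uri: git@github.com:some-org/cool-project.git
    cool-project-git-key: |
      -----BEGIN RSA PRIVATE KEY-----
      ...
      -----END RSA PRIVATE KEY-----
    a-very-cool-env: awesome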
We can set up the pipeline by:
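Assuming we name the pipeline a-cool-pipeline:

    fly --target cool set-pipeline \
      --pipeline a-cool-pipeline \
      --config cool-project/ci/pipeline.yml \
      --load-vars-from secrets.yml
    fly --target cool unpause-pipeline --pipeline a-cool-pipeline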
Now if we open our pipeline in a browser, it should look something like this:
(screenshot: a-cool-pipeline)
Black boxes stand for resources, while colored boxes stand for jobs.
If we click this job box, we should see the latest build of it.
(screenshot: a-cool-job)

Chapter 3

Making trouble

With the pipeline, we can run into two different kinds of trouble: a commit fails the pipeline, or the pipeline itself fails. Let’s talk about the first one first.
Hooks
When a commit fails, our pipeline will turn the job box red and will not go any further. At this point, we probably want to do some damage control or “blame” the troublemaker (for example, send him/her a Slack message).
Concourse Hooks will enable us to trigger another script when the task itself has failed or succeeded.
For example, with the following job definition, if any step inside the plan fails, Concourse will run the task under the on_failure hook, which in this case will alert “this is not that cool!”. We can also do something when the job succeeds with on_success, or when the job is aborted with on_abort.
job.yml
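A sketch of such a job; the inline alert task here just echoes a message, and a real setup would more likely use something like a Slack notification resource:

    jobs:
      - name: a-cool-job
        plan:
          - get: cool-project
            trigger: true
          - task: a-cool-task
            file: cool-project/ci/tasks/a-cool-task.yml
        on_failure:
          task: alert
          config:
            platform: linux
            image_resource:
              type: docker-image
              source: {repository: ubuntu}
            run:
              path: echo
              args: ["this is not that cool!"]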

 

(screenshot: a-cool-failing-job)
Hijack
With hooks, we are able to do some damage control, but we still eventually need to know why it failed. We can hijack into the container that ran this particular task by:
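For example, for the job above:

    fly --target cool hijack --job a-cool-pipeline/a-cool-job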
Now we just need to select the number of the step where the job failed from the output.
Because Concourse has its own garbage collection system, it will remove inactive containers after a certain amount of time. We can set the interval at which GC does its cleanup by adding the --gc-interval option to the start command of the Concourse web instance.
One-off
Like what we did in Baby Crawl, we can debug a task separately with the same inputs it gets from the pipeline. The reason for doing this is the much faster feedback while we are trying to fix it. For the same reason, we also use one-offs a lot while developing a new task.
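fly execute can fetch the inputs from the latest build of a pipeline job instead of uploading a local folder, which is handy here:

    A_VERY_COOL_ENV=awesome fly --target cool execute \
      --config cool-project/ci/tasks/a-cool-task.yml \
      --inputs-from a-cool-pipeline/a-cool-job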

Chapter 4

Playdate

Concourse cannot do everything. As users, we still need to write some code to solve certain problems ourselves. For example, we are not able to pass artifacts built in one job directly to the next job.
However, Concourse provides a well-defined pattern for developers to create plugins that allow Concourse to do things it would not normally be able to do. For example, there is an s3 plugin that makes it easy to upload artifacts to and download them from Amazon S3, which lets us pass artifacts between jobs.
Sometimes, our test involves multiple stages, like: clean existing servers, deploy new servers, then run the tests. Once a commit goes into these stages, we want to make sure that no other commits enter them until the first commit is finished, since we only have limited resources to run servers. To prevent other commits from entering these stages, we can use the pool plugin.
pipeline.yml
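A sketch of such a pipeline using the pool resource; the lock repository, pool name, and task files are assumptions:

    resources:
      - name: cool-project
        type: git
        source:
          uri: {{cool-project-git-uri}}
          branch: master
      - name: locks
        type: pool
        source:
          uri: {{locks-git-uri}}
          branch: master
          pool: cool-locks
          private_key: {{locks-git-key}}

    jobs:
      - name: deploy-servers
        plan:
          - get: cool-project
            trigger: true
          - put: locks
            params: {claim: a-cool-lock}
          - task: clean-and-deploy
            file: cool-project/ci/tasks/clean-and-deploy.yml
            on_failure:
              put: locks
              params: {release: locks}
      - name: acceptance-test
        plan:
          - get: cool-project
            passed: [deploy-servers]
            trigger: true
          - get: locks
            passed: [deploy-servers]
          - task: acceptance-test
            file: cool-project/ci/tasks/acceptance-test.yml
        ensure:
          put: locks
          params: {release: locks}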
The above pipeline has two jobs that complete the acceptance test. The first job tries to acquire a-cool-lock from the locks resource and will wait until it has been released by the previous holder. If the job itself fails, it releases the lock. The lock is also released at the end of the acceptance test, whether it fails or succeeds. This ensures the lock is available whenever no job is currently using it.

Chapter 5

Tricks

Dockerfile
Every task runs in a fresh container. Most of the time, we have to install some dependencies for our tests or other tasks to be able to run, and we shouldn’t let the pipeline spend too much time on something not directly related to the job at hand.
Starting from an image that already has those dependencies preinstalled makes feedback from the pipeline much faster. Because the Dockerfiles for those images tend to be very project-specific, placing them in the ci/docker/ folder helps the team maintain them and keeps them version tracked.
YML
Similar jobs may share a lot of the same steps or variables, and it can be painful to keep those steps and variables consistent throughout the entire pipeline definition as some of them change, because changing one step or variable would then require changing it again everywhere else it appears. YAML anchors help with this.
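A sketch of the pattern, with hypothetical Kubernetes-related params:

    jobs:
      - name: deploy-to-kube
        plan:
          - get: cool-project
            trigger: true
          - task: deploy
            file: cool-project/ci/tasks/deploy.yml
            params: &kube-secrets
              KUBE_API_SERVER: {{kube-api-server}}
              KUBE_USER: {{kube-user}}
              KUBE_PASSWORD: {{kube-password}}
      - name: test-on-kube
        plan:
          - get: cool-project
            passed: [deploy-to-kube]
            trigger: true
          - task: test
            file: cool-project/ci/tasks/test.yml
            params:
              <<: *kube-secrets
              EXTRA_PARAM: not-a-secret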
For example, both of the jobs above use Kubernetes and require secrets for the Kubernetes instance. &kube-secrets creates a reference to those params, and <<: *kube-secrets in the second job is replaced by the params from the first job when the pipeline is set up with fly set-pipeline.

Deploy Kubernetes on vSphere using BOSH – Kubo

Introduction


During Cloud Foundry Summit 2017, Kubo was released. The name comes from the combination of Kubernetes and BOSH. Now we can deploy Kubernetes on many different IaaSes using BOSH. It’s the first step toward integrating Kubernetes into Cloud Foundry.

In this post, we are going to deploy a Kubernetes instance on vSphere using BOSH.

Prerequisite


We assume you already have a BOSH Director running, with one public network and one private network ready on vSphere. Your cloud-config would look like this:

cloud-config.yml
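A sketch of a vSphere cloud-config of the kind the post describes; all capitalized names and the IP ranges are placeholders:

    azs:
      - name: z1
        cloud_properties:
          datacenters:
            - name: YOUR_DATACENTER
              clusters:
                - YOUR_CLUSTER: {}

    vm_types:
      - name: common
        cloud_properties:
          cpu: 2
          ram: 4096
          disk: 20000

    disk_types:
      - name: default
        disk_size: 10240

    networks:
      - name: private
        type: manual
        subnets:
          - range: 10.0.0.0/24
            gateway: 10.0.0.1
            dns: [8.8.8.8]
            az: z1
            cloud_properties:
              name: YOUR_PRIVATE_VSPHERE_NETWORK
      - name: public
        type: manual
        subnets:
          - range: 12.34.56.0/24
            gateway: 12.34.56.1
            dns: [8.8.8.8]
            az: z1
            cloud_properties:
              name: YOUR_PUBLIC_VSPHERE_NETWORK

    compilation:
      workers: 3
      reuse_compilation_vms: true
      vm_type: common
      network: private
      az: z1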

All capitalized fields and IP fields should be replaced with correct values based on your vSphere settings.

We use our BOSH Director as the private network’s gateway by setting up iptables on the Director, following this instruction.
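Setting up such a gateway generally amounts to enabling IP forwarding and NAT; a generic sketch, assuming eth0 is the Director’s outward-facing interface:

    # on the BOSH Director VM
    sudo sysctl -w net.ipv4.ip_forward=1
    sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE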

Deploy


We are going to use kubo-release from the Cloud Foundry community. More deployment instructions can be found here.

1. Download releases

We need to download three releases: kubo, etcd and docker, then upload them to the BOSH Director.
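Assuming the three release tarballs have already been downloaded locally, uploading them looks roughly like this (with the BOSH CLI v1; with CLI v2 the command is bosh upload-release):

    bosh upload release kubo-release.tgz
    bosh upload release etcd-release.tgz
    bosh upload release docker-release.tgz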

2. Generate certificates

Kubernetes requires certificates for the communication between the API server and the kubelets, and also between clients and the API server. The following script will do the job for us. Replace API_PRIVATE_IP and API_PUBLIC_IP with the private and public IPs of the Kubernetes API server.

key-generator.sh
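A sketch of what such a script could look like using openssl; the file names, subjects, and validity periods are assumptions:

    #!/bin/bash
    set -e

    API_PRIVATE_IP=$1
    API_PUBLIC_IP=$2

    mkdir -p certs && cd certs

    # certificate authority
    openssl genrsa -out ca.key 2048
    openssl req -x509 -new -nodes -key ca.key -days 3650 -subj "/CN=kubernetes-ca" -out ca.crt

    # SANs for the API server certificate: both the private and the public IP
    printf '%s\n' \
      '[req]' \
      'req_extensions = v3_req' \
      'distinguished_name = req_distinguished_name' \
      '[req_distinguished_name]' \
      '[v3_req]' \
      'subjectAltName = @alt_names' \
      '[alt_names]' \
      "IP.1 = ${API_PRIVATE_IP}" \
      "IP.2 = ${API_PUBLIC_IP}" \
      > openssl.cnf

    # API server certificate
    openssl genrsa -out apiserver.key 2048
    openssl req -new -key apiserver.key -subj "/CN=kube-apiserver" -config openssl.cnf -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 3650 -extensions v3_req -extfile openssl.cnf -out apiserver.crt

    # client certificate for kubelets and kubectl
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj "/CN=kube-admin" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 3650 -out client.crt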

3. Fill bosh deployment manifest

Replace the placeholder fields with the correct values, and paste the contents of the certificate files generated above into the corresponding fields.

kubernetes.yml
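The full manifest is fairly long; below is a heavily abbreviated sketch of its shape. The instance groups, job names, and properties are illustrative only and almost all properties (certificates, keys, passwords, network settings) are omitted, so treat this as an outline rather than a working manifest:

    name: kubernetes

    releases:
      - {name: kubo, version: latest}
      - {name: etcd, version: latest}
      - {name: docker, version: latest}

    stemcells:
      - alias: trusty
        os: ubuntu-trusty
        version: latest

    instance_groups:
      - name: master
        instances: 1
        azs: [z1]
        vm_type: common
        stemcell: trusty
        networks:
          - name: private
            static_ips: [API_PRIVATE_IP]
            default: [dns, gateway]
          - name: public
            static_ips: [API_PUBLIC_IP]
        jobs:
          - {name: etcd, release: etcd}
          - {name: kube-apiserver, release: kubo}
          - {name: kube-controller-manager, release: kubo}
          - {name: kube-scheduler, release: kubo}
        properties: {}   # API server certificates and credentials go here

      - name: worker
        instances: 2
        azs: [z1]
        vm_type: common
        stemcell: trusty
        networks:
          - name: private
        jobs:
          - {name: docker, release: docker}
          - {name: kubelet, release: kubo}
          - {name: kube-proxy, release: kubo}
        properties: {}   # kubelet certificates and the API server address go here

    update:
      canaries: 1
      max_in_flight: 1
      canary_watch_time: 10000-300000
      update_watch_time: 10000-300000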

In order to access the deployed Kubernetes instance, we need to create a config file:

~/.kube/config
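Roughly like this, assuming the API server listens on port 8443 of the public IP and we use the client certificate generated earlier (paths and names are placeholders):

    apiVersion: v1
    kind: Config
    clusters:
      - name: kubo
        cluster:
          server: https://API_PUBLIC_IP:8443
          certificate-authority: /path/to/certs/ca.crt
    users:
      - name: kube-admin
        user:
          client-certificate: /path/to/certs/client.crt
          client-key: /path/to/certs/client.key
    contexts:
      - name: kubo
        context:
          cluster: kubo
          user: kube-admin
    current-context: kubo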

After your BOSH deployment is done, you should be able to run kubectl cluster-info and see the address of your Kubernetes master.

Test


We can test our Kubernetes cluster by creating a simple Redis deployment using the following deployment file:

redis.yml
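A minimal sketch of such a deployment (the API version and image tag reflect Kubernetes versions of that time and are assumptions):

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: redis-master
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: redis
            role: master
        spec:
          containers:
            - name: redis-master
              image: redis:3.2
              ports:
                - containerPort: 6379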

kubectl create --filename redis.yml will deploy redis. If we type kubectl describe pods redis-master, we should not see any errors.

If you have any questions, leave a comment here or email xuebin.he@emc.com. Thank you!

Deploy Kafka cluster by Kubernetes

Introduction


This blog will show you how to deploy an Apache Kafka cluster on Kubernetes. We assume you already have Kubernetes set up and running.

Apache Kafka is a distributed streaming platform that lets you publish and subscribe to streams of records, similar to an enterprise messaging system.

There are a few concepts we need to know:

  • Producer: an app that publishes messages to a topic in the Kafka cluster.
  • Consumer: an app that subscribes to a topic for messages in the Kafka cluster.
  • Topic: a stream of records.
  • Record: a data block containing a key, a value and a timestamp.

We borrowed some ideas from defuze.org and updated our cluster accordingly.

Pre-start


Zookeeper is required to run Kafka cluster.

In order to deploy Zookeeper in an easy way, we use a popular Zookeeper image from Docker Hub, digitalwonderland/zookeeper. We can create a deployment file zookeeper.yml which will deploy one Zookeeper server.

If you want to scale the Zookeeper cluster, you can basically duplicate the deployment block in the same file and change the configuration to the correct values. You also need to add ZOOKEEPER_SERVER_2=zoo2 to the container env of zookeeper-deployment-1 if scaling to two servers.

zookeeper.yml
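A sketch of a single-server deployment using that image; the names and labels are assumptions, and the ZOOKEEPER_* variables are the ones the image expects:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: zookeeper-deployment-1
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: zookeeper-1
        spec:
          containers:
            - name: zoo1
              image: digitalwonderland/zookeeper
              ports:
                - containerPort: 2181
              env:
                - name: ZOOKEEPER_ID
                  value: "1"
                - name: ZOOKEEPER_SERVER_1
                  value: zoo1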

We can deploy this by:
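Assuming kubectl is pointed at the right cluster:

    kubectl create -f zookeeper.yml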

It’s good to have a service for the Zookeeper cluster. We have a file zookeeper-service.yml to create a service. If you scale up the Zookeeper cluster, you also need to scale up the service accordingly.

zookeeper-service.yml
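A sketch of the matching service, exposing the standard Zookeeper client, follower, and leader-election ports:

    apiVersion: v1
    kind: Service
    metadata:
      name: zoo1
      labels:
        app: zookeeper-1
    spec:
      selector:
        app: zookeeper-1
      ports:
        - name: client
          port: 2181
          protocol: TCP
        - name: follower
          port: 2888
          protocol: TCP
        - name: leader
          port: 3888
          protocol: TCP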

Deploy Kafka cluster


Service

We need to create a Kubernetes service first to front our Kafka cluster deployment. Kafka has no cluster-level leader server (leadership is per partition), so we can talk to any of the brokers. Because of that, we can direct our traffic to any of the Kafka servers.

Let’s say we want to route all our traffic to our first Kafka server, the one with id: "1". We can create a service for Kafka with a file like this:

kafka-service.yml
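A sketch using a LoadBalancer service so that we get an external IP; the labels must match the ones used in the Kafka deployment below:

    apiVersion: v1
    kind: Service
    metadata:
      name: kafka-service
      labels:
        name: kafka
    spec:
      type: LoadBalancer
      selector:
        app: kafka
        id: "1"
      ports:
        - name: kafka-port
          port: 9092
          protocol: TCP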

After the service is created, we can get the external IP of the Kafka service by:
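For example (the address shows up in the EXTERNAL-IP column once the load balancer is provisioned):

    kubectl get service kafka-service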

Kafka Cluster

There is already a well-defined Kafka image on Docker Hub. In this blog, we are going to use the image wurstmeister/kafka to simplify the deployment.

kafka-cluster.yml
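A sketch of a single-broker deployment; the KAFKA_* environment variables are the ones the wurstmeister/kafka image understands, and the advertised host name is assumed to be the external IP of the service above:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: kafka-broker-1
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: kafka
            id: "1"
        spec:
          containers:
            - name: kafka
              image: wurstmeister/kafka
              ports:
                - containerPort: 9092
              env:
                - name: KAFKA_ADVERTISED_PORT
                  value: "9092"
                - name: KAFKA_ADVERTISED_HOST_NAME
                  value: "EXTERNAL_IP_OF_KAFKA_SERVICE"
                - name: KAFKA_ZOOKEEPER_CONNECT
                  value: zoo1:2181
                - name: KAFKA_BROKER_ID
                  value: "1"
                - name: KAFKA_CREATE_TOPICS
                  value: topic1:3:3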

If you want to scale up the Kafka cluster, you can always duplicate the deployment in this file, changing KAFKA_BROKER_ID to another value.

KAFKA_CREATE_TOPICS is optional. If you set it to topic1:3:3, it will create topic1 with 3 partitions and 3 replicas.

Test Setup

We can test the Kafka cluster with a tool named kafkacat, which can act as both a producer and a consumer.
To publish system logs to topic1, we can type:
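Assuming the service’s external IP is stored in KAFKA_IP, something like:

    tail -f /var/log/syslog | kafkacat -b $KAFKA_IP:9092 -t topic1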

To consume the same logs, we can type:
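Again assuming KAFKA_IP holds the external IP:

    kafkacat -C -b $KAFKA_IP:9092 -t topic1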

Upgrade Kafka


Blue-Green update

Kafka itself supports rolling upgrades; you can find more detail on this page.

Since we can access Kafka through any broker of the cluster, we can upgrade one pod at a time. Let’s say our Kafka service is routing traffic to broker1: we can upgrade all the other broker instances first, then change the service to route traffic to any of the upgraded brokers, and finally upgrade broker1.

We can upgrade a broker by replacing the image with the version we want, for example:

image: wurstmeister/kafka:$NEW_VERSION

and then running:
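Assuming the deployment file above:

    kubectl apply -f kafka-cluster.yml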

After applying the same procedure to all other brokers, we can edit our service by:
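kubectl opens the live service definition in an editor:

    kubectl edit service kafka-service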

Change id: "1"to another upgraded broker. Save it and quit. All new connections would be established to the new broker.
At the end, we could upgrade broker1 using above step. But it will kill previous connections of producers and consumers to broker1.

Using Docker Container in Cloud Foundry


As we all know, we can push source code to CF directly, and CF will compile it and create a container to run our application. Life is so great with CF.

But sometimes, for one reason or another, such as our app needing a special setup or wanting to run it on different platforms or infrastructures, we may already have a preconfigured container for our app. This won’t block our way to CF at all. This post will show you how to push Docker images to CF.

Enable docker feature for CF

We can turn on docker support with the following cf command
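With admin privileges:

    cf enable-feature-flag diego_docker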

We can also turn it off by
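And the corresponding disable command:

    cf disable-feature-flag diego_docker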

Push docker image to CF

Unlike the normal way, CF won’t try to build our code and run it inside the image we specified. CF assumes that you have already put everything you need into your Docker image, so we have to rebuild the Docker image every time we push a change to our repository.

We also need to tell CF how to start our app inside the image by specifying the start command. We can either pass it as an argument to cf push or put it into manifest.yml as below.
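A sketch of such a manifest; the app name, image, repository URL, and start command are made up for illustration:

    applications:
      - name: cool-app
        memory: 512M
        instances: 1
        docker:
          image: python:3
        command: git clone https://github.com/our-org/demo.git && cd demo && pip install -r requirements.txt && python app.py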

In this example, we are using an official Docker image from Docker Hub. In the start command, we clone our demo repo from GitHub, do some setup and run our code.

Update Diego with private docker registry

If you are in the EMC network, you may not be able to use Docker Hub due to certificate issues. In this case, you need to set up a private Docker registry. The registry needs to be V2 for now. Also, you have to redeploy your CF or Diego with the changes shown below.
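One relevant change is whitelisting the registry as an insecure Docker registry in the Diego deployment manifest; a sketch (the exact property name and nesting depend on your garden/Diego release version):

    properties:
      garden:
        insecure_docker_registry_list:
          - 12.34.56.78:9000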

Replace 12.34.56.78:9000 with your own Docker registry IP and port.

Then, you need to create a security group so that apps can reach your private Docker registry. You can put the definition of this security group into docker.json as shown below.
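A sketch of the security group rules, again using the placeholder address:

    [
      {
        "protocol": "tcp",
        "destination": "12.34.56.78",
        "ports": "9000"
      }
    ]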

And run
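Assuming the rules file is called docker.json and we name the group docker:

    cf create-security-group docker docker.json
    cf bind-running-security-group docker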

Now you can re-push to CF by
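Pointing cf push at the image in the private registry, for example:

    cf push cool-app -o 12.34.56.78:9000/our-org/cool-image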

How to Set up a Concourse Pipeline

Xuebin He, Dojo Developer

The first step to continuous integration is setting up your own CI pipeline. The #EMCdojo uses Concourse for our own pipeline and we love it! Concourse (the official CI tool for Cloud Foundry) can pull committed code and run tests against it, and even create a release after the tests pass.

Before I tell you HOW, I’ll tell you WHY

In our workspace, our pipeline monitor is displayed on a wall right next to the team. A red box (aka failed task) is a glaring indicator that something went wrong. Usually the first person who notices shouts out “Ooh! What happened?” and then we roll up our sleeves and start debugging. Each job block can be clicked on to get output logs about what happened. The Concourse CLI lets you ‘hijack’ the container running the job for hands-on debugging. Combining these tools, it’s usually fairly quick to find a problem and fix it.

Having this automated setup, it’s easy to push small features one at a time into production and see their immediate effect on the product. We can see if the feature breaks any existing tests (unit, integration, lifecycle, etc). We also push new tests with the new feature and those are added to the pipeline. At the end of the pipeline, we know for sure if the feature is done, or still needs more work.

Step 1: Set up Concourse

Set up Server

The easiest way to set up Concourse is using Vagrant:
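Using the concourse/lite Vagrant box:

    vagrant init concourse/lite
    vagrant up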

You can then access your Concourse at 192.168.100.4:8080.

Download Concourse cli

You can only start, pause, and stop pipelines or tasks from the Concourse website. If you want to configure the pipeline, you have to download fly from Concourse. Fly is the name of the Concourse CLI.
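After downloading the binary from the web UI, something like this puts it on your PATH (paths are assumptions):

    chmod +x ~/Downloads/fly
    sudo mv ~/Downloads/fly /usr/local/bin/fly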

Step 2: Configure Pipeline

Make a CI Folder

You can create your CI folder under the root of your project, as in the layout below.
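A typical layout (the names are conventions, not requirements):

    your-project/
    └── ci/
        ├── pipeline.yml
        ├── secrets.yml      # not committed
        ├── docker/
        │   └── Dockerfile
        └── tasks/
            ├── unit-tests.yml
            └── unit-tests.sh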

pipeline.yml will define what your pipeline looks like.

pipeline.yml
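A sketch of a small pipeline with groups, one git resource, and two jobs; all names and variables are hypothetical:

    groups:
      - name: tests
        jobs: [run-unit-tests]
      - name: all
        jobs: [run-unit-tests, create-release]

    resources:
      - name: project-repo
        type: git
        source:
          uri: {{git-uri}}
          branch: master
          private_key: {{git-private-key}}

    jobs:
      - name: run-unit-tests
        plan:
          - get: project-repo
            trigger: true
          - task: unit-tests
            file: project-repo/ci/tasks/unit-tests.yml
            params:
              API_USER: {{api-user}}
              API_PASSWORD: {{api-password}}
      - name: create-release
        plan:
          - get: project-repo
            passed: [run-unit-tests]
            trigger: true
          - task: build-release
            file: project-repo/ci/tasks/build-release.yml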

So now your pipeline should look like this:

(screenshot: pipeline)

Using groups, we can make different combinations of jobs. Each job can have several tasks. The tasks are located in ci/tasks/*.yml.

task.yml
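A sketch of one task definition, for the hypothetical unit-tests task above; the image and param names are assumptions:

    platform: linux

    image_resource:
      type: docker-image
      source: {repository: golang, tag: "1.8"}

    inputs:
      - name: project-repo

    outputs:
      - name: test-results

    params:
      API_USER:
      API_PASSWORD:

    run:
      path: project-repo/ci/tasks/unit-tests.sh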

This defines a task. A task is like a function from inputs to outputs that can succeed or fail. Each task runs in a separate container, so you have to give the address of the Docker image that you want to use. You can put the Dockerfile under ci/docker/. The inputs are already defined in pipeline.yml; the duplication here is to make it easy to run one-off tests. The outputs of a task can be reused by later tasks in the same job.

Make a Secret File

You have to generate a secrets file that contains all of the variables required by the pipeline. All required variables appear in pipeline.yml wrapped in double curly braces.

secrets.yml
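Matching the variables used in the pipeline sketch above (values are placeholders):

    git-uri: git@github.com:our-org/project.git
    git-private-key: |
      -----BEGIN RSA PRIVATE KEY-----
      ...
      -----END RSA PRIVATE KEY-----
    api-user: admin
    api-password: very-secret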

Set Pipeline
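Assuming a target named lite pointing at the Vagrant box and a pipeline named my-pipeline:

    fly --target lite login --concourse-url http://192.168.100.4:8080
    fly --target lite set-pipeline \
      --pipeline my-pipeline \
      --config ci/pipeline.yml \
      --load-vars-from ci/secrets.yml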

Start Pipeline

The initial state of the pipeline is paused. You have to start it by clicking the menu button on the Concourse website, or with fly:
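For the hypothetical pipeline above:

    fly --target lite unpause-pipeline --pipeline my-pipeline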

Run One-off

You can run a one-off build of a specific task. This will not show up in the pipeline.

one-off.sh
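A sketch of such a wrapper script; the variable names and input paths are made up:

    #!/bin/bash
    # environment variables (params) go above fly execute
    API_USER=admin \
    API_PASSWORD=very-secret \
    fly --target lite execute \
      --config ci/tasks/unit-tests.yml \
      --input project-repo=.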

The lines above fly execute set the environment variables (params), and the lines below give the inputs of the task. Both are already declared in ci/tasks/*.yml.

Debug

You can hijack into the container that is running the task that you want to debug by:
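For example, for a job in the pipeline:

    fly --target lite hijack --job my-pipeline/run-unit-tests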

If you run one-off, you can just run:
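Using the build number of the one-off build (42 is a placeholder):

    fly --target lite hijack --build 42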

You can find the build number by clicking the top right button on your pipeline page.

And you’re done! And remember: Continuous Integration = Continuous Confidence.

If you have any questions, please comment below.