Concourse Tutorial

Xuebin He


Why we are here

If you are reading this blog, you probably already know the benefits of Continuous Integration and Continuous Delivery, and Concourse is one of the great options out there. But you may not know where and how to start this new DevOps journey. This blog will try to clear up those concerns in the following chapters.

Chapter 1

What is Concourse

Concourse started as a side project inside Pivotal. It quickly became very popular in the Cloud Foundry community and at Pivotal, and now in the wider DevOps world, because of benefits like a yml-defined pipeline, containerized environments, and a well-defined API for supporting different resources.

Why Concourse

Let’s talk about some of the benefits a little.
Let’s talk about some of the benefits a little.
The pipeline in Concourse is defined in a single yml file, so it can be placed in the project repository and tracked by the VCS. This way, the pipeline always stays aligned with the code and tests. It’s also a good opportunity to get Dev and Ops sitting together.
Having containerized environments for different tests makes sure there is no pollution between tests and testing environments. Users have control over which Docker images are used in each test, and those Dockerfiles can also be tracked. If things go wrong during a test, the user can intercept (hijack) the failing container and debug inside it.
Concourse supports a reasonably large number of plugins for different abilities, like building and pushing Docker images, or sending Slack messages to the team if a build fails… You get the idea.

How to deploy

You can always deploy Concourse using BOSH if you are familiar with it. If not, don’t worry: Concourse also provides a deployment using docker-compose.
Once it’s deployed, we can download Concourse’s CLI, named fly, from its home page and put it into our PATH.
First we need to log in. Something like the following works, where cool is the alias we pick for this target and the URL is a placeholder for your own Concourse instance:
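    fly --target cool login --concourse-url http://<your-concourse-ip>:8080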
This will create a yml file at ${HOME}/.flyrc containing lines like the following:
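    targets:
      cool:
        api: http://<your-concourse-ip>:8080
        team: main
        token:
          type: Bearer
          value: <your-token>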
In the above file, cool is an alias for this Concourse instance. Using it, we can run fly without typing the URL and credentials every time. For example, if Concourse is upgraded and we want to upgrade the local fly to match, we can use the following command:
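    fly --target cool sync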

Chapter 2

Baby Crawl

A pipeline normally consists of many different tasks. Concourse is able to run a task in a container and react to its result; that’s why tasks can make the pipeline very colorful. A task is defined in a yml file. This file only defines what is needed to run the task; it doesn’t specify how the different dependencies get into its running container.
For instance, let’s say we have a project called cool-project, and this project needs to be run. This is done through a script, in this case a-cool-task, which is placed at cool-project/ci/tasks/a-cool-task.sh. This cool task is saying: “I need to run in a container from an Ubuntu image, and I expect a folder called cool-project in my working directory with the file cool-project/ci/tasks/a-cool-task.sh inside it. Lastly, A_VERY_COOL_ENV is required for me to start.”
a-cool-task.yml
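A definition matching that description could look roughly like this (the docker-image resource type and the ubuntu repository follow the prose above):

    platform: linux

    image_resource:
      type: docker-image
      source:
        repository: ubuntu

    inputs:
      - name: cool-project

    params:
      A_VERY_COOL_ENV:

    run:
      path: cool-project/ci/tasks/a-cool-task.sh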
Say we have this cool-project at ${HOME}/workspace/cool-project, and our task definition file is colocated with the task script. We can run this task using fly like this (here we rely on fly execute filling task params from matching local environment variables):
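    cd ${HOME}/workspace
    A_VERY_COOL_ENV=so-cool fly --target cool execute \
      --config cool-project/ci/tasks/a-cool-task.yml \
      --input cool-project=cool-project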
The execute command asks Concourse to run this task as a one-off, not as part of any pipeline. Concourse will create a new container from ubuntu and populate A_VERY_COOL_ENV into it. It will also copy the whole cool-project folder into the working directory, denoted by ${WORK_DIR} for now. Then, it will run ${WORK_DIR}/cool-project/ci/tasks/a-cool-task.sh to start the task script.
Normally, execute helps us a lot during the development of the pipeline itself, because it provides a much faster feedback loop.

Baby steps

Now, let’s build a dummy pipeline and put it into cool-project/ci/pipeline.yml. We can put all pipeline-related files into the ci folder.
pipeline.yml
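A minimal sketch of such a pipeline (the resource URI and env value are left as {{variables}}, filled in below):

    resources:
      - name: cool-project
        type: git
        source:
          uri: {{cool-project-git-uri}}
          branch: master

    jobs:
      - name: a-cool-job
        plan:
          - get: cool-project
            trigger: true
          - task: a-cool-task
            file: cool-project/ci/tasks/a-cool-task.yml
            params:
              A_VERY_COOL_ENV: {{a-very-cool-env}}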
A pipeline normally has two parts: resources and jobs. Resources are the definitions of dependencies, like a git repository such as cool-project, a Docker image, etc. Concourse downloads those resources and puts them into the containers where the jobs run.
Jobs are the core part of a pipeline; they define its whole workflow. For example, in the above pipeline we only have one job, a-cool-job, and it has two steps that run in sequence in the order written in the job definition. The first step is a get that downloads the cool-project repo and is triggered by each new commit. The second step runs the task we defined earlier (a-cool-task).
The last part is the variables wrapped in double braces {{}}. Those variables are evaluated when we set up the pipeline. Their values can be stored in a separate yml file that is not put inside the repository, because this file normally contains secrets, like the following one:
secrets.yml 
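For example (both values are placeholders matching the variables in the sketch above):

    cool-project-git-uri: git@github.com:<you>/cool-project.git
    a-very-cool-env: so-cool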
We can set up the pipeline by (the pipeline name a-cool-pipeline is our choice):
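    fly --target cool set-pipeline \
      --pipeline a-cool-pipeline \
      --config cool-project/ci/pipeline.yml \
      --load-vars-from secrets.yml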
Now if we open our pipeline in a browser, it should look something like this:
[Image: a-cool-pipeline.png]
Black boxes stand for resources, while colored boxes stand for jobs.
If we click this job box, we should see the latest build of it.
[Image: a-cool-job.png]

Chapter 3

Making trouble

With the pipeline, we can run into two different kinds of trouble: a commit can fail the pipeline, or the pipeline can fail on its own. Let’s talk about the first one first.
Hooks
When a commit fails, our pipeline will turn the job box red and will not go any further. At this point, we probably want to do some damage control or “blame” the troublemaker (for example, send him/her a Slack message).
Concourse hooks enable us to trigger another script when a task has failed or succeeded.
For example, with the following job definition, if any step inside the plan fails, Concourse will run the task under the on_failure hook, which in this case alerts this is not that cool!. We can also do something when the job succeeds with on_success, or when it is aborted with on_abort.
job.yml
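A sketch of such a job (the inline alert task simply echoes the message; real damage control could be a Slack notification task instead):

    - name: a-cool-job
      plan:
        - get: cool-project
          trigger: true
        - task: a-cool-task
          file: cool-project/ci/tasks/a-cool-task.yml
      on_failure:
        task: alert
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: ubuntu}
          run:
            path: echo
            args: ["this is not that cool!"]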

 

[Image: a-cool-failing-job.png]
Hijack
With hooks, we are able to do some damage control, but we still eventually need to know why it failed. We can hijack into the container that ran a particular task by:
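    fly --target cool hijack --job a-cool-pipeline/a-cool-job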
Now we just need to select, from the output, the number of the step where the job failed.
Because Concourse has its own garbage collection system, it removes inactive containers after a certain amount of time. We can set the interval at which GC cleans up by adding the --gc-interval option to the start command of the Concourse web instance.
One-off
As in Baby Crawl, we can debug a task separately with the same inputs it gets from the pipeline. We do this because we get much faster feedback while trying to fix the task. For the same reason, we also use one-offs a lot while developing a new task.

Chapter 4

Playdate

Concourse cannot do everything. As users, we still need to write some code to solve some problems ourselves. For example, artifacts built in one job cannot be passed directly to the next job.
However, Concourse provides a well-defined pattern for developers to create plugins that let Concourse do things it would not normally be able to do. For example, the s3 plugin makes it easy to upload artifacts to, and download them from, Amazon S3 to pass artifacts between jobs.
Sometimes our test involves multiple stages, like: clean existing servers, deploy new servers, then run the tests. Once a commit enters these stages, we want to make sure no other commit enters them until the first commit is finished, since we only have limited resources to run servers. To prevent other commits from entering these stages, we can use the pool plugin.
pipeline.yml
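A sketch of the locking pattern (the locks repo URI, pool name, and task files are placeholders; the pool resource supports claim and release params):

    resources:
      - name: cool-project
        type: git
        source:
          uri: {{cool-project-git-uri}}
      - name: locks
        type: pool
        source:
          uri: {{locks-git-uri}}
          branch: master
          pool: a-cool-pool
          private_key: {{locks-private-key}}

    jobs:
      - name: deploy-servers
        plan:
          - get: cool-project
            trigger: true
          - put: locks
            params: {claim: a-cool-lock}
          - task: clean-and-deploy
            file: cool-project/ci/tasks/deploy.yml
        on_failure:
          put: locks
          params: {release: locks}

      - name: acceptance-test
        plan:
          - get: cool-project
            passed: [deploy-servers]
            trigger: true
          - get: locks
            passed: [deploy-servers]
          - task: run-tests
            file: cool-project/ci/tasks/acceptance.yml
        ensure:
          put: locks
          params: {release: locks}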
The above pipeline has two jobs to complete the acceptance test. The first job tries to acquire a-cool-lock from the locks resource and will wait until it has been released by the previous holder. If the job itself fails, it releases the lock. The lock is also released at the end of the acceptance test, whether it fails or succeeds. This ensures the lock becomes available again whenever no job is using it.

Chapter 5

Tricks

Dockerfile
Every task runs in a fresh container. Most of the time, we have to install some dependencies for our tests or other tasks to be able to run. We shouldn’t let the pipeline spend too much time on something not directly related to the job(s) at hand.
Starting the fresh container from an image that already has those dependencies preinstalled makes feedback from the pipeline much faster. Because those Dockerfiles might be very specific to the pipeline, placing them in the ci/docker/ folder helps the team maintain them and enjoy the benefit of version tracking.
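A sketch of such a Dockerfile (the base image and package list are just examples; yours would install whatever the tests need):

    FROM ubuntu:16.04

    # Preinstall test dependencies so the task container starts ready to run
    RUN apt-get update && apt-get install -y \
        curl \
        git \
        python3 \
     && rm -rf /var/lib/apt/lists/*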
YML
Similar jobs may share a lot of the same steps or variables, and it can be very painful to keep these steps and variables consistent throughout the entire pipeline definition as some of them change, because changing one step or variable then requires changing it again everywhere else it appears. YAML anchors solve this, as in the sketch below.
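A sketch (the Kubernetes param names are made up for illustration):

    jobs:
      - name: a-cool-job
        plan:
          - task: deploy
            file: cool-project/ci/tasks/deploy.yml
            params: &kube-secrets
              KUBE_API_URL: {{kube-api-url}}
              KUBE_USERNAME: {{kube-username}}
              KUBE_PASSWORD: {{kube-password}}
      - name: another-cool-job
        plan:
          - task: test
            file: cool-project/ci/tasks/test.yml
            params:
              <<: *kube-secrets
              EXTRA_COOL_ENV: so-cool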
For example, both of the jobs above use Kubernetes and require secrets for the Kubernetes instance. &kube-secrets creates a reference to those params, and <<: *kube-secrets in the second job is replaced by the params from the first job when the pipeline is set up with fly set-pipeline.

Deploy Kubernetes on vSphere using Bosh

Step by step tutorial on deploying kubo deployment

Thinh Nguyen

Software Engineer 2 at Dell EMC

How to deploy KUBO on vSphere

 

Thanks to Amanda Alvarez for helping me write this blog post.

Table of contents

  1. Deploy Jumpbox
  2. Deploy Bosh Director
  3. Deploy Kubo

Our deployment uses the following configuration settings.

1. Deploy Jumpbox

Clone the repo
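Assuming the community jumpbox-deployment repo is the one in question:

    git clone https://github.com/cppforlife/jumpbox-deployment.git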

Create ops file

Create secrets file

Create jumpbox
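A sketch of the create-env command (the ops file and variable names follow jumpbox-deployment's vSphere conventions and may differ slightly by version; IPs and credentials are placeholders):

    bosh create-env jumpbox-deployment/jumpbox.yml \
      --state jumpbox-state.json \
      --vars-store jumpbox-creds.yml \
      -o jumpbox-deployment/vsphere/cpi.yml \
      -v internal_cidr=10.0.0.0/24 \
      -v internal_gw=10.0.0.1 \
      -v internal_ip=10.0.0.5 \
      -v network_name=PRIVATE_NETWORK \
      -v vcenter_dc=MY_DATACENTER \
      -v vcenter_ds=MY_DATASTORE \
      -v vcenter_cluster=MY_CLUSTER \
      -v vcenter_ip=VCENTER_IP \
      -v vcenter_user=VCENTER_USER \
      -v vcenter_password=VCENTER_PASSWORD \
      -v vcenter_templates=jumpbox-templates \
      -v vcenter_vms=jumpbox-vms \
      -v vcenter_disks=jumpbox-disks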

Create ssh key for jumpbox
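The generated creds file contains an SSH key for the jumpbox user; a sketch of extracting it and logging in:

    bosh int jumpbox-creds.yml --path /jumpbox_ssh/private_key > jumpbox.key
    chmod 600 jumpbox.key
    ssh jumpbox@<jumpbox-ip> -i jumpbox.key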

IP Forwarding for the Jumpbox

Let’s say that the jumpbox is going to have the following configuration:

  • eth0: Private network, connected to our local network

  • eth1: Public network, connected to the internet

We want to access the public network through the private network with the following steps.
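A sketch of the usual iptables setup, run on the jumpbox (interface names follow the layout above):

    # Enable forwarding, and NAT traffic from the private side out through eth1
    sudo sysctl -w net.ipv4.ip_forward=1
    sudo iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
    sudo iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
    sudo iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT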

2. Deploy Bosh Director

Git clone kubo-deployment
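The repo lived in the cloudfoundry-incubator organization at the time:

    git clone https://github.com/cloudfoundry-incubator/kubo-deployment.git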

Generate kubo-deployment env
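kubo-deployment ships a helper for this; a sketch (the arguments are the env directory, env name, and IaaS):

    cd kubo-deployment
    ./bin/generate_env_config ~/kubo-env kubo vsphere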

Modify the secrets in the kubo env

Set the jumpbox as the BOSH proxy and deploy the BOSH Director, accessing the Director through the jumpbox (instead of working on the jumpbox itself).
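A sketch, reusing the SSH key from earlier (the deploy script's exact arguments vary by kubo-deployment version):

    # Tunnel all BOSH CLI traffic through the jumpbox via SOCKS5
    export BOSH_ALL_PROXY=ssh+socks5://jumpbox@<jumpbox-ip>:22?private-key=jumpbox.key
    ./bin/deploy_bosh ~/kubo-env/kubo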

The deployment may fail at the last step because it cannot download bosh-dns-release?v=0.0.11. To fix this, download and upload the release manually:
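    wget "https://bosh.io/d/github.com/cloudfoundry/bosh-dns-release?v=0.0.11" -O bosh-dns-release.tgz
    bosh upload-release bosh-dns-release.tgz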

When finished, try:
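For example (the CA cert path depends on where your credentials were generated; kubo is just our alias):

    bosh alias-env kubo -e <director-ip> --ca-cert <path-to-director-ca-cert>
    bosh -e kubo env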

3. Deploy Kubo

Download and upload releases
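A sketch, assuming the kubo, etcd, and docker release tarballs have been downloaded locally (e.g. from bosh.io):

    bosh -e kubo upload-release kubo-release.tgz
    bosh -e kubo upload-release etcd-release.tgz
    bosh -e kubo upload-release docker-release.tgz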

Deploy Kubo
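Using kubo-deployment's helper script (arguments per its docs at the time; my-cluster is a placeholder deployment name):

    ./bin/deploy_k8s ~/kubo-env/kubo my-cluster public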

Introducing Tyro [Powered by Dell EMC Dojo]!

Your team's way of embarking on the Dojo Path to Enlightenment

Emily Kaiser


Head of Marketing @DellEMCDojo #CloudFoundry #OpenSource #TheWay #LeanPractices #DevOps #Empathy

The Way is spreading like wildfire and we could not be happier about it. The methodology, in particular, has unprecedented buy-in from all levels. We get questions multiple times a day about how and when teams can pair with us. At first we tried to accommodate all requests, and to do so by reiterating that it was an all or nothing sort of deal. We found that not only was this impossible given the time and resource constraints on our end, it was proving to be impossible for those interested to get in the door. This defeats the entire evangelization portion of our two-fold mission. So, as is common practice, we went back to the drawing board and pivoted.

It is with great pride and excitement that we introduce to you the Dojo Path to Enlightenment, a step by step (or belt by belt) program that outlines for you and your team EXACTLY how to adopt The Way. This four-belt program will lead your team to certified Dojo status (i.e. enlightenment) with the intermediary step of becoming a Tyro. Through ‘customer’ discovery and framing exercises, we have discovered that many teams within the organization want concrete goals that can be attained during the journey toward becoming a Dojo. So, we gathered this feedback and created the Tyro [powered by Dell EMC Dojo], which stands as the physical manifestation of the accomplishment reached by those who have completed the first two steps of the Dell EMC Dojo Path to Enlightenment. This is a center where practices as outlined by the Lean Startup and Running Lean are understood and implemented.

Sounds pretty cool, right? So what’s stopping you? Check out the chart below and get ready for the journey of a lifetime. As always, reach out to us if you have any further questions!

Cloud Foundry on Kubernetes

Amanda Alvarez

Hello 🙂

The Dojo team is here again. Recently, we worked on a project called Cloud Foundry (CF) on Kubernetes (K8S) and we are thrilled to share with you how we did it.

Table of contents:
1. Why CF on K8S?
2. Architecture
3. Demo Video

1. Why CF on K8S?

Putting CF on K8S is a good idea because of its ease of deployment, resource utilization, and flexibility.

  • The overall deployment of CF on K8S is simpler than on other IaaS platforms because creating and destroying containers is quicker than doing the same with VMs. Therefore, we save a significant amount of time and resources when we deploy the CF components, which involve more than ten VMs. Woah! 😮
  • Having CF on K8S helps utilize resources better. A traditional Diego Cell VM consumes more resources when NATS or Consul are deployed as separate VMs, because resources are assigned to each VM. For example, the NATS and Consul VMs each need 2GB of RAM on top of the Diego Cell using 5GB of RAM; this is not an efficient way of using resources, and it does not scale. Instead of deploying two VMs for NATS and Consul, we can deploy them as jobs in two containers sitting inside a node (or VM) that the Diego Cell has access to. Inside these nodes, the containers share the resources allocated to the VM.
  • We can treat K8S as an IaaS and put CF on top of it. Because K8S can be deployed on any IaaS (including bare metal), CF can then run in any environment K8S runs in.

2. Architecture

Figure 1: CF on K8S on GCP architecture
There is a Kubernetes cluster of nodes (or VMs) that resides in GCP. Inside the Kubernetes cluster, there are CF components utilizing the K8S nodes. The Diego Cell nodes are separate from the main CF components because it is difficult to run containers within a container (as Diego is also a container orchestrator).

3. Demo Video

Special thanks to the kubernetes bosh cpi team from SAP for helping us. Please check out their repo at kubernetes_cpi.

A Week in the Dojo

Amanda Alvarez

Hello readers! My name is Amanda and I am the Dojo’s newest member here in Cambridge. Words cannot describe how excited I am to be here! I am normally afraid of big changes, but I felt comfortable with this new beginning as I had a gut feeling this team would help me begin my career in this journey. It really helped my anxiety when Victor Fong gave everyone on the team a Lego Pokemon toy after he just returned from a trip. Here he is!

Once I met my team I jumped right into standup, with literally no time wasted getting my day started. Shortly after, I was paired to work on a UI project. This exposure to code without needing documentation about it really blew my mind. I got to see Ruby, HTML/CSS, git, and Angular JS all in one day! Later, I rotated with my team to work on deploying Kubernetes. It can be challenging to understand what is going on, but that is because this is my first time really using things like Ruby or cloud technologies. My week with this new team finished with retro on Friday, when we all get together and talk about the good, okay, and bad things that happened during the week. I admitted in the bad category that I felt ashamed for taking too long to learn, but the whole team reassured me that these things take time and that I will get better.

So what have I learned?

  • Ask lots of questions. People want you to learn! There is no such thing as a stupid question. 🙂
  • Things will break. Sometimes it is an easy fix like adding a missing parenthesis. Sometimes it is a challenge that takes a day or two to figure out.
  • Ruby is a weird language. That is all I have to say about that.
  • DevOps is a really efficient way of rapidly delivering code.
  • Test Driven Development and pair programming made my first week feel almost seamless. I say “almost” because I have so much learning to do in order to get familiar with this kind of environment.
  • Tools such as Diego, Kubernetes, and Bosh can do many “things.” You might ask, “What kind of things?” And I could probably tell you they help manage deployment of containers and VMs.
  • Don’t be afraid to make mistakes. They’re something to be learned from.
  • Everyone has something to bring to the table. Share your ideas, even if you might be disagreeing with others.

Hopefully this gives you an idea of how much I learned during my first week. I am fortunate to be working with intelligent individuals who make up an amazing team. In my past experiences, I have never worked as closely with people as I have since working here. No more contributing to one thing from the confines of the cube of solitude; instead, I work with people on multiple projects at any given moment. It is so easy to ask anyone what is going on because they are all familiar with the ongoing projects. Being able to pair with someone has definitely made my transition into this role feel easier. This team is passionate about what they do, and it really motivates me to do my best to get up to speed with their skills. At the time of this post, I have been working in this role for two weeks now, and the time feels like it elapsed in seconds. I wake up every day to come to work feeling energized and thrilled to be at the office. Hopefully I can share something more technical next time!

~$ whoami
Before I go, I should probably share a few things about me. My favourite hobbies include: gardening, 3D printing, video games, and reading. This year I have successfully grown various herbs, such as basil and parsley, and I am an avid succulent/cactus collector. I like to 3D print miniatures and tiles that get painted for D&D, which I play occasionally when I find a good group to play with. I have always been a PS2 girl at heart, but I have been playing PC games for the last 4 years now. Lastly, I like to read mostly sci-fi books and I am currently reading through Stephen King’s “The Dark Tower” series. This pretty much sums me up outside of work. So feel free to reach out if you ever want to talk about what I do for work or my hobbies! I would love to get to know more people in this community. 🙂

Why The Dojo Matters

a guide to digital transformation wrapped up in a scipab

Emily Kaiser


Head of Marketing @DellEMCDojo #CloudFoundry #OpenSource #TheWay #LeanPractices #DevOps #Empathy

 

The Situation | Digital transformation has become a nuanced term. Somewhat like ice cream in the summer. When you see someone eating your favorite flavor on a hot summer day, it incites an immediate craving. You know that you want the cone, that it will bring you a sense of happiness not only in the taste, but also maybe in the social outing it will revolve around. But then you think about that bathing suit you hope to wear later in the day or the water that will then be needed to quench your thirst, and oftentimes there is hesitation in fulfilling the craving you know ultimately will bring no regret. Okay, maybe that’s a little too distilled of an explanation, but I hope you get the point. Digital Transformation. We know it is important, in fact it is the impending future no matter how much you try to avoid or deny it. But then by embarking on the journey, you also know that it will create work and a probable disruption in your comfortable ‘plan’. So you begin to question the value. And begin to cringe at the term, or try to validate your thought that the term has too much hype.

The Complication | This hesitancy and resistance to diving head first into the process actually hinders long-term success. Companies that are not investing fully in money, support, and effort run the risk of falling behind competitors. In order to build higher-quality products at a more rapid speed, there comes a time when the company, each of its teams, and its employees need to embrace and work through true digital transformation. But it’s hard. Really hard.

The Implication | As seen in the attached image, companies risk losing the chance to capture the expected 30% of revenue by 2020 that comes from customers investing to be part of the movement. Not only this, but if companies don’t move quickly they will miss the sweet spot in the maturity of digital transformation that their competitors are gaining as time lapses. This most immediately causes a decline in Net Promoter Score, which arguably is now more important than any vanity metric (i.e. how many lines of code are written, number of commits, etc.) ever was. And once trust is lost, it is close to impossible to rebuild, especially in the Fortune 500 customer base.

Position | Now more than ever, we need to look past the nuance and move our teams toward modernization. At the Dojo, our mission is two-fold: to practice modern software development methodology (XP, Lean Startup) and further evangelize ‘The Way’ to internal Dell EMC product teams, and to contribute to Cloud Foundry Foundation sanctioned OS projects. We are very lucky to work for a company that is investing in this and fully understands that in order to stay alive, and most importantly thrive, as IT leaders we must continue to scale in this world of Digital Transformation. Our power at the Dojo lies in the buy-in from all levels.

Action | Our power as Dell EMC on the Digital Transformation world stage lies in the buy-in from every member of the company. It has been proven time and time again that customers LOVE the modern way in which we are building software. There are definitely challenges to rewiring the way that we work and the way that we measure the work we produce, but with hard work, comes not only a thrilling journey, but a highly productive one that produces amazingly positive results. There is no better time than now to jump on this Digital Transformation train.

Benefit | Use the Dojo as a testament and witness to all of the aforementioned sentiments and Digital Traction Metrics as seen in the attached image. Join us in paving the path to the Future. And eat an ice cream cone while you are at it.

Deploy Kubernetes on vSphere using BOSH – Kubo

Xuebin He


Introduction


During Cloud Foundry Summit 2017, Kubo was released. The name originated from the combination of Kubernetes and BOSH. Now we can deploy Kubernetes on many different IaaS platforms using BOSH. It’s the first step toward integrating Kubernetes into Cloud Foundry.

In this post, we are going to deploy a Kubernetes instance on vSphere using Bosh.

Prerequisite


We assume you already have a BOSH Director running, plus one public network and one private network ready on vSphere. Your cloud-config would look like this:

cloud-config.yml
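A sketch of a vSphere cloud-config (the AZ, VM types, and network names and ranges are placeholders to adapt):

    azs:
    - name: z1
      cloud_properties:
        datacenters:
        - name: YOUR_DATACENTER
          clusters:
          - YOUR_CLUSTER: {}

    vm_types:
    - name: master
      cloud_properties:
        cpu: 2
        ram: 4096
        disk: 10240
    - name: worker
      cloud_properties:
        cpu: 4
        ram: 16384
        disk: 30720

    networks:
    - name: private
      type: manual
      subnets:
      - range: 10.0.0.0/24
        gateway: 10.0.0.1
        dns: [8.8.8.8]
        azs: [z1]
        cloud_properties:
          name: YOUR_PRIVATE_NETWORK
    - name: public
      type: manual
      subnets:
      - range: 192.168.0.0/24
        gateway: 192.168.0.1
        dns: [8.8.8.8]
        azs: [z1]
        cloud_properties:
          name: YOUR_PUBLIC_NETWORK

    compilation:
      workers: 3
      reuse_compilation_vms: true
      az: z1
      vm_type: worker
      network: private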

All capitalized fields and IP fields should be replaced with correct values based on your vSphere settings.

We use our BOSH Director as the private network’s gateway by setting up iptables on the Director, following this instruction.

Deploy


We are going to use kubo-release from the Cloud Foundry community. More deployment instructions can be found here.

1. Download releases

We need to download three releases: kubo, etcd and docker. Then upload them to bosh director.
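A sketch (the tarballs come from bosh.io or the projects’ release pages):

    bosh upload-release kubo-release.tgz
    bosh upload-release etcd-release.tgz
    bosh upload-release docker-release.tgz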

2. Generate certificates

Kubernetes requires certificates for the communication between the API server and the kubelets, and also between clients and the API server. The following script will do the job for us. Replace API_PRIVATE_IP and API_PUBLIC_IP with the private and public IPs of the Kubernetes API server.

key-generator.sh
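A minimal sketch of what such a script can look like with openssl (file names and validity period are arbitrary choices):

    #!/bin/bash
    set -e

    API_PRIVATE_IP=$1
    API_PUBLIC_IP=$2

    # Self-signed CA for the cluster
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout ca.key -out ca.crt -subj "/CN=kubernetes-ca"

    # API server certificate, valid for both the private and public IPs
    cat > openssl.cnf <<EOF
    [req]
    req_extensions = v3_req
    distinguished_name = req_distinguished_name
    [req_distinguished_name]
    [v3_req]
    subjectAltName = IP:${API_PRIVATE_IP},IP:${API_PUBLIC_IP}
    EOF

    openssl req -newkey rsa:2048 -nodes -keyout apiserver.key \
      -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -out apiserver.crt -days 365 \
      -extensions v3_req -extfile openssl.cnf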

3. Fill bosh deployment manifest

Replace the placeholder fields with the correct values, and paste the contents of the certificate files generated above into the corresponding fields.

kubernetes.yml

In order to access the deployed Kubernetes instance, we need to create a config file:

~/.kube/config
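A minimal config of this shape works (the admin password comes from the deployment manifest; the API address and port are placeholders):

    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        server: https://API_PUBLIC_IP:8443
        insecure-skip-tls-verify: true
      name: kubo
    contexts:
    - context:
        cluster: kubo
        user: admin
      name: kubo
    current-context: kubo
    users:
    - name: admin
      user:
        username: admin
        password: KUBE_ADMIN_PASSWORD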

After your bosh deployment is done, you should be able to run kubectl cluster-info and see the addresses of the Kubernetes master and cluster services.

Test


We can test our Kubernetes instance by creating a simple Redis deployment using the following deployment file:

redis.yml
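A sketch of a minimal Redis deployment (the API version matches Kubernetes releases of that era):

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: redis-master
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: redis
            role: master
        spec:
          containers:
          - name: master
            image: redis
            ports:
            - containerPort: 6379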

kubectl create --filename redis.yml will deploy Redis. If we run kubectl describe pods redis-master, we should not see any errors.

If you have any questions, leave a comment here or email xuebin.he@emc.com. Thank you!

Deploy a Kafka Cluster on Kubernetes

Xuebin He


Introduction


This blog will show you how to deploy an Apache Kafka cluster on Kubernetes. We assume you already have Kubernetes set up and running.

Apache Kafka is a distributed streaming platform that lets you publish and subscribe to streams of records, similar to an enterprise messaging system.

There are a few concepts we need to know:

  • Producer: an app that publishes messages to a topic in the Kafka cluster.
  • Consumer: an app that subscribes to a topic for messages in the Kafka cluster.
  • Topic: a stream of records.
  • Record: a data block containing a key, a value, and a timestamp.

We borrowed some ideas from defuze.org and updated our cluster accordingly.

Pre-start


Zookeeper is required to run a Kafka cluster.

In order to deploy Zookeeper in an easy way, we use a popular Zookeeper image from Docker Hub, digitalwonderland/zookeeper. We can create a deployment file zookeeper.yml which will deploy one Zookeeper server.

If you want to scale the Zookeeper cluster, you can duplicate the code block within the same file and change the configuration to the correct values. You also need to add ZOOKEEPER_SERVER_2=zoo2 to the container env of zookeeper-deployment-1 if scaling to two servers.

zookeeper.yml
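A sketch (the env vars are the ones the digitalwonderland/zookeeper image reads):

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: zookeeper-deployment-1
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: zookeeper-1
        spec:
          containers:
          - name: zoo1
            image: digitalwonderland/zookeeper
            ports:
            - containerPort: 2181
            env:
            - name: ZOOKEEPER_ID
              value: "1"
            - name: ZOOKEEPER_SERVER_1
              value: zoo1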

We can deploy this by:
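    kubectl create -f zookeeper.yml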

It’s good to have a service for the Zookeeper cluster. We have a file zookeeper-service.yml to create one. If you scale up the Zookeeper cluster, you also need to scale up the service accordingly.

zookeeper-service.yml
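A sketch; the service name zoo1 is what the Kafka brokers will use to reach Zookeeper:

    apiVersion: v1
    kind: Service
    metadata:
      name: zoo1
      labels:
        app: zookeeper-1
    spec:
      ports:
      - name: client
        port: 2181
        protocol: TCP
      - name: follower
        port: 2888
        protocol: TCP
      - name: leader
        port: 3888
        protocol: TCP
      selector:
        app: zookeeper-1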

Deploy Kafka cluster


Service

We need to create a Kubernetes service first to front our Kafka cluster deployment. Kafka has no leader at the server level, so we can talk to any of the servers; because of that, we can redirect our traffic to any of the Kafka brokers.

Let’s say we want to route all our traffic to our first Kafka server, the one with id: "1". We can create a file like this to define a service for Kafka.

kafka-service.yml
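A sketch (type: LoadBalancer provides the external IP mentioned below; the selector pins traffic to broker 1):

    apiVersion: v1
    kind: Service
    metadata:
      name: kafka-service
      labels:
        name: kafka
    spec:
      type: LoadBalancer
      ports:
      - port: 9092
        targetPort: 9092
        protocol: TCP
      selector:
        app: kafka
        id: "1"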

After the service is created, we can get the external IP of the Kafka service by:
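    kubectl get services kafka-service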

Kafka Cluster

There is already a well-defined Kafka image on Docker Hub. In this blog, we are going to use the image wurstmeister/kafka to simplify the deployment.

kafka-cluster.yml
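A sketch of one broker’s deployment (the env var names are the ones the wurstmeister/kafka image reads; the advertised host should be the service’s external IP):

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: kafka-broker-1
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: kafka
            id: "1"
        spec:
          containers:
          - name: kafka
            image: wurstmeister/kafka
            ports:
            - containerPort: 9092
            env:
            - name: KAFKA_ADVERTISED_PORT
              value: "9092"
            - name: KAFKA_ADVERTISED_HOST_NAME
              value: <kafka-service-external-ip>
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zoo1:2181
            - name: KAFKA_BROKER_ID
              value: "1"
            - name: KAFKA_CREATE_TOPICS
              value: topic1:3:3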

If you want to scale up the Kafka cluster, you can always duplicate the deployment block within this file, changing KAFKA_BROKER_ID to another value.

KAFKA_CREATE_TOPICS is optional. If you set it to topic1:3:3, it will create topic1 with 3 partitions and 3 replicas.

Test Setup

We can test the Kafka cluster with a tool named kafkacat, which can act as both a producer and a consumer.
To publish system logs to topic1, we can type:
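For example (replace the address with the Kafka service’s external IP):

    tail -f /var/log/syslog | kafkacat -b <kafka-external-ip>:9092 -t topic1 -P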

To consume the same logs, we can type:
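    kafkacat -b <kafka-external-ip>:9092 -t topic1 -C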

Upgrade Kafka


Blue-Green update

Kafka itself supports rolling upgrades; you can find more detail on this page.

Since we can reach Kafka through any broker of the cluster, we can upgrade one pod at a time. Let’s say our Kafka service is routing traffic to broker1: we can upgrade all the other broker instances first, then change the service to route traffic to any of the upgraded brokers, and finally upgrade broker1.

We can upgrade a broker by replacing the image with the version we want, for example image: wurstmeister/kafka:$NEW_VERSION, then applying the change:
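    kubectl apply -f kafka-cluster.yml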

After applying the same procedure to all the other brokers, we can edit our service by:
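    kubectl edit service kafka-service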

Change id: "1" to another, upgraded broker. Save it and quit. All new connections will be established to the new broker.
At the end, we can upgrade broker1 using the steps above, though doing so will kill existing producer and consumer connections to broker1.

Kubernetes and UDP Routing

Hey guys, Gary here.

With all of the fun stuff happening around Kubernetes and Cloud Foundry, we decided to play around with them! One of the (few) capabilities we don’t get with Cloud Foundry that we can get with Kubernetes is UDP routing.

To learn more about why UDP routing doesn’t work with the containers in Diego runtime (yet, but will), check out ONSI’s proposal for the feature.

UDP routing. Why would you use it? In short, for applications that continually post data where no single datagram is important enough, or would soon be replaced by a more recent copy anyway, UDP packets can be a less intensive alternative to the TCP routing solution. Or, if you’re really hardcore, you could implement your own verification on top of UDP, but that would be a blog post in itself 🙂

Overall, setting up Kubernetes and getting it to expose ports was very simple. If you are reading this without any Kubernetes setup, go check out minikube. Even better, you could set up a cluster on GCP, vSphere, or (gasp) AWS and follow along. The kubectl commands should be about the same either way.

Once you’ve got your instance set up, check out our kube-udp-tennis repo on GitHub. We use this repo to store very simple Python scripts that accept environment variables for ports and will either send or receive messages based on which script we execute. We also baked these into a Dockerfile so Kubernetes can reference an image on Docker Hub.

Before you worry about deploying your own Docker images, know that you are not required to for this example. If you deploy the listener, add the service link, then deploy the server, you will have a working UDP connection! This is because it references our existing images already on Docker Hub. Before I give you the commands, I want to explain what they do.

from /udp_listen:
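First, create the deployment (the file name follows the description below):

    kubectl create -f udplisten-deployment.yaml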

This command reads the udplisten-deployment.yaml file, which gives the specification for our udp-listen application. We spec this out so we can extend it with the udp-listen service.
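Then create the service:

    kubectl create -f udplisten-service.yaml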

This command reads the udplisten-service.yaml file which, once the udplisten deployment is live, allows us to talk to the port through the service functionality in Kubernetes. Here’s the documentation for services.

At this point, we will have the Kubernetes udplisten service running, and we will be ready to deploy our dummy application to talk to it.

from /udp_server:
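(The file name here is assumed to mirror the listener’s.)

    kubectl create -f udpserver-deployment.yaml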

This will deploy the udpserver application, which should ping messages into the udplisten-service; you should see them in the logs of the service’s pod.

The way the udp-server.py application can find and ping the udplisten-service is by leveraging Kubernetes services. Basically, when we start Kubernetes services, we can find those services using environment variables. From the documentation:

For example, the Service "redis-master" which exposes TCP port 6379 and has been allocated cluster IP address 10.0.0.11 produces the following environment variables:
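    REDIS_MASTER_SERVICE_HOST=10.0.0.11
    REDIS_MASTER_SERVICE_PORT=6379
    REDIS_MASTER_PORT=tcp://10.0.0.11:6379
    REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
    REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
    REDIS_MASTER_PORT_6379_TCP_PORT=6379
    REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11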

We therefore look up udplistener_service_host and udplistener_service_port to communicate with the udplistener pods directly. Since we defined UDP as the protocol for traffic into the service, this works right out of the box!

Thanks for reading everyone, as always, reach out to us on twitter @DellEMCDojo, or me specifically @garypwhitejr, or post on the blog to get some feedback. Let us know what you think!

Until next time,

Cloud Foundry, Open Source, The way. #DellEMCDojo

Spreading The Way

Announcing the Dojo in Bangalore!

Emily Kaiser


Head of Marketing @DellEMCDojo #CloudFoundry #OpenSource #TheWay #LeanPractices #DevOps #Empathy

It is with unbelievable excitement that we officially announce the opening of our third global branch with a Dell EMC Dojo in Bangalore! By sharing our DevOps and Extreme Programming culture, including but not limited to the practices of pair programming, test-driven development, and lean product development at scale, we have the deepest confidence that Bangalore is the geographical mecca that will set the tone of Digital Transformation we hope for in the larger company.

So what does this mean beyond the logistical rollercoaster that comes with opening a new office? Well, I’m glad you asked!

We are Hiring! Over the next few weeks, we will be rapidly and qualitatively (only because how else would we operate?) looking for and interviewing developers and product managers interested in becoming a part of this exciting new Dojo from its inception. So, if you know of anyone in the area that may be interested, please point them in the direction of Sarv Saravanan (sarv.saravanan@emc.com) who will be handling the process on the ground.

 

Otherwise, stay tuned for our team’s impending growth, engagement (both here and in India), and overall adventure!

Until next time…