{"__v":52,"_id":"5772ca4ad98d530e00a31e9d","category":{"__v":2,"_id":"56ccf29a431ada1f00e85aae","pages":["56ccf3498c4a331d002c1e1e","56ccf35a8c4a331d002c1e21"],"project":"55c6bec1b9aa4e0d0016c2c3","version":"55c6bec1b9aa4e0d0016c2c6","sync":{"url":"","isSync":false},"reference":false,"createdAt":"2016-02-24T00:00:26.717Z","from_sync":false,"order":4,"slug":"code-labs","title":"Code Labs"},"parentDoc":null,"project":"55c6bec1b9aa4e0d0016c2c3","user":"56e1901aa71e9e200066cdf6","version":{"__v":8,"_id":"55c6bec1b9aa4e0d0016c2c6","project":"55c6bec1b9aa4e0d0016c2c3","createdAt":"2015-08-09T02:45:21.683Z","releaseDate":"2015-08-09T02:45:21.683Z","categories":["55c6bec2b9aa4e0d0016c2c7","56c14bc5826df10d00e82230","56cceed8723ad71d00cae46c","56ccf29a431ada1f00e85aae","56ccf3c28fa8b01b00b82018","56ce1e6ee538330b0021ac5d","56f97e9a4c612020008f2eaf","5734fafd146eb82000597261"],"is_deprecated":false,"is_hidden":false,"is_beta":false,"is_stable":true,"codename":"","version_clean":"1.0.0","version":"1.0"},"updates":["589a1816d775872500985a5d"],"next":{"pages":[],"description":""},"createdAt":"2016-06-28T19:04:42.472Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":3,"body":"In this codelab you will be creating a set of basic pipelines for deploying code from a Github repo to a Kubernetes cluster in the form of a Docker container.\n\nGiven that there are a number of fully-featured docker registries that both store and build images, Spinnaker doesn't build Docker images but instead depends on any registry that does.\n\nThe workflow generally looks like this:\n  1. Push a new tag to your registry. (Existing tag changes are ignored for the sake of traceability - see below).\n  2. Spinnaker sees the tag and deploys the new tag in a fresh Replica Set, and optionally deletes or disables any old Replica Sets running this image.\n  3. The deployment is verified externally.\n  4. Spinnaker now redeploys this image into a new environment (production), and disables the old version the Replica Set was managing\n[block:callout]\n{\n  \"type\": \"info\",\n  \"title\": \"Existing tags are ignored for the sake of traceability\",\n  \"body\": \"The rational is that repeatedly deploying the same tag (<code>:latest</code>, <code>:v2-stable</code>, etc...) reduces visibility into what version of your application is actually serving traffic. This is particularly true when you are deploying new versions several times a day, and different environments (staging, prod, test, dev, etc...) will each have different versions changing at different cadences. \\n\\nOf course, there is nothing wrong with updating a stable tag after pushing a new tag to your registry to maintain a vetted docker image. There are also ways to ensure only a subset of docker tags for a particular image can trigger pipelines, but that'll be discussed in a future codelab.\"\n}\n[/block]\n# 0. Setup\n\nWe need a few things to get this working. \n\n  1. [A Github repo containing the code we want to deploy](http://www.spinnaker.io/v1.0/docs/kubernetes-source-to-prod#section-configuring-github).\n  2. [A Dockerhub repo configured to build on changes to the above repo](http://www.spinnaker.io/v1.0/docs/kubernetes-source-to-prod#section-configuring-dockerhub).\n  3. [A running Kubernetes cluster](http://www.spinnaker.io/v1.0/docs/kubernetes-source-to-prod#section-configuring-kubernetes).\n  4. 
[A running Spinnaker deployment configured with the contents of steps 2 and 3](http://www.spinnaker.io/v1.0/docs/kubernetes-source-to-prod#section-configuring-kubernetes).\n\n## Configuring Github\n\nThe code I'll be deploying is stored [here](https://github.com/lwander/spin-kub-demo). Feel free to fork this into your own account, and make changes/deploy from there. What's needed is a working <code>Dockerfile</code> at the root of the repository that can be used to build some artifact that you want to deploy. If you're completely unfamiliar with Docker, I recommend starting [here](https://docs.docker.com/engine/getstarted/).\n\n## Configuring Dockerhub\n\nCreate a new [repository on Dockerhub](https://hub.docker.com/). [This guide](https://docs.docker.com/docker-hub/builds/) covers how to get your Github repository hooked up to your new Dockerhub repository by creating an automated build that will take code changes and build Docker images for you. In the end your repository should look something like [this](https://hub.docker.com/r/lwander/spin-kub-demo/).\n[block:callout]\n{\n  \"type\": \"info\",\n  \"title\": \"Make sure Github is configured to send events to Dockerhub\",\n  \"body\": \"This can be set under your Github repositories Settings > Webhooks & Services > Services > Docker.\"\n}\n[/block]\n## Configuring Kubernetes\n\nFollow one of the guides [here](http://kubernetes.io/docs/getting-started-guides/). Once you are finished, make sure that you have an up-to-date <code>~/.kube/config</code> file that points to whatever cluster you want to deploy to. Details on kubeconfig files [here](http://kubernetes.io/docs/user-guide/kubeconfig-file/).\n\n## Configuring Spinnaker\n\nWe will be deploying Spinnaker to the same Kubernetes cluster it will be managing. To do so, follow the steps in [this guide](https://github.com/spinnaker/spinnaker/tree/master/experimental/kubernetes/simple), being sure to use [this section](https://github.com/spinnaker/spinnaker/tree/master/experimental/kubernetes/simple/#anything-else-except-for-ecr) to configure your registry.\n\n# 1. Create a Spinnaker Application\n\nSpinnaker applications are groups of resources managed by the underlying cloud provider, and are delineated by the naming convention `<app name>-`. Since Spinnaker and a few other Kubernetes-essential pods are already running in your cluster, your _Applications_ tab will look something like this:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/DGVwp6usRgSjEniNqNvd_applications.png\",\n        \"applications.png\",\n        \"1563\",\n        \"483\",\n        \"#3c546b\",\n        \"\"\n      ],\n      \"caption\": \"Applications data and spin were created by Spinnaker, the rest were created by Kubernetes.\"\n    }\n  ]\n}\n[/block]\nUnder the _Actions_ dropdown select _Create Application_ and fill out the following dialog:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/jgatmNQLSFSbl53dYSGx_appfill.png\",\n        \"appfill.png\",\n        \"674\",\n        \"565\",\n        \"#4a595e\",\n        \"\"\n      ],\n      \"caption\": \"If you've followed the Source to Prod tutorial for the VM based providers, you'll remember that you needed to select \\\"Consider only cloud provider health when executing tasks\\\". 
Since Kubernetes is the sole health provider by definition, selecting this here is redundant, and unnecessary.\"\n    }\n  ]\n}\n[/block]\nYou'll notice that you were dropped in this _Clusters_ tab for your newly created application. In Spinnaker's terminology a _Cluster_ is a collection of _Server Groups_ all running different versions of the same artifact (Docker Image). Furthermore, _Server Groups_ are Kubernetes [Replica Sets](http://kubernetes.io/docs/user-guide/replicasets/), with support for [Deployments](http://kubernetes.io/docs/user-guide/deployments/) incoming.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/ae7v5KS8RGsH6cI6USKL_clusterscreen.png\",\n        \"clusterscreen.png\",\n        \"1568\",\n        \"826\",\n        \"#3b5267\",\n        \"\"\n      ]\n    }\n  ]\n}\n[/block]\n# 2. Create a Load Balancer\n\nWe will be creating a pair of Spinnaker _Load Balancers_ (Kubernetes [Services](http://kubernetes.io/docs/user-guide/services/)) to serve traffic to our _dev_ and _prod_ versions of our app. Navigate to the _Load Balancers_ tab, and select _Create Load Balancer_ in the top right corner of the screen. \n\nFirst we will create the _dev_ _Load Balancer_:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/VMwxEcKzTU6pd1UHZuhv_devlb.png\",\n        \"devlb.png\",\n        \"1001\",\n        \"981\",\n        \"#3f586e\",\n        \"\"\n      ],\n      \"caption\": \"The fields highlighted in red are the ones we need to fill out. \\\"Port\\\" is the port the load balancer will be listening out, and \\\"Target Port\\\" is the port our server is listening on. \\\"Stack\\\" exists for naming purposes.\"\n    }\n  ]\n}\n[/block]\nOnce the _dev_ _Load Balancer_ has been created, we will create an external-facing load balancer. Select _Create Load Balancer_ again:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/x4tGpwX0T3SVlrsysejU_prodlb.png\",\n        \"prodlb.png\",\n        \"996\",\n        \"981\",\n        \"#3e576d\",\n        \"\"\n      ],\n      \"caption\": \"Fill out the fields in red again, changing \\\"Load Balancer IP\\\" to a static IP reserved using your underlying cloudprovider. If you do not have a static IP reserved, you may leave \\\"Load Balancer IP\\\" blank and an ephemeral IP will be assigned.\"\n    }\n  ]\n}\n[/block]\n\n[block:callout]\n{\n  \"type\": \"warning\",\n  \"title\": \"If your cloudprovider (GKE, AWS, etc...) doesn't support Type: LoadBalancer\",\n  \"body\": \".... you may need to change _Type_ to _Node Port_. Read more [here](http://kubernetes.io/docs/user-guide/services/#publishing-services---service-types).\"\n}\n[/block]\nAt this point your _Load Balancers_ tab should look like this:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/qzY1pO6kTZeNQjdQH4J6_loadbalancers.png\",\n        \"loadbalancers.png\",\n        \"1256\",\n        \"704\",\n        \"#3b536a\",\n        \"\"\n      ]\n    }\n  ]\n}\n[/block]\n# 3. Create a Demo Server Group\n\nNext we will create a _Server Group_ as a sanity check to make sure we have set up everything correctly so far. Before doing this, ensure you have at least 1 tag pushed to your Docker registry with the code you want to deploy. 
Now on the _Clusters_ screen, select _Create Server Group/Job_, choose _Server Group_ from the drop down and hit _Next_ to see the following dialog:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/JRMxxbaSQ1mmD5VH8EtD_firstSG1.png\",\n        \"firstSG1.png\",\n        \"1006\",\n        \"986\",\n        \"#fb3404\",\n        \"\"\n      ],\n      \"caption\": \"Make sure that you select the `-dev` load balancer that we selected earlier.\"\n    }\n  ]\n}\n[/block]\nScroll down to the newly created _Container_ subsection, and edit the following fields:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/qaBd9hZQZakHXVxNp57c_firstSG2.png\",\n        \"firstSG2.png\",\n        \"1011\",\n        \"981\",\n        \"#4d8c9b\",\n        \"\"\n      ],\n      \"caption\": \"Under the \\\"Probes\\\" subsection, select \\\"Enable Readiness Probe\\\". This will prevent pipelines and deploys from continuing until the containers pass the supplied check and report themselves as \\\"Healthy\\\".\"\n    }\n  ]\n}\n[/block]\nOnce the create task completes, open a terminal and type <code>$ kubectl proxy --port 7777</code>, and now navigate in your browser to http://localhost:7777/api/v1/proxy/namespaces/default/services/serve-dev:80/ to see if your application is serving traffic correctly.\n[block:callout]\n{\n  \"type\": \"info\",\n  \"title\": \"kubectl proxy\",\n  \"body\": \"`kubectl proxy` forwards traffic to the Kubernetes API server authenticated using your local `~./kube/config` credentials. This way we can peek into what the internal `serve-dev` service is serving on port 80.\"\n}\n[/block]\nOnce you're satisfied, don't close the proxy or browser tab just yet as we'll use that again soon.\n\n# 4. Git to _dev_ Pipeline\n\nNow let's automate the process of creating server groups associated with the _dev_ loadbalancer. Navigate to the _Pipelines_ tab, select _Configure_ > _Create New..._ and then fill out the resulting dialog as follows:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/g7b322SFRlO4fg48LS6k_createdevdeploy.png\",\n        \"createdevdeploy.png\",\n        \"671\",\n        \"286\",\n        \"#395961\",\n        \"\"\n      ]\n    }\n  ]\n}\n[/block]\nIn the resulting page, select _Add Trigger_, and fill the form out as follows:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/ZV0WoYPyTQSwLJ1CvysC_dockertrigger.png\",\n        \"dockertrigger.png\",\n        \"1256\",\n        \"866\",\n        \"#f35029\",\n        \"\"\n      ],\n      \"caption\": \"The \\\"Organization\\\" and \\\"Image\\\" will likely be different, as you have set up your own Docker repository.\\n\\nThe \\\"Tag\\\" can be a regex matching a tag name patterns for valid triggers. 
Leaving it blank serves as \\\"trigger on any new tag\\\".\"\n    }\n  ]\n}\n[/block]\nNow select _Add Stage_ just below _Configuration_, and fill out the form as follows:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/Aq6satGqTCKuRG64LXnu_deploydev.png\",\n        \"deploydev.png\",\n        \"1211\",\n        \"811\",\n        \"#6c8b90\",\n        \"\"\n      ]\n    }\n  ]\n}\n[/block]\nNext, in the _Server Groups_ box select _Add Server Group_, where you will use the already deployed server group as a template like so:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/kUBKsh2RwOT2f2PMieVE_templateselection.png\",\n        \"templateselection.png\",\n        \"1027\",\n        \"318\",\n        \"#3e76d6\",\n        \"\"\n      ],\n      \"caption\": \"Any server group in this app can be used as a template, and vastly simplifies configuration (since most configuration is copied over). This includes replica sets deployed with \\\"kubectl create -f <file.yml>\\\".\"\n    }\n  ]\n}\n[/block]\nIn the resulting dialog, we only need to make one change down in the _Container_ subsection. Select the image that will come from the Docker trigger as shown below:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/myFzaIjTxuAqfemPebZQ_configuredynamic.png\",\n        \"configuredynamic.png\",\n        \"1006\",\n        \"986\",\n        \"#3f566f\",\n        \"\"\n      ]\n    }\n  ]\n}\n[/block]\nLastly, we want to add a stage to destroy the previous server group in this _dev_ cluster. Select _Add Stage_, and fill out the form as follows:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/nx0SldzSmuijKB4N8LDA_destroysg.png\",\n        \"destroysg.png\",\n        \"1216\",\n        \"791\",\n        \"#5a86ae\",\n        \"\"\n      ],\n      \"caption\": \"Make sure to select \\\"default\\\" as the namespace, and \\\"toggle for list of clusters\\\" to make cluster selection easier. \\\"Target\\\" needs to be \\\"Previous Server Group\\\", so whatever was previously deployed is deleted after our newly deployed server group is \\\"Healthy\\\".\"\n    }\n  ]\n}\n[/block]\n#5. Verification Pipeline\n\nBack on the _Pipelines_ dialog, create a new pipeline as before, but call it \"Manual Judgement\". On the first screen, add a Pipeline trigger as shown below:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/RY6xIdMlR1SXv2C4ED86_judgetrigger.png\",\n        \"judgetrigger.png\",\n        \"1221\",\n        \"671\",\n        \"#f3542e\",\n        \"\"\n      ]\n    }\n  ]\n}\n[/block]\nWe will only add a single stage, which will serve to gate access to the _prod_ environment down the line. The configuration is shown here:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/1helP7sSUKjmk1UtRJud_manjudge.png\",\n        \"manjudge.png\",\n        \"1221\",\n        \"786\",\n        \"#5b3a32\",\n        \"\"\n      ]\n    }\n  ]\n}\n[/block]\nKeep in mind, more advanced types of verification can be done here, such as running a Kubernetes batch job to verify that your app is healthy, or calling out to an external Jenkins server. For the sake of simplicity we will keep this as \"manual judgement\".\n\n# 6. 
Promote to _prod_\n\nCreate a new pipeline titled \"Deploy to Prod\", and configure a pipeline trigger as shown here:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/X3INVNHjRzqdFETWuqgv_mantrigger.png\",\n        \"mantrigger.png\",\n        \"1221\",\n        \"666\",\n        \"#f3522c\",\n        \"\"\n      ]\n    }\n  ]\n}\n[/block]\nNow we need to find the deployed image in _dev_ that we previously verified. Add a new stage and configure it as follows:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/NftKIHtUQkCXfA3oQr9j_fimage.png\",\n        \"fimage.png\",\n        \"1221\",\n        \"831\",\n        \"#fb3404\",\n        \"\"\n      ],\n      \"caption\": \"Select the \\\"default\\\" namespace, the \\\"serve-dev\\\" cluster using \\\"Toggle for list of clusters\\\", and make sure to select \\\"Newest\\\" as the \\\"Server Group Selection\\\". \\n\\n\\\"Image Name Pattern\\\" can be used when multiple different images are deployed in a single Kubernetes Pod. Since we only have a single deployed container we can safely use \\\".*\\\" as the pattern.\"\n    }\n  ]\n}\n[/block]\nNow, to deploy that resolved image, add a new stage and configure it as follows:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/FsfZpwIfQ62bm7G5ctZY_proddeploy1.png\",\n        \"proddeploy1.png\",\n        \"1216\",\n        \"831\",\n        \"#6a8c92\",\n        \"\"\n      ]\n    }\n  ]\n}\n[/block]\nSelect _Add Server Group_, and again use the _dev_ deployment as a template:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/C0VZMELhTAqqScv0XbaO_templateselection.png\",\n        \"templateselection.png\",\n        \"1027\",\n        \"318\",\n        \"#3e76d6\",\n        \"\"\n      ]\n    }\n  ]\n}\n[/block]\nThis time we need to make three changes to the template. 
First, change the \"stack\" to represent our _prod_ cluster:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/B4zMXkqsSDy4cKRgW9Lc_changestack.png\",\n        \"changestack.png\",\n        \"1006\",\n        \"981\",\n        \"#c28b76\",\n        \"\"\n      ]\n    }\n  ]\n}\n[/block]\nNext in the load balancers section:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/vPByp3geSgGgogZkbptg_prodlbse.png\",\n        \"prodlbse.png\",\n        \"1006\",\n        \"986\",\n        \"#3e5266\",\n        \"\"\n      ],\n      \"caption\": \"We want to attach this server group to the \\\"prod\\\" load balancer, so make sure to remove the \\\"dev\\\" load balancer with the trash-can icon, and select the \\\"serve-prod\\\" load balancer in its place.\"\n    }\n  ]\n}\n[/block]\nLastly in the container section:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/ASBOqj8Tw2nQOwfyTcyQ_fimingres.png\",\n        \"fimingres.png\",\n        \"1006\",\n        \"986\",\n        \"#3f77d7\",\n        \"\"\n      ]\n    }\n  ]\n}\n[/block]\nNow to prevent all prior versions of this app in production from serving traffic once the deploy finishes, we will add a \"Disable Cluster\" stage like so:\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/HMjKsHhTTqvL3oDdC5K6_disablecluster.png\",\n        \"disablecluster.png\",\n        \"1226\",\n        \"801\",\n        \"#537f87\",\n        \"\"\n      ],\n      \"caption\": \"You will need to manualy enter \\\"serve-prod\\\" as the cluster name since it doesn't exist yet.\"\n    }\n  ]\n}\n[/block]\nSave the pipeline, and we are ready to go!\n\n# 7. Run the Pipeline\n\nPush a new branch to your repo, and wait for the pipeline to run.\n\n<code>NEW_VERSION=v1.0.0\n git checkout -b $NEW_VERSION\n git push origin $NEW_VERSION</code>\n\nOnce the Manual Judgement stage is hit, open http://localhost:7777/api/v1/proxy/namespaces/default/services/serve-dev:80/ to \"verify\" your deployment, and hit _continue_ once you are ready to promote to _prod_.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/cv8j0EeQe2y4jqA6metn_manjudge2.png\",\n        \"manjudge2.png\",\n        \"1211\",\n        \"766\",\n        \"#3d536a\",\n        \"\"\n      ],\n      \"caption\": \"Selecting \\\"stop\\\" will cause the deployment to fail, and the next pipeline won't trigger.\"\n    }\n  ]\n}\n[/block]\n\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/AA5gceLKTnCggsVY6cRx_trigfind.png\",\n        \"trigfind.png\",\n        \"1246\",\n        \"831\",\n        \"#3f5668\",\n        \"\"\n      ],\n      \"caption\": \"Notice that the \\\"Find Image\\\" phase automatically finds the tag that we triggered the first pipeline, as it was the one we verified earlier.\"\n    }\n  ]\n}\n[/block]\n\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/5POkkeL5QvygBdPA1HGQ_linktocluster.png\",\n        \"linktocluster.png\",\n        \"1221\",\n        \"776\",\n        \"#3b576b\",\n        \"\"\n      ],\n      \"caption\": \"To verify, check the public facing cluster's service's endpoint circled above.\"\n    }\n  ]\n}\n[/block]","excerpt":"","slug":"kubernetes-source-to-prod","type":"basic","title":"Kubernetes Source To Prod"}

Kubernetes Source To Prod


In this codelab you will create a set of basic pipelines for deploying code from a Github repo to a Kubernetes cluster in the form of a Docker container.

Given that there are a number of fully-featured Docker registries that both store and build images, Spinnaker doesn't build Docker images itself but instead depends on any registry that does.

The workflow generally looks like this:

  1. Push a new tag to your registry. (Changes to existing tags are ignored for the sake of traceability - see below.)
  2. Spinnaker sees the tag and deploys it in a fresh Replica Set, optionally deleting or disabling any old Replica Sets running this image.
  3. The deployment is verified externally.
  4. Spinnaker redeploys the image into a new environment (production) and disables the old version that the Replica Set was managing.

> **Existing tags are ignored for the sake of traceability**
>
> The rationale is that repeatedly deploying the same tag (`:latest`, `:v2-stable`, etc.) reduces visibility into which version of your application is actually serving traffic. This is particularly true when you are deploying new versions several times a day, and different environments (staging, prod, test, dev, etc.) each have different versions changing at different cadences.
>
> Of course, there is nothing wrong with updating a stable tag after pushing a new tag to your registry in order to maintain a vetted Docker image. There are also ways to ensure that only a subset of Docker tags for a particular image can trigger pipelines, but that will be discussed in a future codelab.

# 0. Setup

We need a few things to get this working:

  1. [A Github repo containing the code we want to deploy](http://www.spinnaker.io/v1.0/docs/kubernetes-source-to-prod#section-configuring-github).
  2. [A Dockerhub repo configured to build on changes to the above repo](http://www.spinnaker.io/v1.0/docs/kubernetes-source-to-prod#section-configuring-dockerhub).
  3. [A running Kubernetes cluster](http://www.spinnaker.io/v1.0/docs/kubernetes-source-to-prod#section-configuring-kubernetes).
  4. [A running Spinnaker deployment configured with the contents of steps 2 and 3](http://www.spinnaker.io/v1.0/docs/kubernetes-source-to-prod#section-configuring-kubernetes).

## Configuring Github

The code I'll be deploying is stored [here](https://github.com/lwander/spin-kub-demo). Feel free to fork it into your own account and make changes/deploy from there. What's needed is a working `Dockerfile` at the root of the repository that can be used to build the artifact you want to deploy. If you're completely unfamiliar with Docker, I recommend starting [here](https://docs.docker.com/engine/getstarted/).
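Before relying on the automated build, it can help to confirm that the `Dockerfile` actually builds. A minimal local sanity check, assuming you have the Docker CLI installed and have cloned your fork (the image name and port below are only examples):

```bash
# Clone your fork of the demo repo (substitute your own account).
git clone https://github.com/lwander/spin-kub-demo.git
cd spin-kub-demo

# Build an image from the Dockerfile at the repository root.
docker build -t spin-kub-demo:local-test .

# Optionally run it locally; replace 8000 with whatever port the app exposes.
docker run --rm -p 8000:8000 spin-kub-demo:local-test
```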
## Configuring Dockerhub

Create a new [repository on Dockerhub](https://hub.docker.com/). [This guide](https://docs.docker.com/docker-hub/builds/) covers how to hook your Github repository up to your new Dockerhub repository by creating an automated build that takes code changes and builds Docker images for you. In the end your repository should look something like [this](https://hub.docker.com/r/lwander/spin-kub-demo/).

> **Make sure Github is configured to send events to Dockerhub**
>
> This can be set under your Github repository's Settings > Webhooks & Services > Services > Docker.

## Configuring Kubernetes

Follow one of the guides [here](http://kubernetes.io/docs/getting-started-guides/). Once you are finished, make sure that you have an up-to-date `~/.kube/config` file that points to whatever cluster you want to deploy to. Details on kubeconfig files are [here](http://kubernetes.io/docs/user-guide/kubeconfig-file/).
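A quick way to confirm that your kubeconfig points at the cluster you intend to deploy to (assuming `kubectl` is installed locally):

```bash
# Show which context kubectl is currently using.
kubectl config current-context

# List all configured contexts, in case you need to switch.
kubectl config get-contexts

# Confirm the credentials work and the cluster is reachable.
kubectl get nodes
```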
## Configuring Spinnaker

We will be deploying Spinnaker to the same Kubernetes cluster it will be managing. To do so, follow the steps in [this guide](https://github.com/spinnaker/spinnaker/tree/master/experimental/kubernetes/simple), being sure to use [this section](https://github.com/spinnaker/spinnaker/tree/master/experimental/kubernetes/simple/#anything-else-except-for-ecr) to configure your registry.

# 1. Create a Spinnaker Application

Spinnaker applications are groups of resources managed by the underlying cloud provider, delineated by the naming convention `<app name>-`. Since Spinnaker and a few other Kubernetes-essential pods are already running in your cluster, your _Applications_ tab will look something like this:

![applications.png](https://files.readme.io/DGVwp6usRgSjEniNqNvd_applications.png)
*The applications "data" and "spin" were created by Spinnaker; the rest were created by Kubernetes.*

Under the _Actions_ dropdown select _Create Application_ and fill out the following dialog:

![appfill.png](https://files.readme.io/jgatmNQLSFSbl53dYSGx_appfill.png)
*If you've followed the Source to Prod tutorial for the VM-based providers, you'll remember that you needed to select "Consider only cloud provider health when executing tasks". Since Kubernetes is the sole health provider by definition, selecting it here is redundant and unnecessary.*

You'll notice that you were dropped into the _Clusters_ tab for your newly created application. In Spinnaker's terminology, a _Cluster_ is a collection of _Server Groups_ all running different versions of the same artifact (Docker image). Furthermore, _Server Groups_ are Kubernetes [Replica Sets](http://kubernetes.io/docs/user-guide/replicasets/), with support for [Deployments](http://kubernetes.io/docs/user-guide/deployments/) incoming.

![clusterscreen.png](https://files.readme.io/ae7v5KS8RGsH6cI6USKL_clusterscreen.png)

# 2. Create a Load Balancer

We will be creating a pair of Spinnaker _Load Balancers_ (Kubernetes [Services](http://kubernetes.io/docs/user-guide/services/)) to serve traffic to the _dev_ and _prod_ versions of our app. Navigate to the _Load Balancers_ tab and select _Create Load Balancer_ in the top right corner of the screen.

First we will create the _dev_ _Load Balancer_:

![devlb.png](https://files.readme.io/VMwxEcKzTU6pd1UHZuhv_devlb.png)
*The fields highlighted in red are the ones we need to fill out. "Port" is the port the load balancer will be listening on, and "Target Port" is the port our server is listening on. "Stack" exists for naming purposes.*

Once the _dev_ _Load Balancer_ has been created, we will create an external-facing load balancer. Select _Create Load Balancer_ again:

![prodlb.png](https://files.readme.io/x4tGpwX0T3SVlrsysejU_prodlb.png)
*Fill out the fields in red again, changing "Load Balancer IP" to a static IP reserved with your underlying cloud provider. If you do not have a static IP reserved, you may leave "Load Balancer IP" blank and an ephemeral IP will be assigned.*

> **If your cloud provider (GKE, AWS, etc.) doesn't support Type: LoadBalancer**
>
> ...you may need to change _Type_ to _Node Port_. Read more [here](http://kubernetes.io/docs/user-guide/services/#publishing-services---service-types).

At this point your _Load Balancers_ tab should look like this:

![loadbalancers.png](https://files.readme.io/qzY1pO6kTZeNQjdQH4J6_loadbalancers.png)
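Because Spinnaker _Load Balancers_ map directly onto Kubernetes Services, you can also confirm they exist from the command line. A quick check, assuming the application is named `serve` with `dev` and `prod` stacks as in the screenshots:

```bash
# Each Spinnaker load balancer is a Kubernetes Service named <app>-<stack>.
kubectl get services --namespace default

# Inspect the dev and prod services individually.
kubectl describe service serve-dev --namespace default
kubectl describe service serve-prod --namespace default
```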
# 3. Create a Demo Server Group

Next we will create a _Server Group_ as a sanity check to make sure we have set up everything correctly so far. Before doing this, ensure you have at least one tag pushed to your Docker registry with the code you want to deploy; one way to get a first tag there is sketched below.
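If your registry doesn't have a tag yet, you can either let the Dockerhub automated build produce one by pushing a branch (as we'll do again in step 7), or build and push one by hand. A sketch of both, assuming you substitute your own Dockerhub account and repository for `lwander/spin-kub-demo`:

```bash
# Option 1: push a branch and let the Dockerhub automated build tag the image.
git checkout -b v0.0.1
git push origin v0.0.1

# Option 2: build and push a tag manually from the repository root.
docker build -t lwander/spin-kub-demo:v0.0.1 .
docker login
docker push lwander/spin-kub-demo:v0.0.1
```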
Now on the _Clusters_ screen, select _Create Server Group/Job_, choose _Server Group_ from the dropdown, and hit _Next_ to see the following dialog:

![firstSG1.png](https://files.readme.io/JRMxxbaSQ1mmD5VH8EtD_firstSG1.png)
*Make sure that you select the `-dev` load balancer that we created earlier.*

Scroll down to the newly created _Container_ subsection, and edit the following fields:

![firstSG2.png](https://files.readme.io/qaBd9hZQZakHXVxNp57c_firstSG2.png)
*Under the "Probes" subsection, select "Enable Readiness Probe". This will prevent pipelines and deploys from continuing until the containers pass the supplied check and report themselves as "Healthy".*

Once the create task completes, open a terminal, run `kubectl proxy --port 7777`, and then navigate in your browser to http://localhost:7777/api/v1/proxy/namespaces/default/services/serve-dev:80/ to see if your application is serving traffic correctly.

> **kubectl proxy**
>
> `kubectl proxy` forwards traffic to the Kubernetes API server, authenticated using your local `~/.kube/config` credentials. This way we can peek at what the internal `serve-dev` service is serving on port 80.

Once you're satisfied, don't close the proxy or the browser tab just yet, as we'll use them again soon.

# 4. Git to _dev_ Pipeline

Now let's automate the process of creating server groups associated with the _dev_ load balancer. Navigate to the _Pipelines_ tab, select _Configure_ > _Create New..._, and fill out the resulting dialog as follows:

![createdevdeploy.png](https://files.readme.io/g7b322SFRlO4fg48LS6k_createdevdeploy.png)

On the resulting page, select _Add Trigger_ and fill the form out as follows:

![dockertrigger.png](https://files.readme.io/ZV0WoYPyTQSwLJ1CvysC_dockertrigger.png)
*The "Organization" and "Image" will likely be different, as you have set up your own Docker repository. The "Tag" can be a regex matching tag name patterns for valid triggers; leaving it blank means "trigger on any new tag".*

Now select _Add Stage_ just below _Configuration_, and fill out the form as follows:

![deploydev.png](https://files.readme.io/Aq6satGqTCKuRG64LXnu_deploydev.png)

Next, in the _Server Groups_ box select _Add Server Group_, where you will use the already deployed server group as a template like so:

![templateselection.png](https://files.readme.io/kUBKsh2RwOT2f2PMieVE_templateselection.png)
*Any server group in this app can be used as a template, which vastly simplifies configuration (since most configuration is copied over). This includes replica sets deployed with `kubectl create -f <file.yml>`.*

In the resulting dialog, we only need to make one change, down in the _Container_ subsection. Select the image that will come from the Docker trigger as shown below:

![configuredynamic.png](https://files.readme.io/myFzaIjTxuAqfemPebZQ_configuredynamic.png)

Lastly, we want to add a stage to destroy the previous server group in this _dev_ cluster. Select _Add Stage_, and fill out the form as follows:

![destroysg.png](https://files.readme.io/nx0SldzSmuijKB4N8LDA_destroysg.png)
*Make sure to select "default" as the namespace, and "Toggle for list of clusters" to make cluster selection easier. "Target" needs to be "Previous Server Group", so that whatever was previously deployed is deleted after our newly deployed server group is "Healthy".*

# 5. Verification Pipeline

Back on the _Pipelines_ page, create a new pipeline as before, but call it "Manual Judgement". On the first screen, add a pipeline trigger as shown below:

![judgetrigger.png](https://files.readme.io/RY6xIdMlR1SXv2C4ED86_judgetrigger.png)

We will only add a single stage, which will serve to gate access to the _prod_ environment down the line. The configuration is shown here:

![manjudge.png](https://files.readme.io/1helP7sSUKjmk1UtRJud_manjudge.png)

Keep in mind that more advanced types of verification can be done here, such as running a Kubernetes batch job to verify that your app is healthy, or calling out to an external Jenkins server. For the sake of simplicity we will keep this as "manual judgement".
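As a point of reference, the kind of check an external verification job might run here is a simple smoke test against the _dev_ endpoint. A minimal sketch, assuming the `kubectl proxy` from step 3 is still running on port 7777:

```bash
#!/usr/bin/env bash
# Smoke test: fail unless the dev service answers with HTTP 200 through the proxy.
set -euo pipefail

URL="http://localhost:7777/api/v1/proxy/namespaces/default/services/serve-dev:80/"
STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$URL")

if [ "$STATUS" -ne 200 ]; then
  echo "Dev deployment check failed: got HTTP $STATUS from $URL" >&2
  exit 1
fi
echo "Dev deployment looks healthy (HTTP 200)."
```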
# 6. Promote to _prod_

Create a new pipeline titled "Deploy to Prod", and configure a pipeline trigger as shown here:

![mantrigger.png](https://files.readme.io/X3INVNHjRzqdFETWuqgv_mantrigger.png)

Now we need to find the deployed image in _dev_ that we previously verified. Add a new stage and configure it as follows:

![fimage.png](https://files.readme.io/NftKIHtUQkCXfA3oQr9j_fimage.png)
*Select the "default" namespace, the "serve-dev" cluster using "Toggle for list of clusters", and make sure to select "Newest" as the "Server Group Selection". "Image Name Pattern" can be used when multiple different images are deployed in a single Kubernetes Pod. Since we only have a single deployed container, we can safely use `.*` as the pattern.*

Now, to deploy that resolved image, add a new stage and configure it as follows:

![proddeploy1.png](https://files.readme.io/FsfZpwIfQ62bm7G5ctZY_proddeploy1.png)

Select _Add Server Group_, and again use the _dev_ deployment as a template:

![templateselection.png](https://files.readme.io/C0VZMELhTAqqScv0XbaO_templateselection.png)

This time we need to make three changes to the template. First, change the "stack" to represent our _prod_ cluster:

![changestack.png](https://files.readme.io/B4zMXkqsSDy4cKRgW9Lc_changestack.png)

Next, in the load balancers section:

![prodlbse.png](https://files.readme.io/vPByp3geSgGgogZkbptg_prodlbse.png)
*We want to attach this server group to the "prod" load balancer, so make sure to remove the "dev" load balancer with the trash-can icon and select the "serve-prod" load balancer in its place.*

Lastly, in the container section:

![fimingres.png](https://files.readme.io/ASBOqj8Tw2nQOwfyTcyQ_fimingres.png)

Now, to prevent all prior versions of this app in production from serving traffic once the deploy finishes, we will add a "Disable Cluster" stage like so:

![disablecluster.png](https://files.readme.io/HMjKsHhTTqvL3oDdC5K6_disablecluster.png)
*You will need to manually enter "serve-prod" as the cluster name, since it doesn't exist yet.*

Save the pipeline, and we are ready to go!

# 7. Run the Pipeline

Push a new branch to your repo, and wait for the pipeline to run:

```bash
NEW_VERSION=v1.0.0
git checkout -b $NEW_VERSION
git push origin $NEW_VERSION
```

Once the Manual Judgement stage is hit, open http://localhost:7777/api/v1/proxy/namespaces/default/services/serve-dev:80/ to "verify" your deployment, and hit _Continue_ once you are ready to promote to _prod_.

![manjudge2.png](https://files.readme.io/cv8j0EeQe2y4jqA6metn_manjudge2.png)
*Selecting "Stop" will cause the deployment to fail, and the next pipeline won't trigger.*

![trigfind.png](https://files.readme.io/AA5gceLKTnCggsVY6cRx_trigfind.png)
*Notice that the "Find Image" stage automatically finds the tag that triggered the first pipeline, as it was the one we verified earlier.*

![linktocluster.png](https://files.readme.io/5POkkeL5QvygBdPA1HGQ_linktocluster.png)
*To verify, check the public-facing cluster's service endpoint circled above.*
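To check the production endpoint from the command line rather than the UI, you can look up the external IP that Kubernetes assigned to the prod service and request it directly. A sketch, assuming your cloud provider supports `Type: LoadBalancer`, the service is named `serve-prod`, and it listens on port 80:

```bash
# Look up the external IP assigned to the prod load balancer.
kubectl get service serve-prod --namespace default

# Once a value appears in the EXTERNAL-IP column, hit it directly
# (substitute the IP printed above).
curl http://<external-ip>:80/
```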