# Configuring Spinnaker

> **Warning:** When writing custom configuration, please create/edit the <code>\*-local.yml</code> file for the subsystem in question. The files Spinnaker ships with may be replaced during an update, so your edits in those files are not safe. Read more [here](doc:custom-configuration#section-about-the-different-yaml-files).

> **Note:** Cloud Foundry does not have a blessed turn-key installation solution yet, so for the time being it's necessary to follow the steps in the Cloud Foundry section here.

Once you've [set up your deployment environment](doc:target-deployment-setup) and [created a Spinnaker instance](doc:creating-a-spinnaker-instance), you may want to further configure Spinnaker. Typically, the default configuration is enough to get you started, but some environments offer further configuration.

After Spinnaker is configured, you can test it out by creating a sample [bake and deploy pipeline](doc:bake-and-deploy-pipeline).

Before reading this section, it's recommended to at least skim the [Custom Configuration](doc:custom-configuration) page, specifically the bit about where to [find the configuration files](doc:custom-configuration#section-types-of-configuration-files).

> **Note:** Almost all of these configuration changes affect the Clouddriver service. Some configuration changes cannot be picked up automatically, so it's usually best to run <code>sudo restart clouddriver</code> if you want to surface any configuration changes.

## Deployment Environments

  * [Amazon Web Services](doc:target-deployment-configuration#section-amazon-web-services)
  * [Azure](doc:target-deployment-configuration#section-azure)
  * Cloud Foundry _(Coming soon)_
  * [Google Cloud Platform](doc:target-deployment-configuration#section-google-cloud-platform)
  * [Kubernetes](doc:target-deployment-configuration#section-kubernetes)
  * [Openstack](doc:target-deployment-configuration#section-openstack)

## Image providers

  * [Docker Registry](doc:target-deployment-configuration#section-docker-registry)

### Amazon Web Services

**Basic Configuration**

Both the [AWS installation](doc:creating-a-spinnaker-instance#section-amazon-web-services) and the [GCP click to deploy](doc:creating-a-spinnaker-instance#section-google-cloud-platform) solution allow you to specify the AWS project and credentials Spinnaker needs to deploy to AWS.

**Advanced Configuration**

_(Coming soon)_

### Azure

If you are targeting a [Kubernetes cluster on Azure](https://aka.ms/azspinkubecreate), skip to the [Kubernetes section](doc:target-deployment-configuration#section-kubernetes) to configure Spinnaker. If you are targeting VM Scale Sets on Azure, follow these steps:

1.  You will need data from the Azure section of the [Spinnaker Cloud Provider Setup](http://www.spinnaker.io/v1.0/docs/target-deployment-setup#section-azure-setup) to complete the steps below.

2.  Navigate to the config directory with <code>cd /opt/spinnaker/config</code>.

3.  Run <code>ls</code>; you should see a file named **spinnaker-local.yml** in the config directory.

4.  You need to edit the spinnaker-local.yml file to add values for a [service principal](https://azure.microsoft.com/en-us/documentation/articles/active-directory-application-objects/). See this [vim tutorial](https://linuxconfig.org/vim-tutorial) for more information on using the vim editor.

5.  Run <code>sudo vim spinnaker-local.yml</code>.

6.  Type **a** to enter append mode in vim.

7.  Navigate to the Azure section and fill in details like the below. **Ensure you leave a space between the colon and the values you enter.** These are the values you collected while creating a service principal for Azure:

```yml
azure:
  enabled:                # SET TO TRUE.
  defaultRegion:          # ENTER REGION HERE. Example = westus
  primaryCredentials:
    name: my-azure-account
    clientId:             # ENTER YOUR CLIENT ID HERE. You can run azure account show from the Azure CLI and obtain this from the username field
    appKey:               # ENTER YOUR APP KEY HERE. This was the password you created when creating the Azure service principal for Spinnaker
    tenantId:             # ENTER YOUR TENANT ID HERE.
    subscriptionId:       # ENTER YOUR SUBSCRIPTION ID HERE.
    objectId:             # ENTER THE OBJECT ID HERE. This is the object id of the Azure service principal you created for Spinnaker. It is only required if you plan on deploying Windows-based images.
    defaultResourceGroup: # ENTER THE DEFAULT RESOURCE GROUP.
    defaultKeyVault:      # ENTER THE DEFAULT KEY VAULT NAME.
```

8.  Commit your changes in vim by pressing the **Esc** key, typing **:wq** (colon wq), and pressing Enter.

9.  Execute <code>sudo restart spinnaker</code> to restart the service.

### Google Cloud Platform

**Basic Configuration**

The [GCP click to deploy](doc:creating-a-spinnaker-instance#section-google-cloud-platform) solution will configure Spinnaker to deploy to the project in which the instance running Spinnaker exists. If you want to tweak another installation to deploy to GCP, pick your favorite editor and open <code>/opt/spinnaker/config/spinnaker-local.yml</code>. Under the <code>providers</code> section, you'll find the <code>google</code> subsection, which we've annotated with comments:

    google:
      enabled:             # set to true to deploy to GCP
      defaultRegion:       # default region to target for deployments
      defaultZone:         # default zone to target for deployments
      primaryCredentials:
        name:              # used to identify this set of credentials later
        project:           # project ID of the project you want to manage
        jsonPath:          # path to the .json service account key for the above project;
                           # must be owned by Spinnaker

Once you've made the necessary changes (probably updating <code>enabled</code>, <code>project</code>, and <code>jsonPath</code>), restart Clouddriver with <code>sudo restart clouddriver</code>.

**Advanced Configuration**

To configure Spinnaker to deploy to multiple GCP projects, append to the <code>google</code> block in <code>/opt/spinnaker/config/spinnaker-local.yml</code>:

```yml
google:
  # ...
  # previous config
  # ...
  accounts:
    - name:                # name of Spinnaker account for project 1
      project:             # name of GCP project 1
      jsonPath:            # path to json key for service account in project 1
    - name:                # name of Spinnaker account for project 2
      project:             # name of GCP project 2
      jsonPath:            # path to json key for service account in project 2
```

To allow an account to index and provision images from other GCP projects, specify the list of image project ids as a yaml list with the key `imageProjects`. For example:

```yml
google:
  # ...
  # previous config
  # ...
  accounts:
    - name:                # name of Spinnaker account for project 1
      project:             # name of GCP project 1
      jsonPath:            # path to json key for service account in project 1
      imageProjects:
        - my-image-project-1
        - my-image-project-2
```

Each project must have granted the IAM role `compute.imageUser` to the service account associated with the json key referenced above, as well as to the 'Google APIs service account' automatically created for the project being managed (it should look similar to `12345678912@cloudservices.gserviceaccount.com`). You can read more about sharing images across GCP projects [here](https://cloud.google.com/compute/docs/images/sharing-images-across-projects).

This is one location in the config hierarchy where this list can be specified: [clouddriver config](https://github.com/spinnaker/spinnaker/blob/master/config/clouddriver.yml#L60).

### Kubernetes

Since Kubernetes deploys containers built outside of Spinnaker, it needs to know where to find the images you want to deploy. Therefore, we need to enable the Docker Registry provider, which is explained below, and in much more detail [here](doc:target-deployment-configuration#section-docker-registry).

**Basic Configuration**

If you're running Spinnaker inside a Kubernetes cluster configured [here](doc:kubernetes), it is already configured to manage Kubernetes. If you are running Spinnaker on a VM, keep reading.

First make sure Spinnaker is up-to-date, as many of these features are only available in more recent builds (late Q1 2016):

    $ sudo apt-get update
    $ sudo apt-get upgrade spinnaker*

Now, make sure that your kubeconfig file, described in the [Kubernetes setup](doc:target-deployment-setup#section-kubernetes-cluster-setup), is placed at <code>/home/spinnaker/.kube/config</code>. Next, pick your favorite editor and open <code>/opt/spinnaker/config/spinnaker-local.yml</code>. Under the <code>providers</code> section, you'll find the <code>kubernetes</code> and <code>dockerRegistry</code> subsections, which we've annotated with comments:

    kubernetes:
      enabled:                  # set to true to deploy to kubernetes
      primaryCredentials:
        name:                   # used to identify this set of credentials later
        dockerRegistryAccount:  # must match the name field of the docker registry credentials below

    dockerRegistry:
      enabled:                  # set to true to enable the docker registry provider
      primaryCredentials:
        name:                   # used to identify this set of credentials later
        address:                # baseUrl of the registry api
        repository:             # library name of the repository you are deploying

You'll need to enable both providers and set sensible values for the other fields (which are likely already filled). Finally, restart Clouddriver with <code>sudo restart clouddriver</code>.

**Advanced Configuration**

There is much to configure with the Kubernetes provider, if you so choose. This is done inside your <code>/opt/spinnaker/config/clouddriver-local.yml</code> file, where you can set fields under the <code>providers</code> section as follows:

    kubernetes:
      enabled:               # boolean indicating whether or not to use kubernetes as a provider
      accounts:              # list of kubernetes accounts
        - name:              # required unique name for this account
          kubeconfigFile:    # optional location of the kube config file
          serviceAccount:    # set to 'true' if this is deployed inside a kubernetes cluster,
                             # and you want to use the deployed pod's service account
          namespaces:        # optional list of namespaces to manage
          context:           # optional context in kubeconfig to use as a user/cluster pair
          dockerRegistries:  # required (at least 1) docker registry accounts used as a source of images
            - accountName:   # required name of the docker registry account
              namespaces:    # optional list of namespaces this docker registry can deploy to

By default, the kubeconfig file at <code>~/.kube/config</code> is used, unless the field <code>kubeconfigFile</code> is specified. The context field is derived from the <code>current-context</code> field in the kubeconfig file, unless it is explicitly provided. If no namespace is found, all existing namespaces will be used and periodically refreshed. Any referenced namespaces that do not exist will be created. Each Spinnaker Kubernetes account is linked to a specific Kubernetes context. If you want to deploy to multiple Kubernetes clusters, you'll need to explicitly list which context each account is linked to:

    kubernetes:
      enabled: true
      accounts:
        - name: dev-cluster
          context: dev-context # found in ~/.kube/config
          dockerRegistries:
            - accountName: my-docker-account # configured under dockerRegistry section
        - name: test-cluster
          context: test-context # found in ~/.kube/config
          dockerRegistries:
            - accountName: my-docker-account # configured under dockerRegistry section

The Docker Registry accounts referred to by the above configuration are also configured inside Clouddriver. The details of that implementation can be found [here](https://github.com/spinnaker/spinnaker/wiki/Docker-Registry-Implementation). The Docker authentication details (username, password, email, endpoint address) are read from each listed Docker Registry account and configured as an [image pull secret](http://kubernetes.io/v1.1/docs/user-guide/images.html#specifying-imagepullsecrets-on-a-pod). The <code>namespaces</code> field of the <code>dockerRegistry</code> subblock defaults to the full list of namespaces, and is used by the Kubernetes provider to determine which namespaces to register the image pull secrets with. Every created pod is given the full list of image pull secrets available to its containing namespace.

The Kubernetes provider will periodically (every 30 seconds) attempt to fetch every provided namespace to see if the cluster is still reachable.

### Docker Registry

**Basic Configuration**

Pick your favorite editor and open <code>/opt/spinnaker/config/spinnaker-local.yml</code>. Under the <code>providers</code> section, you'll find the <code>dockerRegistry</code> subsection, which we've annotated with comments:

    dockerRegistry:
      enabled:                # set to true to enable the docker registry provider
      primaryCredentials:
        name:                 # used to identify this set of credentials later
        address:              # baseUrl of the registry api
        repository:           # library name of the repository you are deploying

The address field depends on your choice of provider:

  * DockerHub: https://index.docker.io
  * Google Container Registry: https://gcr.io, https://us.gcr.io, https://eu.gcr.io, https://asia.gcr.io
  * Quay: https://quay.io
  * Azure Container Registry: https://azurecr.io

**Advanced Configuration**

There is much to configure with the Docker Registry provider, if you so choose. This is done inside your <code>/opt/spinnaker/config/clouddriver-local.yml</code> file, where you can set fields at the top level as follows:

    dockerRegistry:
      enabled:          # boolean indicating whether or not to use docker registries as a provider
      accounts:         # list of docker registry accounts
        - name:         # required unique name for this account
          address:      # required address of the registry. e.g. https://index.docker.io
          username:     # optional username for authenticating with the registry
          password:     # optional password for authenticating with the registry
          passwordFile: # optional fully-qualified path to a file containing the registry password
          email:        # optional email for authenticating with the registry
          repositories: # optional list of repositories. if none are configured, the registry must support the `/_catalog` endpoint
          cacheThreads: # optional (default is 1) number of threads to cache registry contents across
          clientTimeoutMillis: # optional (default is 1 minute) time before the registry connection times out
          paginateSize: # optional (default is 100) number of entries to request from /_catalog at a time

Example:

    dockerRegistry:
      enabled: true
      accounts:
        - name: my-docker-registry-account
          address: https://index.docker.io
          username: myusernamehere
          password: mypasswordhere
          email: myusernamehere@domain.com
          repositories:
            - library/nginx

Clouddriver authenticates itself following the [official v2 Docker registry specification](https://docs.docker.com/registry/spec/auth/token/). The motivation for storing the credentials here, rather than loading them from <code>~/.docker/config</code>, was to allow the user to segregate accounts with access to the same registry but with different sets of permissions.

Additionally, in order for Docker Registry changes to be usable as a trigger in a Spinnaker pipeline, ensure the following is present inside your <code>/opt/spinnaker/config/igor-local.yml</code> file:

    dockerRegistry:
      enabled: true

### Openstack

The Spinnaker Openstack driver is developed and tested against the Openstack Mitaka release. Due to the limitless ways to configure Openstack and its services, here is a list of API versions that are required to be enabled:

 * Keystone (Identity) v3
 * Compute v2
 * LBaaS v2
 * Networking v2
 * Orchestration (Heat)
 * Ceilometer
 * Glance v2

#### Clouddriver Configuration

**Basic Configuration**

To configure Spinnaker to manage Openstack, pick your favorite editor and open <code>/opt/spinnaker/config/spinnaker-local.yml</code>. Under the <code>providers</code> section, you'll find the <code>openstack</code> subsection, which we've annotated with comments:

````yml
  openstack:
    enabled:            # set to true to deploy to Openstack
    primaryCredentials:
      name:             # used to identify this set of credentials later
      authUrl:          # URL of the Keystone v3 API e.g. https://openstack.example.com:5000/v3
      username:         # username to connect to Openstack
      password:         # password for the user
      projectName:      # name of the project to manage
      domainName:       # name of the domain to manage
      regions:          # comma-separated list of regions to manage; may be a subset of all available regions
      insecure:         # skip SSL verification when connecting to Openstack, defaults to false
````

The example `spinnaker-local.yml` file, `default-spinnaker-local.yml`, will read all these options from environment variables if that is how you prefer to configure your environment.

The `insecure` option is needed if the Openstack server uses a self-signed certificate, such as a Devstack setup. It is not recommended to set `insecure` to `true` in production.

Once you've made the necessary changes, restart Clouddriver with <code>sudo restart clouddriver</code>.

**Advanced Configuration**

***Load Balancer Timeouts***

If managing load balancers in your Openstack environment through Spinnaker is failing due to timeouts, you may need to increase the timeout and polling interval. This is more likely to occur in a resource-constrained Openstack environment such as Devstack or another smaller test environment. Openstack requires that the load balancer be in an ACTIVE state for it to create associated relationships (i.e. listeners, pools, monitors). Each modification will cause the load balancer to go into a PENDING state and back to ACTIVE once the change has been made. Spinnaker needs to poll Openstack, blocking further load balancer operations, until the status returns to ACTIVE.

To do this, update the <code>openstack.accounts</code> block in <code>/opt/spinnaker/config/clouddriver-local.yml</code>:

````yml
  openstack:
    #...
    accounts:
      - name:
        authUrl:
        #...
        lbaas:
          pollTimeout: 60   # timeout in seconds for waiting for a load balancer status to return to ACTIVE
          pollInterval: 5   # interval in seconds in which to poll Openstack for the status of a load balancer
````

***Multiple OpenStack Accounts***

To configure Spinnaker to deploy to multiple OpenStack accounts or projects, append to the <code>openstack.accounts</code> block in <code>/opt/spinnaker/config/clouddriver-local.yml</code>:

````yml
  openstack:
    #...
    accounts:
      - name:         # name of Spinnaker account for Openstack account or project 1
        authUrl:      # URL of the Keystone v3 API for Openstack account or project 1
        username:     # username to connect to Openstack account or project 1
        password:     # password for the user for Openstack account or project 1
        projectName:  # name of the project to manage for Openstack account or project 1
        domainName:   # name of the domain to manage for Openstack account or project 1
        regions:      # comma-separated list of regions to manage for Openstack account or project 1
        lbaas:        # load balancer configuration for Openstack account or project 1
          pollTimeout:
          pollInterval:
      - name:         # name of Spinnaker account for Openstack account or project 2
        authUrl:      # URL of the Keystone v3 API for Openstack account or project 2
        username:     # username to connect to Openstack account or project 2
        password:     # password for the user for Openstack account or project 2
        projectName:  # name of the project to manage for Openstack account or project 2
        domainName:   # name of the domain to manage for Openstack account or project 2
        regions:      # comma-separated list of regions to manage for Openstack account or project 2
        lbaas:        # load balancer configuration for Openstack account or project 2
          pollTimeout:
          pollInterval:
````

***Common Userdata***

The OpenStack driver supports the ability to inject common userdata into every launched instance. This is handled via a template file that is located on the Clouddriver server. This template file is token-replaced to provide some specifics about the deployment.

The location of the template file is controlled by the `providers.openstack.primaryCredentials.userDataFile` property in `spinnaker-local.yml`.

The list of replacement tokens is:

token             | description
------------------|------------
`%%account%%`     | the name of the account
`%%accounttype%%` | the accountType of the account
`%%env%%`         | the environment of the account
`%%region%%`      | the deployment region
`%%group%%`       | the name of the server group
`%%autogrp%%`     | the name of the server group
`%%cluster%%`     | the name of the cluster
`%%stack%%`       | the stack component of the cluster name
`%%detail%%`      | the detail component of the cluster name
`%%launchconfig%%`| the name of the launch configuration

Typical usage would be replacing these values into a list of environment variables, and using those variables to customize behavior based on the account/env/stack/etc.

Example template file:

````bash
#!/bin/bash
CLOUD_ACCOUNT="%%account%%"
CLOUD_ACCOUNT_TYPE="%%accounttype%%"
CLOUD_ENVIRONMENT="%%env%%"
CLOUD_SERVER_GROUP="%%group%%"
CLOUD_CLUSTER="%%cluster%%"
CLOUD_STACK="%%stack%%"
CLOUD_DETAIL="%%detail%%"
CLOUD_REGION="%%region%%"
````

If the server group `udf-example-cluster-v001` was deployed using this template in the account `main`, accountType `streaming`, environment `prod`, in the `east` region, the resulting user data would look like:

````bash
#!/bin/bash
CLOUD_ACCOUNT="main"
CLOUD_ACCOUNT_TYPE="streaming"
CLOUD_ENVIRONMENT="prod"
CLOUD_SERVER_GROUP="udf-example-cluster-v001"
CLOUD_CLUSTER="udf-example-cluster"
CLOUD_STACK="example"
CLOUD_DETAIL="cluster"
CLOUD_REGION="east"
````

#### Rosco Configuration

To configure Spinnaker to bake Openstack images, pick your favorite text editor and create or open <code>/opt/spinnaker/rosco-local.yml</code>. Add or update the <code>openstack</code> section:

````yml
  openstack:
    enabled:            # enables ability to bake Openstack images
    bakeryDefaults:
      username:         # username to authenticate to Openstack with
      password:         # password for the user
      authUrl:          # URL to the Openstack identity service
      domainName:       # domain name you are authenticating with
      networkId:        # network UUID to attach to the instance while baking the image
      floatingIpPool:   # name of the floating IP pool to use to allocate a floating IP
      securityGroups:   # list of security groups by name to add to the instance while baking the image
      projectName:      # project name you are authenticating with
      insecure:         # skip SSL verification when connecting to Openstack, defaults to false
      templateFile:     # packer template file, defaults to openstack.json
      baseImages:       # list of base images that can be used to bake an image
      - baseImage:
          id: vivid
          shortDescription: 15.04
          detailedDescription: Ubuntu Vivid Vervet v15.04
          packageType: deb      # type of packages to be installed, either deb or rpm
        virtualizationSettings: # list of different regions with virtualization settings for the base image
        - region:               # a region to bake the image in
          instanceType:         # the instance type to use for baking the image e.g. smem-2vcpu, mmem-2vcpu
          sourceImageId:        # a UUID of the source image to use when baking the image
          sshUserName: ubuntu   # username for sshing into the instance for baking the image
        - region:               # another region and/or virtualization settings to choose
          instanceType:
          sourceImageId:
          sshUserName:
````
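As an illustration of the context-defaulting behavior described in the Kubernetes advanced configuration (an account uses its explicit <code>context</code>, otherwise the kubeconfig's <code>current-context</code>), here is a minimal sketch. The helper name is hypothetical; this is not Clouddriver's actual code:

```python
# Hypothetical sketch: resolve the Kubernetes context for a Spinnaker
# account. An explicit `context` on the account wins; otherwise fall
# back to the kubeconfig's `current-context` field.
def resolve_context(account: dict, kubeconfig: dict) -> str:
    return account.get("context") or kubeconfig.get("current-context", "")

kubeconfig = {"current-context": "dev-context"}
# No explicit context: falls back to current-context.
print(resolve_context({"name": "dev-cluster"}, kubeconfig))
# Explicit context: used as-is, enabling multi-cluster deployments.
print(resolve_context({"name": "test-cluster", "context": "test-context"}, kubeconfig))
```

This is why two accounts can target two clusters from one kubeconfig: each account pins its own context explicitly.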
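To make the Docker Registry <code>repositories</code>/<code>paginateSize</code> relationship concrete: when no repositories are listed, the registry is enumerated via the Docker Registry v2 API's paginated <code>/v2/_catalog</code> endpoint (<code>n</code> entries per page, continuing after <code>last</code>). The sketch below only builds the request URL; the helper name is hypothetical, not Clouddriver's code:

```python
# Hypothetical sketch: build a Docker Registry v2 catalog URL from the
# account's `address` and `paginateSize` fields described above.
def catalog_url(address: str, paginate_size: int = 100, last: str = None) -> str:
    url = address.rstrip("/") + "/v2/_catalog?n=" + str(paginate_size)
    if last:  # continue the listing after the last repository of the previous page
        url += "&last=" + last
    return url

print(catalog_url("https://index.docker.io"))
# → https://index.docker.io/v2/_catalog?n=100
```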
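The <code>pollTimeout</code>/<code>pollInterval</code> semantics from the Load Balancer Timeouts section can be sketched as a simple wait loop. This is an illustrative stand-in (the <code>get_status</code> callable substitutes for a real OpenStack API call), not the driver's actual implementation:

```python
import time

# Hypothetical sketch of the load-balancer polling described above:
# block until the status returns to ACTIVE, checking every
# poll_interval seconds, and give up after poll_timeout seconds.
def wait_for_active(get_status, poll_timeout=60, poll_interval=5, sleep=time.sleep):
    waited = 0
    while waited <= poll_timeout:
        if get_status() == "ACTIVE":
            return True
        sleep(poll_interval)
        waited += poll_interval
    return False  # timed out while the load balancer was still PENDING

# Fake status source: PENDING twice, then ACTIVE.
statuses = iter(["PENDING_UPDATE", "PENDING_UPDATE", "ACTIVE"])
print(wait_for_active(lambda: next(statuses), sleep=lambda s: None))
```

Raising <code>pollTimeout</code> simply lets this loop run longer before giving up, which is why it helps on slow Devstack-style environments.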
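The Common Userdata token replacement amounts to straightforward string substitution of <code>%%token%%</code> markers. A minimal sketch (hypothetical helper, not Clouddriver's code):

```python
# Hypothetical sketch of the OpenStack common-userdata token replacement
# described above: each %%token%% marker is replaced with its value.
def replace_tokens(template: str, values: dict) -> str:
    for token, value in values.items():
        template = template.replace("%%" + token + "%%", value)
    return template

template = '#!/bin/bash\nCLOUD_ACCOUNT="%%account%%"\nCLOUD_REGION="%%region%%"\n'
userdata = replace_tokens(template, {"account": "main", "region": "east"})
print(userdata)
```

Unknown tokens are left untouched by this sketch, so a template can be shared across drivers that support different token sets.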

Configuring Spinnaker


> **Warning**: When writing custom configuration, please create/edit the <code>\*-local.yml</code> file for the subsystem in question. The files Spinnaker ships with may be replaced during an update, so your edits in those files are not safe. Read more [here](doc:custom-configuration#section-about-the-different-yaml-files). > **Note:** Cloud Foundry does not have a blessed turn-key installation solution yet, so for the time being it's necessary follow the steps in the Cloud Foundry section here Once you've [setup your deployment environment](doc:target-deployment-setup), and [created a spinnaker instance](doc:creating-a-spinnaker-instance), you may want to further configure Spinnaker. Typically, the default configuration is enough to get you started, but some environments offer further configuration. After Spinnaker is configured, you can test it out by creating a sample [bake and deploy pipeline](doc:bake-and-deploy-pipeline). Before reading this section, it's recommended to at least skim the [Custom Configuration](doc:custom-configuration) page, specifically the bit about where to [find the configuration files](doc:custom-configuration#section-types-of-configuration-files). > **Note:** Almost all of these configuration changes affect the Clouddriver service. Some configuration changes cannot be picked up automatically, so it's usually best to run <code>sudo restart clouddriver</code> if you want to surface any configuration changes. 
## Deployment Environments * [Amazon Web Services](doc:target-deployment-configuration#section-amazon-web-services) * [Azure](doc:target-deployment-configuration#section-azure) * Cloud Foundry _(Coming soon)_ * [Google Cloud Platform](doc:target-deployment-configuration#section-google-cloud-platform) * [Kubernetes](doc:target-deployment-configuration#section-kubernetes) * [Openstack](doc:target-deployment-configuration#section-openstack) ## Image providers * [Docker Registry](doc:target-deployment-configuration#section-docker-registry) ### Amazon Web Services **Basic Configuration** Both the [AWS installation](doc:creating-a-spinnaker-instance#section-amazon-web-services) and the [GCP click to deploy](doc:creating-a-spinnaker-instance#section-google-cloud-platform) solution allow you to specify your AWS project and credentials needed by Spinnaker to deploy to AWS. **Advanced Configuration** _(Coming soon)_ ### Azure If you are targeting a [Kubernetes cluster on Azure](https://aka.ms/azspinkubecreate), skip to the [Kubernetes section](doc:target-deployment-configuration#section-kubernetes) to configure Spinnaker. If you are targeting VM Scale Sets on Azure, follow these steps: 1. You will need data from the Azure section on the [Spinnaker Cloud Provider Setup](http://www.spinnaker.io/v1.0/docs/target-deployment-setup#section-azure-setup) to complete the steps below. 2. Navigate to the directory by executing the command <code>cd /opt/spinnaker/config</code> 3. Execute the command **ls** and you should see a file named **spinnaker-local.yml** in the config directory. 4. We need to edit the spinnaker-local.yml file to add values for a [service principal](https://azure.microsoft.com/en-us/documentation/articles/active-directory-application-objects/). See [vim tutorial](https://linuxconfig.org/vim-tutorial) for more information on using the vim editor. 5. Run the command <code>sudo vim spinnaker-local.yml</code> 6. Type **a** to enter append mode in vim 7. 
Navigate to the Azure section and fill in details like the below. **Ensure you leave a space between the colon and the values you enter**. These values were the ones you collected while creating a service principal for Azure: ```yml azure: enabled: # SET TO TRUE. defaultRegion: # ENTER REGION HERE. Example = westus primaryCredentials: name: my-azure-account clientId: # ENTER YOUR CLIENT ID HERE. You can run azure account show from the Azure CLI and obtain this from the username field appKey: # ENTER YOUR APP KEY HERE. This was the password you created when creating the Azure service principal for Spinnaker tenantId: # ENTER YOUR TENANT ID HERE. subscriptionId: # ENTER YOUR SUBSCRIPTION ID HERE. objectId: # ENTER THE OBJECT ID HERE. This is the object id of the Azure service principal you created for Spinnaker. This is only required if you plan on deploying Windows-based images. defaultResourceGroup: # ENTER THE DEFAULT RESOURCE GROUP. defaultKeyVault: # ENTER THE DEFAULT KEY VAULT NAME. ``` 8. Commit your changes with vim by clicking the **Esc** key, then type **:wq (colon wq)** and finally click enter. 9. Execute <code>sudo restart spinnaker</code> to restart the service. ### Google Cloud Platform **Basic Configuration** The [GCP click to deploy](doc:creating-a-spinnaker-instance#section-google-cloud-platform) solution will configure Spinnaker to be able to deploy to the project the instance running Spinnaker exists in. If you want to tweak another installation to deploy to GCP, pick your favorite editor and open <code>/opt/spinnaker/config/spinnaker-local.yml</code>. 
Under the <code>providers</code> section, you'll find the <code>google</code> subsection, which we've annotated with comments:

```yml
google:
  enabled: # set to true to deploy to GCP
  defaultRegion: # default region to target for deployments
  defaultZone: # default zone to target for deployments
  primaryCredentials:
    name: # used to identify this set of credentials later
    project: # project ID of the project you want to manage
    jsonPath: # path to the .json service account key for the above project;
              # must be owned by Spinnaker
```

Once you've made the necessary changes (probably updating <code>enabled</code>, <code>project</code>, and <code>jsonPath</code>), restart Clouddriver with <code>sudo restart clouddriver</code>.

**Advanced Configuration**

To configure Spinnaker to deploy to multiple GCP projects, append to the <code>google</code> block in <code>/opt/spinnaker/config/spinnaker-local.yml</code>:

```yml
google:
  # ...
  # previous config
  # ...
  accounts:
    - name: # name of Spinnaker account for project 1
      project: # name of GCP project 1
      jsonPath: # path to json key for service account in project 1
    - name: # name of Spinnaker account for project 2
      project: # name of GCP project 2
      jsonPath: # path to json key for service account in project 2
```

To allow an account to index and provision images from other GCP projects, specify the list of image project ids as a yaml list with the key `imageProjects`. For example:

```yml
google:
  # ...
  # previous config
  # ...
  accounts:
    - name: # name of Spinnaker account for project 1
      project: # name of GCP project 1
      jsonPath: # path to json key for service account in project 1
      imageProjects:
        - my-image-project-1
        - my-image-project-2
```

Each image project must grant the IAM role `compute.imageUser` to the service account associated with the json key referenced above, as well as to the 'Google APIs service account' automatically created for the project being managed (it should look similar to `12345678912@cloudservices.gserviceaccount.com`).
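As a concrete sketch of the multi-project setup above (the account names, project IDs, and key paths here are all hypothetical placeholders), a filled-in block might look like:

```yml
google:
  enabled: true
  defaultRegion: us-central1
  defaultZone: us-central1-f
  primaryCredentials:
    name: my-gcp-account
    project: my-main-project                # hypothetical project ID
    jsonPath: /home/spinnaker/.gcp/my-main-project.json
  accounts:
    - name: staging-account                 # hypothetical Spinnaker account name
      project: my-staging-project
      jsonPath: /home/spinnaker/.gcp/my-staging-project.json
    - name: prod-account
      project: my-prod-project
      jsonPath: /home/spinnaker/.gcp/my-prod-project.json
      imageProjects:
        - my-shared-images                  # hypothetical project that has granted compute.imageUser
```

After editing, restart Clouddriver with <code>sudo restart clouddriver</code> so the new accounts are picked up.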
You can read more about sharing images across GCP projects [here](https://cloud.google.com/compute/docs/images/sharing-images-across-projects). The [clouddriver config](https://github.com/spinnaker/spinnaker/blob/master/config/clouddriver.yml#L60) shows one location in the config hierarchy where this list can be specified.

### Kubernetes

Since Kubernetes deploys containers that are built outside of Spinnaker, it needs to know where to find the images you want to deploy. Therefore, we need to enable the Docker Registry provider, which is explained briefly below and in much more detail [here](doc:target-deployment-configuration#section-docker-registry).

**Basic Configuration**

If you're running Spinnaker inside a Kubernetes cluster as configured [here](doc:kubernetes), it is already set up to manage Kubernetes. If you are running Spinnaker on a VM, keep reading.

First, make sure Spinnaker is up-to-date, as many of these features are only available in more recent builds (late Q1 2016):

```
$ sudo apt-get update
$ sudo apt-get upgrade spinnaker*
```

Now, make sure that your kubeconfig file, described in the [Kubernetes setup](doc:target-deployment-setup#section-kubernetes-cluster-setup), is placed at <code>/home/spinnaker/.kube/config</code>. Next, pick your favorite editor and open <code>/opt/spinnaker/config/spinnaker-local.yml</code>.
Under the <code>providers</code> section, you'll find the <code>kubernetes</code> and <code>dockerRegistry</code> subsections, which we've annotated with comments:

```yml
kubernetes:
  enabled: # set to true to deploy to kubernetes
  primaryCredentials:
    name: # used to identify this set of credentials later
    dockerRegistryAccount: # must match the name field of the docker registry credentials below

dockerRegistry:
  enabled: # set to true to enable the docker registry provider
  primaryCredentials:
    name: # used to identify this set of credentials later
    address: # baseUrl of the registry api
    repository: # library name of the repository you are deploying
```

You'll need to enable both providers and set sensible values for the other fields (which are likely already filled). Finally, restart Clouddriver with <code>sudo restart clouddriver</code>.

**Advanced Configuration**

There is much more to configure with the Kubernetes provider, if you so choose. This is done inside your <code>/opt/spinnaker/config/clouddriver-local.yml</code> file, where you can set fields under the <code>providers</code> section as follows:

```yml
kubernetes:
  enabled: # boolean indicating whether or not to use kubernetes as a provider
  accounts: # list of kubernetes accounts
    - name: # required unique name for this account
      kubeconfigFile: # optional location of the kube config file
      serviceAccount: # set to 'true' if this is deployed inside a kubernetes cluster,
                      # and you want to use the deployed pod's service account
      namespaces: # optional list of namespaces to manage
      context: # optional context in kubeconfig to use as a user/cluster pair
      dockerRegistries: # required (at least 1) docker registry accounts used as a source of images
        - accountName: # required name of the docker registry account
          namespaces: # optional list of namespaces this docker registry can deploy to
```

By default, the kubeconfig file at <code>~/.kube/config</code> is used, unless the field <code>kubeconfigFile</code> is specified.
The context field is derived from the <code>current-context</code> field in the kubeconfig file, unless it is explicitly provided. If no namespaces are listed, all existing namespaces will be used and periodically refreshed. Any referenced namespaces that do not exist will be created.

Each Spinnaker Kubernetes account is linked to a specific Kubernetes context. If you want to deploy to multiple Kubernetes clusters, you'll need to explicitly list which context each account is linked to:

```yml
kubernetes:
  enabled: true
  accounts:
    - name: dev-cluster
      context: dev-context # found in ~/.kube/config
      dockerRegistries:
        - accountName: my-docker-account # configured under dockerRegistry section
    - name: test-cluster
      context: test-context # found in ~/.kube/config
      dockerRegistries:
        - accountName: my-docker-account # configured under dockerRegistry section
```

The Docker Registry accounts referred to by the above configuration are also configured inside Clouddriver. The details of that implementation can be found [here](https://github.com/spinnaker/spinnaker/wiki/Docker-Registry-Implementation). The Docker authentication details (username, password, email, endpoint address) are read from each listed Docker Registry account and configured as an [image pull secret](http://kubernetes.io/v1.1/docs/user-guide/images.html#specifying-imagepullsecrets-on-a-pod). The <code>namespaces</code> field of the <code>dockerRegistry</code> subblock defaults to the full list of namespaces, and is used by the Kubernetes provider to determine which namespaces to register the image pull secrets with. Every created pod is given the full list of image pull secrets available to its containing namespace. The Kubernetes provider will periodically (every 30 seconds) attempt to fetch every provided namespace to see if the cluster is still reachable.

### Docker Registry

**Basic Configuration**

Pick your favorite editor and open <code>/opt/spinnaker/config/spinnaker-local.yml</code>.
Under the <code>providers</code> section, you'll find the <code>dockerRegistry</code> subsection, which we've annotated with comments:

```yml
dockerRegistry:
  enabled: # set to true to enable the docker registry provider
  primaryCredentials:
    name: # used to identify this set of credentials later
    address: # baseUrl of the registry api
    repository: # library name of the repository you are deploying
```

The address field depends on your choice of provider:

* DockerHub: https://index.docker.io
* Google Container Registry: https://gcr.io, https://us.gcr.io, https://eu.gcr.io, https://asia.gcr.io
* Quay: https://quay.io
* Azure Container Registry: https://&lt;registry-name&gt;.azurecr.io

**Advanced Configuration**

There is much more to configure with the Docker Registry provider, if you so choose. This is done inside your <code>/opt/spinnaker/config/clouddriver-local.yml</code> file, where you can set fields at the top level as follows:

```yml
dockerRegistry:
  enabled: # boolean indicating whether or not to use docker registries as a provider
  accounts: # list of docker registry accounts
    - name: # required unique name for this account
      address: # required address of the registry. e.g. https://index.docker.io
      username: # optional username for authenticating with the registry
      password: # optional password for authenticating with the registry
      passwordFile: # optional fully-qualified path to a file containing the registry password
      email: # optional email for authenticating with the registry
      repositories: # optional list of repositories; if none are configured,
                    # the registry must support the `/_catalog` endpoint
      cacheThreads: # optional (default is 1) number of threads to cache registry contents across
      clientTimeoutMillis: # optional (default is 1 minute) time before the registry connection times out
      paginateSize: # optional (default is 100) number of entries to request from /_catalog at a time
```

Example:

```yml
dockerRegistry:
  enabled: true
  accounts:
    - name: my-docker-registry-account
      address: https://index.docker.io
      username: myusernamehere
      password: mypasswordhere
      email: myusernamehere@domain.com
      repositories:
        - library/nginx
```

Clouddriver authenticates itself following the [official v2 Docker registry specification](https://docs.docker.com/registry/spec/auth/token/). The motivation for storing the credentials here, rather than loading them from <code>~/.docker/config</code>, was to allow the user to segregate accounts that have access to the same registry but carry different sets of permissions.

Additionally, in order for Docker Registry changes to be usable as a trigger in a Spinnaker pipeline, ensure the following is present inside your <code>/opt/spinnaker/config/igor-local.yml</code> file:

```yml
dockerRegistry:
  enabled: true
```

### Openstack

The Spinnaker Openstack driver is developed and tested against the Openstack Mitaka release. Because there are countless ways to configure Openstack and its services, here is the list of API versions that are required to be enabled:

* Keystone (Identity) v3
* Compute v2
* LBaaS v2
* Networking v2
* Orchestration (Heat)
* Ceilometer
* Glance v2

#### Clouddriver Configuration

**Basic Configuration**

To configure Spinnaker to manage Openstack, pick your favorite editor and open <code>/opt/spinnaker/config/spinnaker-local.yml</code>.
Under the <code>providers</code> section, you'll find the <code>openstack</code> subsection, which we've annotated with comments:

````yml
openstack:
  enabled: # set to true to deploy to Openstack
  primaryCredentials:
    name: # used to identify this set of credentials later
    authUrl: # URL of the Keystone v3 API e.g. https://openstack.example.com:5000/v3
    username: # username to connect to Openstack
    password: # password for the user
    projectName: # name of the project to manage
    domainName: # name of the domain to manage
    regions: # comma separated list of regions to manage; may be a subset of all available regions
    insecure: # skip SSL verification when connecting to Openstack, defaults to false
````

The example `spinnaker-local.yml` file, `default-spinnaker-local.yml`, will read all of these options from environment variables if that is how you prefer to configure your environment. The `insecure` option is needed if the Openstack server uses a self-signed certificate, such as in a Devstack setup; setting `insecure` to `true` in production is not recommended. Once you've made the necessary changes, restart Clouddriver with <code>sudo restart clouddriver</code>.

**Advanced Configuration**

***Load Balancer Timeouts***

If managing load balancers in your Openstack environment through Spinnaker is failing due to timeouts, you may need to increase the timeout and polling interval. This is more likely to occur in a resource-constrained Openstack environment such as Devstack or another smaller test environment. Openstack requires that the load balancer be in an ACTIVE state before it will create associated relationships (i.e. listeners, pools, monitors). Each modification will cause the load balancer to go into a PENDING state and back to ACTIVE once the change has been made. Spinnaker needs to poll Openstack, blocking further load balancer operations, until the status returns to ACTIVE.
To do this, update the <code>openstack.accounts</code> block in <code>/opt/spinnaker/config/clouddriver-local.yml</code>:

````yml
openstack:
  #...
  accounts:
    - name:
      authUrl:
      #...
      lbaas:
        pollTimeout: 60 # timeout in seconds to wait for a load balancer status to return to ACTIVE
        pollInterval: 5 # interval in seconds at which to poll Openstack for the status of a load balancer
````

***Multiple OpenStack Accounts***

To configure Spinnaker to deploy to multiple OpenStack accounts or projects, append to the <code>openstack.accounts</code> block in <code>/opt/spinnaker/config/clouddriver-local.yml</code>:

````yml
openstack:
  #...
  accounts:
    - name: # name of Spinnaker account for Openstack account or project 1
      authUrl: # URL of the Keystone v3 API for Openstack account or project 1
      username: # username to connect to Openstack account or project 1
      password: # password for the user for Openstack account or project 1
      projectName: # name of the project to manage for Openstack account or project 1
      domainName: # name of the domain to manage for Openstack account or project 1
      regions: # comma separated list of regions to manage for Openstack account or project 1
      lbaas: # load balancer configuration for Openstack account or project 1
        pollTimeout:
        pollInterval:
    - name: # name of Spinnaker account for Openstack account or project 2
      authUrl: # URL of the Keystone v3 API for Openstack account or project 2
      username: # username to connect to Openstack account or project 2
      password: # password for the user for Openstack account or project 2
      projectName: # name of the project to manage for Openstack account or project 2
      domainName: # name of the domain to manage for Openstack account or project 2
      regions: # comma separated list of regions to manage for Openstack account or project 2
      lbaas: # load balancer configuration for Openstack account or project 2
        pollTimeout:
        pollInterval:
````

***Common Userdata***

The OpenStack driver supports injecting common userdata into every launched instance. This is handled via a template file located on the Clouddriver server. The template file is token-replaced to provide some specifics about the deployment. The location of the template file is controlled by the `providers.openstack.primaryCredentials.userDataFile` property in `spinnaker-local.yml`. The list of replacement tokens is:

token | description
------------------|------------
`%%account%%` | the name of the account
`%%accounttype%%` | the accountType of the account
`%%env%%` | the environment of the account
`%%region%%` | the deployment region
`%%group%%` | the name of the server group
`%%autogrp%%` | the name of the server group
`%%cluster%%` | the name of the cluster
`%%stack%%` | the stack component of the cluster name
`%%detail%%` | the detail component of the cluster name
`%%launchconfig%%`| the name of the launch configuration

Typical usage is to replace these tokens into a list of environment variables, and then use those variables to customize behavior based on the account/env/stack/etc. Example template file:

````bash
#!/bin/bash
CLOUD_ACCOUNT="%%account%%"
CLOUD_ACCOUNT_TYPE="%%accounttype%%"
CLOUD_ENVIRONMENT="%%env%%"
CLOUD_SERVER_GROUP="%%group%%"
CLOUD_CLUSTER="%%cluster%%"
CLOUD_STACK="%%stack%%"
CLOUD_DETAIL="%%detail%%"
CLOUD_REGION="%%region%%"
````

If the server group `udf-example-cluster-v001` was deployed using this template in the account `main`, with accountType `streaming` and environment `prod`, in the `east` region, the resulting user data would look like:

````bash
#!/bin/bash
CLOUD_ACCOUNT="main"
CLOUD_ACCOUNT_TYPE="streaming"
CLOUD_ENVIRONMENT="prod"
CLOUD_SERVER_GROUP="udf-example-cluster-v001"
CLOUD_CLUSTER="udf-example-cluster"
CLOUD_STACK="example"
CLOUD_DETAIL="cluster"
CLOUD_REGION="east"
````

#### Rosco Configuration

To configure Spinnaker to bake Openstack images, pick your favorite text editor and create or open <code>/opt/spinnaker/rosco-local.yml</code>. Add or update the <code>openstack</code> section.
````yml
openstack:
  enabled: # enables the ability to bake Openstack images
  bakeryDefaults:
    username: # username to authenticate to Openstack with
    password: # password for the user
    authUrl: # URL to the Openstack identity service
    domainName: # domain name you are authenticating with
    networkId: # network UUID to attach to the instance while baking the image
    floatingIpPool: # name of the floating IP pool to use to allocate a floating IP
    securityGroups: # list of security groups, by name, to add to the instance while baking the image
    projectName: # project name you are authenticating with
    insecure: # skip SSL verification when connecting to Openstack, defaults to false
    templateFile: # packer template file, defaults to openstack.json
    baseImages: # list of base images that can be used to bake an image
      - baseImage:
          id: vivid
          shortDescription: 15.04
          detailedDescription: Ubuntu Vivid Vervet v15.04
          packageType: deb # type of packages to be installed, either deb or rpm
        virtualizationSettings: # list of regions with virtualization settings for the base image
          - region: # a region to bake the image in
            instanceType: # the instance type to use for baking the image, e.g. smem-2vcpu, mmem-2vcpu
            sourceImageId: # a UUID of the source image to use when baking the image
            sshUserName: ubuntu # username for sshing into the instance while baking the image
          - region: # another region and/or virtualization settings to choose
            instanceType:
            sourceImageId:
            sshUserName:
````
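As a concrete sketch of the section above (the auth URL, domain, project, network UUID, region, instance type, and source image UUID are all hypothetical placeholders, not values from a real deployment):

````yml
openstack:
  enabled: true
  bakeryDefaults:
    username: spinnaker
    password: mypasswordhere                # hypothetical; consider an environment variable instead
    authUrl: https://openstack.example.com:5000/v3
    domainName: Default
    networkId: 00000000-0000-0000-0000-000000000000   # hypothetical network UUID
    floatingIpPool: public
    securityGroups: default
    projectName: my-project                 # hypothetical project name
    insecure: false
    templateFile: openstack.json
    baseImages:
      - baseImage:
          id: vivid
          shortDescription: 15.04
          detailedDescription: Ubuntu Vivid Vervet v15.04
          packageType: deb
        virtualizationSettings:
          - region: east                    # hypothetical region
            instanceType: smem-2vcpu
            sourceImageId: 11111111-1111-1111-1111-111111111111  # hypothetical image UUID
            sshUserName: ubuntu
````

After saving, restart Rosco (mirroring the Clouddriver restarts above, e.g. <code>sudo restart rosco</code> on upstart-based installs) so the bakery picks up the new configuration.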