The contents of this page refer to alpha features in Spinnaker 1.6.
This means we are working on their stability and usability, as well as possibly adding or changing features. Expect rough edges, and file issues as needed.
The Spinnaker Kubernetes V2 provider fully supports manifest-based deployments. Kubernetes provider V1 is still supported.
For Kubernetes V2, a Spinnaker Account maps to a credential that can authenticate against your Kubernetes Cluster. Unlike with the V1 provider, in V2 the Account does not require any Docker Registry Accounts.
The Kubernetes provider has two requirements:

A kubeconfig file. The kubeconfig allows Spinnaker to authenticate against your cluster and grants read/write access to any resources you expect it to manage. You can request this file from your Kubernetes cluster administrator.

The kubectl CLI. Spinnaker relies on kubectl to manage all API access; it's installed along with Spinnaker. Only kubectl fully supports many aspects of the Kubernetes API, such as 3-way merges on kubectl apply and API discovery. Though this creates a dependency on a binary, the good news is that any authentication method or API resource that kubectl supports is also supported by Spinnaker. This is an improvement over the original Kubernetes provider in Spinnaker.
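Before adding an account, you can verify that your kubeconfig and kubectl meet these requirements. A minimal sketch, assuming your kubeconfig is at the default path and the `default` namespace is one Spinnaker will manage:

```shell
# Confirm kubectl is installed and a context is configured.
kubectl config current-context

# Confirm the credentials can read and write the kinds of
# resources Spinnaker manages (adjust the namespace as needed).
kubectl auth can-i get pods --namespace default
kubectl auth can-i create deployments --namespace default
```

If any check returns "no", ask your cluster administrator to grant the missing permissions before proceeding.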
Kubernetes Role (RBAC)
If you’re using Kubernetes RBAC for access control, here’s the minimal set of permissions Spinnaker needs. The exact set of permissions might differ based on Kubernetes version.
The following YAML creates the correct ServiceAccount. If you limit Spinnaker to operating on an explicit list of namespaces (using the namespaces option), you need a RoleBinding instead of a ClusterRoleBinding, applied to each namespace Spinnaker manages. You can read about the difference between RoleBinding and ClusterRoleBinding in the Kubernetes RBAC documentation.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: spinnaker-role
rules:
- apiGroups: [""]
  resources: ["configmaps", "namespaces", "pods", "secrets", "services"]
  verbs: ["*"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["list", "get"]
- apiGroups: ["apps"]
  resources: ["controllerrevisions", "deployments", "statefulsets"]
  verbs: ["*"]
- apiGroups: ["extensions", "apps"]
  resources: ["daemonsets", "deployments", "ingresses", "networkpolicies", "replicasets"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: spinnaker-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: spinnaker-role
subjects:
- namespace: default
  kind: ServiceAccount
  name: spinnaker-service-account
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spinnaker-service-account
  namespace: default
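After saving this manifest, you can apply it and extract the service account's bearer token for use in your kubeconfig. A sketch, assuming the manifest is saved as spinnaker-rbac.yml (a hypothetical filename) and your cluster auto-generates token secrets for service accounts, as Kubernetes versions contemporary with Spinnaker 1.6 do:

```shell
# Create the role, binding, and service account.
kubectl apply -f spinnaker-rbac.yml

# Look up the token secret generated for the service account.
SECRET=$(kubectl get serviceaccount spinnaker-service-account \
  --namespace default -o jsonpath='{.secrets[0].name}')

# Decode the bearer token Spinnaker will authenticate with.
TOKEN=$(kubectl get secret "$SECRET" --namespace default \
  -o jsonpath='{.data.token}' | base64 --decode)

# Store the token as the credential for a dedicated kubeconfig user.
kubectl config set-credentials spinnaker-service-account --token="$TOKEN"
```

You can then create a kubeconfig context that pairs this user with your cluster and hand that context to Spinnaker.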
Migrating from the V1 Provider
The V2 provider does not use the Docker Registry Provider, and we encourage you to stop using the Docker Registry accounts in Spinnaker. The V2 provider requires that you manage your private registry configuration and authentication yourself.
There is no automatic pipeline migration from the V1 provider to V2, for a few reasons:
Unlike the V1 provider, the V2 provider encourages you to store your Kubernetes Manifests outside of Spinnaker in some versioned, backing storage, such as Git or GCS.
The V2 provider encourages you to leverage the Kubernetes native deployment orchestration (e.g. Deployments) instead of the Spinnaker blue/green, where possible.
The initial operations available on Kubernetes manifests (e.g. scale, pause rollout, delete) in the V2 provider don’t map nicely to the operations in the V1 provider unless you contort Spinnaker abstractions to match those of Kubernetes. To avoid building dense and brittle mappings between Spinnaker’s logical resources and Kubernetes’s infrastructure resources, we chose to adopt the Kubernetes resources and operations more natively.
However, you can easily migrate your infrastructure to the V2 provider. For any V1 account you have running, you can add a V2 account by following the steps below. This surfaces your infrastructure twice (once per account), which helps you migrate your pipelines and operations.
Adding an Account
First, make sure that the provider is enabled:
hal config provider kubernetes enable
Then add the account:
hal config provider kubernetes account add my-k8s-v2-account \
  --provider-version v2 \
  --context $(kubectl config current-context)
You'll also need to enable artifact support, which the V2 provider uses to retrieve manifests and other deployment inputs:
hal config features edit --artifacts true
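If you store your manifests in versioned backing storage such as GCS, as suggested above, you'll typically also configure an artifact account for that store. A sketch, where the account name and key path are placeholders for illustration:

```shell
# Enable the GCS artifact provider.
hal config artifact gcs enable

# Register a GCS artifact account; the name and --json-path
# value here are hypothetical; substitute your own.
hal config artifact gcs account add my-gcs-artifact-account \
  --json-path /path/to/service-account.json
```

Similar hal commands exist for other artifact sources, such as GitHub and HTTP.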
Advanced Account Settings
If you’re looking for more configurability, please see the other options listed in the Halyard Reference.
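For example, to restrict an account to an explicit list of namespaces (the namespaces option mentioned in the RBAC section above), you can edit the account after creating it. A sketch with hypothetical namespace names:

```shell
# Limit Spinnaker to two namespaces. Remember this also requires
# a RoleBinding (not a ClusterRoleBinding) in each listed namespace.
hal config provider kubernetes account edit my-k8s-v2-account \
  --namespaces default,staging
```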