
@jmelchio
Last active February 10, 2019 17:29
Quick start configuration for Spinnaker with CloudFoundry support on k8s
# elasticsearch.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: spinnaker
  labels:
    service: elasticsearch
spec:
  serviceName: es
  # NOTE: This is the number of elasticsearch nodes to run;
  # adjust it as needed.
  replicas: 2
  selector:
    matchLabels:
      service: elasticsearch
  template:
    metadata:
      labels:
        service: elasticsearch
    spec:
      terminationGracePeriodSeconds: 300
      initContainers:
      # NOTE:
      # Fixes the ownership of the data volume: the elasticsearch
      # container runs as a non-root user (uid 1000), so the volume
      # must be writable by that user.
      # https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#_notes_for_production_use_and_defaults
      - name: fix-the-volume-permission
        image: busybox
        command:
        - sh
        - -c
        - chown -R 1000:1000 /usr/share/elasticsearch/data
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      # NOTE:
      # Raises vm.max_map_count to the 262144 required for production.
      # https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#docker-cli-run-prod-mode
      - name: increase-the-vm-max-map-count
        image: busybox
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      # NOTE:
      # Raises the open-file ulimit.
      # https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#_notes_for_production_use_and_defaults
      - name: increase-the-ulimit
        image: busybox
        command:
        - sh
        - -c
        - ulimit -n 65536
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: tcp
        # NOTE: adjust these resources as needed.
        resources:
          requests:
            memory: 5Gi
          limits:
            memory: 7Gi
        env:
        # NOTE: the cluster name; update this.
        - name: cluster.name
          value: elasticsearch-cluster
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        # NOTE: Tells each elasticsearch node where to find the others
        # so they can form a cluster. The pods run in the spinnaker
        # namespace behind the "es" service, so the hosts resolve as
        # <pod>.es.spinnaker.svc.cluster.local.
        - name: discovery.zen.ping.unicast.hosts
          value: "elasticsearch-0.es.spinnaker.svc.cluster.local,elasticsearch-1.es.spinnaker.svc.cluster.local"
        # NOTE: adjust the heap size as needed.
        - name: ES_JAVA_OPTS
          value: -Xms4g -Xmx4g
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: ssd
      # NOTE: adjust the storage size as needed.
      resources:
        requests:
          storage: 10Gi
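Once the StatefulSet is up, it is worth confirming that both nodes actually joined the cluster. A quick sanity check, assuming `kubectl` access to the spinnaker namespace and that `curl` is available inside the elasticsearch image (it ships in the 6.x images):

```shell
# Wait for both elasticsearch pods to become ready.
kubectl -n spinnaker rollout status statefulset/elasticsearch

# Query cluster health from inside one of the pods; expect
# "number_of_nodes" : 2 and a green or yellow status.
kubectl -n spinnaker exec elasticsearch-0 -- \
  curl -s 'http://localhost:9200/_cluster/health?pretty'
```

If `number_of_nodes` stays at 1, the discovery hosts in `discovery.zen.ping.unicast.hosts` are the first thing to check.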
*** Disclaimer: Config and instructions are supplied 'as is'. No suitability for use is implied, and by using these
instructions users assume responsibility for any outcomes of using these configuration files and instructions. ***

Acknowledgements:

* kube-spinnaker-demo.yml has been derived from the quickstart instructions and samples on the spinnaker.io site.
* service.yml, storage.yml and elasticsearch.yml have been copied (with minor modifications) from the elasticsearch instructions on the spinnaker.io site. (https://www.spinnaker.io/guides/user/tagging/)
Quick instructions for getting an environment up and running with support for k8s, CloudFoundry and entity tagging.
The setup is based on the quickstart file provided on the spinnaker.io website.

Familiarity with Kubernetes is assumed. This was tested on GKE. The cluster used for the full environment with
elasticsearch has 8 vCPUs and 53 GB of memory in total, spread across 4 VMs.

Entity tagging requires elasticsearch. If you don't want it, find the flags related to entity tagging in the
kube-spinnaker-demo.yml file and disable them, and remove the elasticsearch reference from the clouddriver-local.yml
config map item. Otherwise leave the related items unchanged and run kubectl apply -f on storage.yml, service.yml and
elasticsearch.yml, in that order, to set up elasticsearch.

In kube-spinnaker-demo.yml, replace the account and key values with values appropriate for your environment; look for
placeholders such as '[password]'.

Once all values are set, run kubectl apply -f kube-spinnaker-demo.yml and everything should come up as expected.
Expose the spin-deck service either through a proxy or by attaching it to a load balancer to start using Spinnaker.
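The steps above can be sketched as a short shell session (this assumes kubectl already points at the target cluster and the placeholder values have been filled in):

```shell
# Elasticsearch first (only needed for entity tagging), in this order:
kubectl apply -f storage.yml
kubectl apply -f service.yml
kubectl apply -f elasticsearch.yml

# Then Spinnaker itself:
kubectl apply -f kube-spinnaker-demo.yml

# Expose the UI locally while testing; for real use, attach
# the spin-deck service to a load balancer instead.
kubectl -n spinnaker port-forward svc/spin-deck 9000:9000
```

The spin-deck service only exists once the hal-deploy-apply job has finished installing Spinnaker, so the port-forward may need to wait a few minutes after the apply.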
# kube-spinnaker-demo.yml
apiVersion: v1
kind: Namespace
metadata:
  name: spinnaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: spinnaker-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: spinnaker
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: halyard-pv-claim
  namespace: spinnaker
  labels:
    app: halyard-storage-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: spin-halyard
  namespace: spinnaker
  labels:
    app: spin
    stack: halyard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spin
      stack: halyard
  template:
    metadata:
      labels:
        app: spin
        stack: halyard
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      containers:
      - name: halyard-daemon
        # todo - make :stable or digest of :stable
        image: gcr.io/spinnaker-marketplace/halyard:stable
        imagePullPolicy: Always
        command:
        - /bin/sh
        args:
        - -c
        # We persist the files on a PersistentVolume. To have sane defaults,
        # we initialise those files from a ConfigMap if they don't already exist.
        - "test -f /home/spinnaker/.hal/config || cp -R /home/spinnaker/staging/.hal/. /home/spinnaker/.hal/ && /opt/halyard/bin/halyard"
        readinessProbe:
          exec:
            command:
            - wget
            - -q
            - --spider
            - http://localhost:8064/health
        ports:
        - containerPort: 8064
        volumeMounts:
        - name: persistentconfig
          mountPath: /home/spinnaker/.hal
        - name: halconfig
          mountPath: /home/spinnaker/staging/.hal/config
          subPath: config
        - name: halconfig
          mountPath: /home/spinnaker/staging/.hal/default/service-settings/deck.yml
          subPath: deck.yml
        - name: halconfig
          mountPath: /home/spinnaker/staging/.hal/default/service-settings/gate.yml
          subPath: gate.yml
        - name: halconfig
          mountPath: /home/spinnaker/staging/.hal/default/service-settings/igor.yml
          subPath: igor.yml
        - name: halconfig
          mountPath: /home/spinnaker/staging/.hal/default/service-settings/fiat.yml
          subPath: fiat.yml
        - name: halconfig
          mountPath: /home/spinnaker/staging/.hal/default/profiles/front50-local.yml
          subPath: front50-local.yml
        - name: halconfig
          mountPath: /home/spinnaker/staging/.hal/default/profiles/clouddriver-local.yml
          subPath: clouddriver-local.yml
        - name: halconfig
          mountPath: /home/spinnaker/staging/.hal/default/profiles/settings-local.js
          subPath: settings-local.js
        - name: halconfig
          mountPath: /home/spinnaker/staging/.gcp/gce-account.json
          subPath: gce-account.json
      volumes:
      - name: halconfig
        configMap:
          name: halconfig
      - name: persistentconfig
        persistentVolumeClaim:
          claimName: halyard-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: spin-halyard
  namespace: spinnaker
spec:
  ports:
  - port: 8064
    targetPort: 8064
    protocol: TCP
  selector:
    app: spin
    stack: halyard
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: halconfig
  namespace: spinnaker
data:
  igor.yml: |
    enabled: true
    skipLifeCycleManagement: false
  fiat.yml: |
    enabled: false
    skipLifeCycleManagement: true
  front50-local.yml: |
    spinnaker.s3.versioning: false
  clouddriver-local.yml: |
    elasticsearch:
      activeIndex: spinnaker
      connection: http://es.spinnaker.svc.cluster.local:9200
    cloudfoundry:
      enabled: true
      accounts:
      - name: '[name]'
        user: '[user_name]'
        password: '[password]'
        api: '[api-uri]'
  settings-local.js: |
    window.spinnakerSettings.providers.cloudfoundry = {defaults: {account: 'my-cloudfoundry-account'}};
    window.spinnakerSettings.feature.entityTags = true;
  gce-account.json: |
    {
      "type": "service_account",
      "project_id": "[project-name]",
      "private_key_id": "[key_id]",
      "private_key": "[key]",
      "client_email": "[client_email]",
      "client_id": "[client_id]",
      "auth_uri": "https://accounts.google.com/o/oauth2/auth",
      "token_uri": "https://accounts.google.com/o/oauth2/token",
      "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
      "client_x509_cert_url": "[cert_url]"
    }
  gate.yml: |
    host: 0.0.0.0
  deck.yml: |
    host: 0.0.0.0
    env:
      API_HOST: http://spin-gate.spinnaker:8084/
  config: |
    currentDeployment: default
    deploymentConfigurations:
    - name: default
      version: master-latest-unvalidated
      providers:
        appengine:
          enabled: false
          accounts: []
        aws:
          enabled: false
          accounts: []
          defaultKeyPairTemplate: '{{name}}-keypair'
          defaultRegions:
          - name: us-west-2
          defaults:
            iamRole: BaseIAMRole
        azure:
          enabled: false
          accounts: []
          bakeryDefaults:
            templateFile: azure-linux.json
            baseImages: []
        dcos:
          enabled: false
          accounts: []
          clusters: []
        dockerRegistry:
          enabled: false
          accounts: []
        google:
          enabled: false
          accounts: []
          bakeryDefaults:
            templateFile: gce.json
            baseImages: []
            zone: us-central1-f
            network: default
            useInternalIp: false
        kubernetes:
          enabled: true
          accounts:
          - name: kubernetes
            requiredGroupMembership: []
            providerVersion: V2
            dockerRegistries: []
            configureImagePullSecrets: true
            serviceAccount: true
            namespaces: []
            omitNamespaces: []
            kinds: []
            omitKinds: []
            customResources: []
            oauthScopes: []
            oAuthScopes: []
          primaryAccount: kubernetes
        openstack:
          enabled: false
          accounts: []
          bakeryDefaults:
            baseImages: []
        oraclebmcs:
          enabled: false
          accounts: []
      deploymentEnvironment:
        size: SMALL
        type: Distributed
        accountName: kubernetes
        updateVersions: true
        consul:
          enabled: false
        vault:
          enabled: false
        customSizing: {}
        gitConfig:
          upstreamUser: spinnaker
      persistentStorage:
        persistentStoreType: s3
        azs: {}
        gcs:
          rootFolder: front50
        redis: {}
        s3:
          bucket: spinnaker-artifacts
          rootFolder: front50
          endpoint: http://minio-service.spinnaker:9000
          accessKeyId: 'dont-use-this'
          secretAccessKey: 'for-production'
        oraclebmcs: {}
      features:
        auth: false
        fiat: false
        chaos: false
        entityTags: true
        jobs: true
        artifacts: true
      metricStores:
        datadog:
          enabled: false
        prometheus:
          enabled: false
          add_source_metalabels: true
        stackdriver:
          enabled: false
        period: 30
        enabled: false
      notifications:
        slack:
          enabled: false
      timezone: America/Toronto
      ci:
        jenkins:
          enabled: true
          masters:
          - name: my-jenkins-master
            address: '[location_uri]'
            username: '[user_name]'
            password: '[password]'
        travis:
          enabled: false
          masters: []
      security:
        apiSecurity:
          ssl:
            enabled: false
          overrideBaseUrl: /gate
        uiSecurity:
          ssl:
            enabled: false
        authn:
          oauth2:
            enabled: false
            client: {}
            resource: {}
            userInfoMapping: {}
          saml:
            enabled: false
          ldap:
            enabled: false
          x509:
            enabled: false
          enabled: false
        authz:
          groupMembership:
            service: EXTERNAL
            google:
              roleProviderType: GOOGLE
            github:
              roleProviderType: GITHUB
            file:
              roleProviderType: FILE
          enabled: false
      artifacts:
        gcs:
          enabled: true
          accounts:
          - name: gcs-artifact-account
            jsonPath: /home/spinnaker/staging/.gcp/gce-account.json
        github:
          enabled: true
          accounts: []
        http:
          enabled: true
          accounts:
          - name: jenkins
            username: '[user_name]'
            password: '[password]'
          - name: public
      pubsub:
        google:
          enabled: true
          subscriptions:
          - name: gcs-subscription
            project: [project-name]
            subscriptionName: [subscription-name]
            jsonPath: /home/spinnaker/staging/.gcp/gce-account.json
            ackDeadlineSeconds: 10
            messageFormat: GCS
      canary:
        enabled: true
        serviceIntegrations:
        - name: google
          enabled: false
          accounts: []
          gcsEnabled: false
          stackdriverEnabled: false
        - name: prometheus
          enabled: false
          accounts: []
        - name: datadog
          enabled: false
          accounts: []
        - name: aws
          enabled: true
          accounts:
          - name: kayenta-minio
            bucket: spinnaker-artifacts
            rootFolder: kayenta
            endpoint: http://minio-service.spinnaker:9000
            accessKeyId: 'dont-use-this'
            secretAccessKey: 'for-production'
            supportedTypes:
            - CONFIGURATION_STORE
            - OBJECT_STORE
          s3Enabled: true
        reduxLoggerEnabled: true
        defaultJudge: NetflixACAJudge-v1.0
        stagesEnabled: true
        templatesEnabled: true
        showAllConfigsEnabled: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim
  namespace: spinnaker
  labels:
    app: minio-storage-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment
  namespace: spinnaker
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: minio
    spec:
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: minio-pv-claim
      containers:
      - name: minio
        image: minio/minio
        args:
        - server
        - /storage
        env:
        - name: MINIO_ACCESS_KEY
          value: 'dont-use-this'
        - name: MINIO_SECRET_KEY
          value: 'for-production'
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: storage
          mountPath: /storage
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
  namespace: spinnaker
spec:
  ports:
  - port: 9000
    targetPort: 9000
    protocol: TCP
  selector:
    app: minio
---
apiVersion: batch/v1
kind: Job
metadata:
  name: hal-deploy-apply
  namespace: spinnaker
  labels:
    app: job
    stack: hal-deploy
spec:
  template:
    metadata:
      labels:
        app: job
        stack: hal-deploy
    spec:
      restartPolicy: OnFailure
      containers:
      - name: hal-deploy-apply
        # todo use a custom image
        image: gcr.io/spinnaker-marketplace/halyard:stable
        command:
        - /bin/sh
        args:
        - -c
        - "hal deploy apply --daemon-endpoint http://spin-halyard.spinnaker:8064"
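Since the hal-deploy-apply Job drives the actual Spinnaker installation, it is worth watching its output after applying the manifests (a suggested check, not part of the original instructions):

```shell
# Follow the halyard job; with restartPolicy: OnFailure it retries
# until the halyard daemon is reachable and the deploy succeeds.
kubectl -n spinnaker logs -f job/hal-deploy-apply

# When the job finishes, the Spinnaker services (spin-deck, spin-gate,
# spin-clouddriver, ...) appear in the namespace.
kubectl -n spinnaker get pods
```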
# service.yml
apiVersion: v1
kind: Service
metadata:
  name: es
  namespace: spinnaker
  labels:
    service: elasticsearch
spec:
  # Headless service, so the StatefulSet pods get stable DNS names
  # (elasticsearch-0.es.spinnaker.svc.cluster.local, ...) that the
  # elasticsearch discovery settings rely on.
  clusterIP: None
  ports:
  - port: 9200
    name: serving
  - port: 9300
    name: node-to-node
  selector:
    service: elasticsearch
# storage.yml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  # StorageClass is cluster-scoped, so no namespace is set.
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zone: us-central1-a