I hereby claim:
- I am quater on github.
- I am quater (https://keybase.io/quater) on keybase.
- I have a public key ASDvzi_IC-63lA48V9RXzVAigRkzpAPOPP9I3oB0Q7boEwo
To claim this, I am signing this object:
vagrant@ubuntu-bionic:~/shared_folder/scripts$ ./lxd-attach-calico.sh lxd2 backend callxd1
2018-12-11 16:12:33.244 [DEBUG][26444] plugin.go 63: /var/lib/calico/nodename exists
2018-12-11 16:12:33.244 [DEBUG][26444] utils.go 59: Read node name from file: ubuntu-bionic
2018-12-11 16:12:33.244 [DEBUG][26444] utils.go 69: Using node name ubuntu-bionic
2018-12-11 16:12:33.244 [DEBUG][26444] utils.go 425: Getting WEP identifiers with arguments: IgnoreUnknown=1, for node ubuntu-bionic
2018-12-11 16:12:33.244 [DEBUG][26444] utils.go 426: Loaded k8s arguments: {{true} <nil> }
2018-12-11 16:12:33.244 [INFO][26444] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"default", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"ubuntu-bionic", Orchestrator:"cni", Endpoint:"callxd1", Workload:"", Pod:"", ContainerID:"cnitool-f21388e577474665726b"}}
2018-12-11 16:12:33.245 [DEBUG][26444] load.go 70: Loading config from environment
2018-12-11 16:12:33.245 [DEBUG][26
{
  "builders": [
    {
      "type": "lxd",
      "image": "ubuntu-daily:bionic"
    }
  ],
  "provisioners": [
    {
      "type": "file",
PACKER_LOG=1 packer build templates/blabla-lxd-amd64.json
2018/09/12 15:21:48 [INFO] Packer version: 1.3.0
2018/09/12 15:21:48 Packer Target OS/Arch: linux amd64
2018/09/12 15:21:48 Built with Go Version: go1.11
2018/09/12 15:21:48 Detected home directory from env var: /home/userbla
2018/09/12 15:21:48 Using internal plugin for hyperv-iso
2018/09/12 15:21:48 Using internal plugin for ncloud
2018/09/12 15:21:48 Using internal plugin for oracle-classic
2018/09/12 15:21:48 Using internal plugin for profitbricks
2018/09/12 15:21:48 Using internal plugin for scaleway
# Command
$ kops rolling-update cluster shine.dev.example.org --state s3://example-org-cluster-config --v 10
# Output
I0903 15:12:55.477031   15573 factory.go:68] state store s3://example-org-cluster-config
I0903 15:12:55.930741   15573 s3context.go:198] Checking default bucket encryption "example-org-cluster-config"
I0903 15:12:55.930762   15573 s3context.go:203] Calling S3 GetBucketEncryption Bucket="example-org-cluster-config"
I0903 15:12:56.321236   15573 s3context.go:210] Unable to read bucket encryption policy: will encrypt using AES256
I0903 15:12:56.321253   15573 s3context.go:182] Found bucket "example-org-cluster-config" in region "us-east-1" with default encryption set to false
I0903 15:12:56.321269   15573 s3fs.go:219] Reading file "s3://example-org-cluster-config/shine.dev.example.org/config"
I0612 13:03:05.153700   12810 s3context.go:198] Checking default bucket encryption "the-bucket-name"
I0612 13:03:05.153766   12810 s3context.go:203] Calling S3 GetBucketEncryption Bucket="the-bucket-name"
W0612 13:03:05.539259   12810 s3context.go:210] Unable to read bucket encryption policy: will encrypt using AES256
I0612 13:03:05.539273   12810 s3context.go:182] Found bucket "the-bucket-name" in region "us-east-1" with default encryption set to false
I0612 13:03:05.539301   12810 s3fs.go:216] Reading file "s3://the-bucket-name/our.cluster.name.io/config"
I0612 13:03:05.697213   12810 s3fs.go:253] Listing objects in S3 bucket "the-bucket-name" with prefix "our.cluster.name.io/instancegroup/"
I0612 13:03:05.804814   12810 s3fs.go:281] Listed files in s3://the-bucket-name/our.cluster.name.io/instancegroup: [s3://the-bucket-name/our.cluster.name.io/instancegroup/master-us-east-1a s3://the-bucket-name/our.cluster.name.io/instancegroup/nodes]
I0612 13:03:05.804856   12810 s3fs.go:216] Reading file "s3://the-buck
# Pull Request: https://github.com/hashicorp/terraform/pull/7319
# Test Case: AWS VPC Endpoint with Terraform
# This was successfully tested when the master branch was at de0a34fc3517893a5078f6358ca9523cd4c63490
# Steps:
# 1. Alter the code below by replacing MY_SSH_KEY_NAME, MY_BUCKET_NAME, MY_ACCESS_KEY and MY_SECRET_KEY with your values.
# 2. Run `terraform apply`
# 3. SSH to the EC2 instance
# 4. Create a new file, e.g. `touch /home/ec2-user/testfile.txt`
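Step 1 above can be scripted instead of edited by hand. A minimal sketch, assuming the template has been saved as `main.tf`; the filename, the stand-in template contents, and the replacement values are illustrative, not taken from the gist:

```shell
# Create a tiny stand-in for the Terraform template (the gist's real
# file is not reproduced here), then substitute the placeholders the
# steps above mention. All substituted values are examples only.
cat > main.tf <<'EOF'
variable "key_name" { default = "MY_SSH_KEY_NAME" }
variable "bucket"   { default = "MY_BUCKET_NAME" }
EOF

sed -i \
  -e 's/MY_SSH_KEY_NAME/my-keypair/' \
  -e 's/MY_BUCKET_NAME/my-test-bucket/' \
  -e 's/MY_ACCESS_KEY/AKIAEXAMPLE/' \
  -e 's/MY_SECRET_KEY/examplesecret/' \
  main.tf

grep -q my-keypair main.tf && echo "placeholders replaced"
```

In practice, declaring these as Terraform variables and passing `-var 'key=value'` flags to `terraform apply` avoids editing the file at all.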
$ vagrant up swarm-node-1 --debug
INFO global: Vagrant version: 1.8.1
INFO global: Ruby version: 2.2.3
INFO global: RubyGems version: 2.4.5.1
INFO global: VAGRANT_OLD_ENV_XDG_SESSION_DESKTOP="gnome"
INFO global: VAGRANT_INSTALLER_VERSION="2"
INFO global: VAGRANT_OLD_ENV_LC_NUMERIC="en_GB.UTF-8"
INFO global: VAGRANT_OLD_ENV_GNOME_KEYRING_PID=""
INFO global: VAGRANT_INSTALLER_EMBEDDED_DIR="/opt/vagrant/embedded"
INFO global: VAGRANT_OLD_ENV_GEM_PATH="/home/bogus-user/.chefdk/gem/ruby/2.1.0:/opt/chefdk/embedded/lib/ruby/gems/2.1.0"
#!/bin/bash
# Add Vagrant's hostupdater commands to sudoers, for `vagrant up` without a password

# Force sudo on self: re-exec as root if needed.
if [ "$(id -u)" -ne 0 ]; then
    exec sudo -p "Login password for %p: " "$0" "$@"
    exit $?
fi

# Stage updated sudoers in a temporary file for syntax checking