Parse a Kubernetes manifest in YAML or JSON, regardless of the manifest type.
Examples:
package main
import (
"bytes"
"context"
# The following is an example of a command to clone a development Deployment (ex-app-development) as a production Deployment (ex-app-production).
kubectl get deployment ex-app-development -o json \
  | jq '.metadata.name = "ex-app-production"' \
  | kubectl apply -f -
// Turn on gRPC for the domain using
// https://developers.cloudflare.com/support/network/understanding-cloudflare-grpc-support/#enable-grpc, then CF
// rewrites gRPC requests to gRPC-web (like a reverse Envoy filter, see https://blog.cloudflare.com/road-to-grpc/#converting-to-http-1-1)
// which can be handled by Workers. Does not work with workers.dev.
export default {
  async fetch(request, env, context) {
    // Use a stream so CF doesn't add a content-length header,
    // which would prevent the grpc-web -> grpc conversion.
    const { readable, writable } = new TransformStream();
    let writer = writable.getWriter();
#!/bin/bash -e
# How to use this script:
# 1. Follow these instructions to configure a single AWS account to do initial login with SSO
#    https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sso.html
# 2. Export AWS_PROFILE=... and then run "aws sso login" to get an SSO token
# 3. Once signed in with AWS SSO, run this script to automatically list out all the other accounts and roles and add them to your config file
# If you want to filter roles / accounts in the process, or validate config before committing it, you can customise the script to do this.
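The script body is not shown here. For reference, here is a rough Go sketch of the same idea, assuming the aws-sdk-go-v2 SSO client and an SSO access token obtained separately (the token command-line argument is hypothetical; the real script presumably reads the cache that "aws sso login" writes under ~/.aws/sso/cache). Pagination and writing the profiles into the config file are left out:

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sso"
)

func main() {
	// Hypothetical: the SSO access token is passed as the first argument.
	token := os.Args[1]

	// Picks up the region from the environment or the active profile.
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := sso.NewFromConfig(cfg)

	// List every account visible to the SSO session, then every role in each
	// account, and print a config-file style profile block (pagination omitted).
	accounts, err := client.ListAccounts(context.TODO(), &sso.ListAccountsInput{AccessToken: aws.String(token)})
	if err != nil {
		log.Fatal(err)
	}
	for _, acct := range accounts.AccountList {
		roles, err := client.ListAccountRoles(context.TODO(), &sso.ListAccountRolesInput{
			AccessToken: aws.String(token),
			AccountId:   acct.AccountId,
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, role := range roles.RoleList {
			fmt.Printf("[profile %s-%s]\nsso_account_id = %s\nsso_role_name = %s\n\n",
				aws.ToString(acct.AccountName), aws.ToString(role.RoleName),
				aws.ToString(acct.AccountId), aws.ToString(role.RoleName))
		}
	}
}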
The instructions in this tutorial are not WSL2-specific, but if you would like to see how to get started with kind in WSL2 (and afterwards follow along with this tutorial), do have a look at this video.
In order to create a multi-node cluster with kind, we need to create a configuration file. Let's say we want a 3-node cluster with 1 control-plane node and 2 workers; we would then create the following configuration file (you can save it as kind-config.yaml):
kind: Cluster
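apiVersion: kind.x-k8s.io/v1alpha4
# The nodes list below completes the 3-node layout described above
# (1 control-plane + 2 workers); field names follow kind's documented
# v1alpha4 config format.
nodes:
- role: control-plane
- role: worker
- role: worker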
#!/usr/bin/ruby
require 'json'
require 'net/http'
require 'shellwords'
require 'time'
require 'uri'
require 'yaml'
# Pull the sidecar injector template out of the istio-sidecar-injector configmap
# and extract the platform-istio-proxy image reference it currently points at.
@target_sidecar_image = YAML.load(YAML.load(`kubectl --namespace=istio-system get configmap istio-sidecar-injector -o yaml`)['data']['config'])['template'].match(/.*(eu.gcr.io\/at-artefacts\/platform-istio-proxy.*)".*/)[1]
package main

import (
	"log"
	"os/exec"
)

func main() {
	path, err := exec.LookPath("ls")
	if err != nil {
		// Assumed completion of the truncated snippet: exit if "ls" is not in PATH,
		// otherwise report where it was found.
		log.Fatal(err)
	}
	log.Printf("ls is available at %s", path)
}
We Gophers love table-driven tests: they keep our unit testing structured and make it easy to add new test cases.
Let's create our table-driven test; for convenience, I chose to use t.Log as the test function.
Notice that we don't have any assertions in this test; they are not needed for the demonstration.
func TestTLog(t *testing.T) {
	t.Parallel()
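	// Sketch of the table described above; the case names are made up
	// for illustration and are not from the original post.
	tests := []struct {
		name string
	}{
		{name: "test 1"},
		{name: "test 2"},
	}
	for _, tc := range tests {
		tc := tc // capture the range variable so each parallel subtest sees its own copy
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel()
			// No assertion on purpose: t.Log stands in for the code under test.
			t.Log(tc.name)
		})
	}
}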
Run diskutil list
to figure out which drive is the USB; on a MacBook Pro with one hard drive, the USB is /dev/disk2.
diskutil unmountDisk /dev/disk2
or use Mac's Disk Utility (just unmount, don't eject: unmount removes it from the directory structure, while eject disconnects it altogether).
Use dd (a low-level cp) to write the ISO content onto the USB drive:
sudo dd if=~/Downloads/Fedora-Live-Desktop-x86_64-20-1.iso of=/dev/disk2 bs=1m
This will take a bit of time; make sure you wait until it's done. Additionally, compare the size or checksum to make sure everything has been copied (not strictly necessary, since if it weren't copied it would error at boot time).
Run a local script on a remote host over ssh:
ssh [email protected] "bash -s -x" -- <ixgbevf-upgrade.sh