Frequent commands/bash-snippets

Stash show diff
git stash show -p stash@{0} # full diff
git stash show -u -p stash@{0} # full diff, including untracked files (they don't appear by default)
git stash show --stat -u stash@{0} # stats, including untracked files

Stash save modified+untracked with custom message
git stash save -u "My awesome custom message"
# note: 'git stash save' is deprecated in modern git; the equivalent is: git stash push -u -m "My awesome custom message"

Clean repo except files by pattern (dry run)
git clean --force --force -d -X --dry-run -e "\!important_untracked_secrets*.txt"

Clean repo except files by pattern (effective)
git clean --force --force -d -X -e "\!important_untracked_secrets*.txt"

Delete the refs to branches that don't exist on the remote
git remote prune --dry-run origin # check what is going to be deleted
git remote prune origin

Prune all unreachable objects (in the .git directory):
git prune -n # dry run
git prune

Delete only untracked files (but not ignored ones):
git ls-files . --exclude-standard --others | xargs rm -rf

Search in both tracked and untracked (but not ignored) files + show only filenames
git grep --untracked --files-with-matches -F "123456"

Forcefully pull tags (from: https://stackoverflow.com/a/58438257)
git fetch --tags --force

Set HEAD if the default branch changed on the remote (for example: master -> main)
git remote set-head origin main

Pull a "divergent" branch ("re-create" way):
git pull --rebase origin develop

Shallow clone/fetch repo (multiple branches)
git clone --depth 1 [email protected]:<repo-owner>/<repo-name>.git # default branch
cd <repo-name>
git log --oneline # one commit here
cat .git/config
git remote set-branches --add origin 'develop' # add new branch to shallow fetch
cat .git/config # is different now
git fetch --depth 1 origin develop:develop
git checkout develop
git log --oneline # again, one commit here
git branch # two branches: develop and the default one

List tags from recent to older (plus grep with a pattern):
git log --tags --oneline --pretty="%h %d %s" --decorate=full | grep -E 'refs\/tags\/v'

git log with SHA/datetime/commit author (an example):
git log --pretty=format:"%h%x09%an%x09%ad%x09%s"

List files changed/added/deleted by the commit at HEAD:
git diff-tree --no-commit-id --name-only HEAD^1..HEAD -r

"git revert" the most recent commit, but only change the files, without committing:
git show HEAD | git apply -R

Show a file at a specific commit SHA:
git show abc012yz:./file.txt # > ./file.txt # to rewrite

Find the common commit (common ancestor) of two given branches:
git merge-base master develop

Get the full SHA of the current commit (at HEAD):
git rev-parse HEAD

Diff two files with git diff highlighting, but without using git's index:
git diff --no-index file1 file2

Diff two directories with git diff highlighting, without using git's index; show only files which exist in both directories and have different content (M - modified):
git --no-pager diff --diff-filter=M --name-status --no-index --no-color dir1 dir2

List changed files from a diff (untracked not included):
git --no-pager diff --stat --name-only

Save a diff into a file and then apply it:
git diff branch01..HEAD > example.patch
git checkout somewhereelse
git apply example.patch
# Apply with conflicts (three-way merge):
git apply -3 example.patch
rm -f example.patch

Simplest bash shebang:
#!/bin/bash

With options:
#!/bin/bash -xe

Link: https://www.gnu.org/software/bash/manual/html_node/The-Set-Builtin.html

e option - exit immediately if a pipeline (which may consist of a single simple command), a list, or a compound command returns a non-zero status (there are exceptions: inside if, etc.):
set -e
set +e # to disable

x option - print a trace of simple commands:
set -x

u option - treat unset variables and parameters as errors (there are exceptions: array variables, special parameters):
set -u

Purpose: fail the whole pipeline if any command in the pipeline fails.
Examples from: https://gist.github.com/mohanpedala/1e2ff5661761d3abd0385e8223e16425#set--o-pipefail
The following prints 2 (the status of the failed 'grep' command):
set -o pipefail
grep -F 'some-pattern' /non/existent/file.txt | sort
echo $? # 2

The following prints 0, the status of the 'sort' command: the stdout of the failed 'grep' is empty (stderr is not empty - there is an error message), and 'sort' accepts it and returns 0:
set +o pipefail # disable the option (it is disabled by default)
grep -F 'some-pattern' /non/existent/file.txt | sort
echo $? # 0

To enable an option in zsh (like set -o <OPTION>):
setopt xtrace
# the same as: set -x

To disable an option in zsh (like set +o <OPTION>):
unsetopt errexit
# the same as: set +x

List modified options:
setopt

List all possible options:
emulate -lLR zsh

Check exit code and output of a command:
set +e
./command-to-test > command1-output.txt 2>&1
command_failed="${?}"
set -e
if [ "${command_failed}" -ne 0 ]; then
  if grep -q -F "The Known Error" command1-output.txt; then
    echo "Known issue. Skipping"
  else
    echo "Unknown issue. Failing"
    exit 1
  fi
else
  ./process-output command1-output.txt
fi

Launch a command if 'yes' is prompted:
./command01 && echo "Continue?" && read -r line && [[ $line == "yes" ]] && ./command02

Read-Eval-Print-Loop (REPL) simple example:
while IFS= read -r line; do
  if [[ $line == "quit" ]]; then
    echo "Bye"
    break
  else
    echo "Hello, $line"
  fi
done

List files in a zip archive:
unzip -l ~/Downloads/archive.zip

Unzip a zip archive silently (an example):
unzip -j ~/Downloads/archive.zip dir01 -d . &> /dev/null

Create a password-protected zip archive from a directory:
zip -e -r -q archive.zip directory/.
# then enter password

Get the dirname of a filepath
dirname $filepath

Concatenate the content of all files specified by a glob:
awk 'FNR==1{print ""}1' example-dir/*.tf

Resolve symlinks:
readlink -f /path/to/symlink
readlink -f $(which java)

Add an ssh key to the ssh agent:
ssh-add ~/.ssh/some-private-ssh-key-filename

Export homebrew's environment variables:
eval "$(/opt/homebrew/bin/brew shellenv)"

Add a binary to the PATH env variable:
export PATH="$PATH:/path/to/binary"
export PATH="/opt/homebrew/opt/example-cli/bin:$PATH"

Allow comments in interactive zsh commands:
setopt interactivecomments

Disable homebrew's auto update:
export HOMEBREW_NO_AUTO_UPDATE=1

Activate a virtualenv:
source ~/.venv/virtualenv-example/bin/activate

Set an npm token to read private npm registries (Github api token):
export NPM_AUTH_TOKEN=ghp_qwerty123456

Compare outputs of two pipelines:
comm -12 <(./command1 | sed 's|command1/specific/||g' | sort) <(./command2 | sed 's|command2/specific/||g' | sort)

Sort by the n-th column:
cat tab-delimited-file.txt | tr '\t' ',' | sort -t, -nk4

Prepend content from one textfile to another:
cat prefix.txt dest.txt > dest.txt.edited
mv dest.txt.edited dest.txt

Remove the last n lines from a textfile:
# to remove, for example, the 12 last lines ('tail -n +13' starts printing at line 13, skipping 12 lines of the reversed file):
tail -r dest.txt | tail -n +13 | tail -r > dest.txt.edited
mv dest.txt.edited dest.txt

Create/rewrite a file with multiple lines:
tee dest.txt > /dev/null <<EOF
line1
line2
line3
EOF

Append multiple lines to file:
tee -a dest.txt > /dev/null <<EOF
line1
line2
line3
EOF

Change the user owner and group owner of a given directory and its sub-directories and files (recursively):
chown -R new-user:new-group /path/to/dir

Set a file's attributes to allow execution:
# user can execute:
chmod u+x script.py
# user, group and others (all) can execute:
chmod a+x script.py

Add needed shebangs (at the file's beginning):
# sed -i '' ...: edit files in-place using the FreeBSD implementation of sed (present in macOS)
# bash script example:
sed -i '' '1s|^|#!/bin/bash\n|' script.sh
# python example:
sed -i '' '1s|^|#!/usr/bin/env python\n|' script.py
# ruby example:
sed -i '' '1s|^|#!/usr/bin/env ruby\n|' script.rb

Copy STDOUT to clipboard:
cat text-file.txt | pbcopy

Paste data from the clipboard to STDOUT:
pbpaste

Replace the current contents of the clipboard with a base64-encoded version:
pbpaste | base64 | pbcopy

Remove formatting from text in the clipboard:
pbpaste | pbcopy

sed: remove line example:
sed '/limits\.cpu:/d' file.yaml

sed: remove line example and save to the file (in macOS):
sed -i '' '/limits\.cpu:/d' file.yaml

sed: delete a specific line if it is found after another specific line:
sed 'N;P;\_limits:\n *cpu: # PLACEHOLDER$_d;D' file.yaml

sed: remove the last line from a file:
sed -i '' -e '$ d' file.txt

sed: replace only the first occurrence of a given pattern (works in macOS):
# to replace the first " }" occurrence with " } # Hello"
sed -i '' -e '1s/ }/ } # Hello/;t' -e '1,/ }/s// } # Hello/' config-file.hcl

sed: insert multiple lines at line N:
sed -i '' '20i\
Lorem ipsum dolor sit amet,\
consectetur adipiscing elit\
Nulla ultrices pretium nisi sed maximus\
' file.txt

sed: show file content from line N to line X:
# from line 50 to line 100:
sed -n 50,100p file.txt

sed+grep: show yaml file content except comments and empty lines:
grep -v -E '^ *#' config.yaml | sed '/^ *$/d'

git+sed: find files in the index and edit them in-place using a pattern:
# sed -i '' ...: edit files in-place using the FreeBSD implementation of sed (present in macOS)
git grep --files-with-matches -F "module = planet" | \
xargs sed -i '' -E 's/(planet_name *= *)"jupiter"/\1"mars"/g'

Same as the previous, but using awk instead of the '--files-with-matches' flag:
git grep -F "module = planet" | \
awk -F ':' '{print $1}' | \
xargs sed -i '' -E 's|(planet_name *= *)"jupiter"|\1"mars"|g'

Or using find's output instead of git grep's output:
find one/awesome/directory/mars -type f -name main.txt | xargs sed -i '' -E 's/(planet_name *= *)"jupiter"/\1"mars"/g'

git+cp: recursive copy from one subdir to a new subdir:
for textfile in $(git ls-files one/awesome/directory/jupiter); do
  newfile=$(echo $textfile | sed 's|/jupiter/|/mars/|g')
  mkdir -p "$(dirname $newfile)"
  cp -n $textfile $newfile
  # to overwrite:
  # cp $textfile $newfile
done

Map every item in a list:
# for every item, show the value of the 'abc' key and remove ',' symbols:
echo '[{"abc":"Hello, World","def":"012123123123"},{"abc":"a,s,d,f","def":"2342432"}]' \
| jq -r '. | map("\(.abc | sub(",";"";"g"))") | .[]'
# result:
# Hello World
# asdf

Get the current date in format YYYY-mm-dd:
date +%Y-%m-%d

Get a relative date from the current one:
date -v-3d +%Y-%m-%d # 3 days ago
date -v-3m +%Y-%m-%d # 3 months ago
date -v+3d +%Y-%m-%d # 3 days after the current date

Generate/update .terraform.lock.hcl using terraform:
rm -f .terraform.lock.hcl # remove the previously created lock file if it exists
terraform providers lock

Generate/update .terraform.lock.hcl using terragrunt (terragrunt may run 'init' anyway):
rm -f path/to/tg/config/.terraform.lock.hcl # remove the previously created lock file if it exists
terragrunt --terragrunt-working-dir=path/to/tg/config --terragrunt-no-auto-init providers lock

Get terragrunt's working dir:
terragrunt terragrunt-info | jq -r '.WorkingDir'

Running terragrunt for multiple configs using run-all:
# running for all terragrunt configs under the path/to/parent-dir directory:
terragrunt --terragrunt-working-dir=path/to/parent-dir run-all init -upgrade
terragrunt --terragrunt-working-dir=path/to/parent-dir run-all validate
terragrunt --terragrunt-working-dir=path/to/parent-dir run-all plan -input=false

Forcefully unlock a state lock:
terraform force-unlock <ID>
# OR using terragrunt:
terragrunt force-unlock <ID> --terragrunt-working-dir=path/to/tg/project

Cleanup all .terragrunt-cache dirs from a terragrunt monorepo:
find . -type d -name '.terragrunt-cache' | xargs rm -rf

Inspecting large/sensitive plans (drifts) with jq:
## Initial commands:
terraform init
# terragrunt --terragrunt-working-dir=path/to/tg/config init
terraform validate
# terragrunt --terragrunt-working-dir=path/to/tg/config validate
## Generate a binary plan:
terraform plan -out $(pwd)/tf-plan -input=false
# terragrunt --terragrunt-working-dir=path/to/tg/config plan -out $(pwd)/tf-plan -input=false
## Show plan:
terraform show -no-color $(pwd)/tf-plan | less
# terragrunt --terragrunt-working-dir=path/to/tg/config show -no-color $(pwd)/tf-plan | less
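A small addition (not from the original list): the rendered plan ends with a standard one-line summary that can be extracted with grep instead of paging through the whole output. The terraform invocation is shown as a comment; the grep pattern itself is demonstrated on a sample summary line.

```shell
# Intended use (requires the binary plan generated above):
#   terraform show -no-color $(pwd)/tf-plan | grep -E '^(Plan:|No changes)'
# The grep pattern, demonstrated on a sample summary line:
summary=$(printf '%s\n' 'Plan: 1 to add, 2 to change, 0 to destroy.' | grep -E '^(Plan:|No changes)')
echo "$summary"
```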
## Getting plan in json from binary plan:
terraform show -json $(pwd)/tf-plan > $(pwd)/tf-plan.json
# terragrunt --terragrunt-working-dir=path/to/tg/config show -json $(pwd)/tf-plan > $(pwd)/tf-plan.json
## How many resources are going to be changed (affected):
jq -r -M '[.resource_changes[] | select(.change.actions == ["update"])] | length' tf-plan.json
## Compare before/after of first affected resource:
diff <(jq -r -M '[.resource_changes[] | select(.change.actions == ["update"])][0].change.before' tf-plan.json) <(jq -r -M '[.resource_changes[] | select(.change.actions == ["update"])][0].change.after' tf-plan.json)
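A hedged extension of the single-resource diff above: loop over every updated resource instead of only index [0]. In real use the jq calls would read tf-plan.json from the earlier step; the inline sample file here exists only so the sketch is self-contained.

```shell
# Sketch: diff before/after for EVERY updated resource, not only index [0].
# tf-plan.sample.json is a tiny hand-made stand-in for tf-plan.json.
cat > tf-plan.sample.json <<'EOF'
{"resource_changes":[
  {"address":"null_resource.a","change":{"actions":["update"],"before":{"x":1},"after":{"x":2}}},
  {"address":"null_resource.b","change":{"actions":["create"],"before":null,"after":{"y":1}}}
]}
EOF
count=$(jq -r '[.resource_changes[] | select(.change.actions == ["update"])] | length' tf-plan.sample.json)
i=0
while [ "$i" -lt "$count" ]; do
  echo "=== $(jq -r "[.resource_changes[] | select(.change.actions == [\"update\"])][$i].address" tf-plan.sample.json)"
  jq -r "[.resource_changes[] | select(.change.actions == [\"update\"])][$i].change.before" tf-plan.sample.json > before.tmp
  jq -r "[.resource_changes[] | select(.change.actions == [\"update\"])][$i].change.after" tf-plan.sample.json > after.tmp
  diff before.tmp after.tmp || true # diff exits 1 when the files differ
  i=$((i + 1))
done
rm -f tf-plan.sample.json before.tmp after.tmp
```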
## Compare before/after of first affected resource (sensitive attributes):
diff <(jq -r -M '[.resource_changes[] | select(.change.actions == ["update"])][0].change.before_sensitive' tf-plan.json) <(jq -r -M '[.resource_changes[] | select(.change.actions == ["update"])][0].change.after_sensitive' tf-plan.json)
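Another small helper in the same spirit (an addition, not from the original list): print every planned change as "<actions> <address>", skipping no-ops. Again demonstrated on an inline sample instead of the real tf-plan.json.

```shell
# List "actions address" pairs for every non-no-op change.
# Swap tf-plan.sample.json for tf-plan.json in real use.
cat > tf-plan.sample.json <<'EOF'
{"resource_changes":[
  {"address":"null_resource.a","change":{"actions":["update"]}},
  {"address":"null_resource.b","change":{"actions":["no-op"]}}
]}
EOF
changes=$(jq -r '.resource_changes[] | select(.change.actions != ["no-op"]) | "\(.change.actions | join(",")) \(.address)"' tf-plan.sample.json)
echo "$changes"
rm -f tf-plan.sample.json
```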
## List all change actions (with duplicates)
jq -r -M '[.resource_changes[] | .change.actions] | flatten | .[]' tf-plan.json

Check if there are missing dependencies
Requires: hcl2json, jq, ruby
for tgfile in $(find . -path './prefix-dir*/**/terragrunt.hcl' -type f | grep -v -F '/.terragrunt-cache/'); do
  current_tg_config="${tgfile%/terragrunt.hcl}"
  # fix me: to work with 'dependencies' (plural)
  current_tg_deps=$(hcl2json $tgfile | jq -r '(if has("dependency") then .dependency|to_entries|map(.value[0].config_path)|join(" ") else "" end)')
  if [ ! -z "${current_tg_deps}" ]; then
    echo "${current_tg_config} dependencies:"
    for dependency in $(echo ${current_tg_deps}); do
      echo -n "${dependency}: "
      dep_location=$(ruby -e "require 'pathname'; puts(Pathname.new(ARGV[0]).join(ARGV[1]).cleanpath)" "${current_tg_config}" "${dependency}")
      if [ ! -f "${dep_location}/terragrunt.hcl" ]; then
        echo "NOT OK"
      else
        echo "OK"
      fi
    done
    echo ""
  fi
done

Show dependencies with outputs
In addition to the previous snippet's requirements, this one also requires awk
for tgfile in $(find . -path './prefix-dir*/**/terragrunt.hcl' -type f | grep -v -F '/.terragrunt-cache/'); do
  current_tg_config="${tgfile%/terragrunt.hcl}"
  # fix me: to work with 'dependencies' (plural)
  current_tg_deps=$(hcl2json $tgfile | jq -r '(if has("dependency") then .dependency|to_entries|map("\(.key):\(.value[0].config_path)")|join(" ") else "" end)')
  if [ ! -z "${current_tg_deps}" ]; then
    for dependency in $(echo ${current_tg_deps}); do
      dependency_key=${dependency%%:*}
      dependency_location=${dependency##*:}
      dependency_global_location=$(ruby -e "require 'pathname'; puts(Pathname.new(ARGV[0]).join(ARGV[1]).cleanpath)" "${current_tg_config}" "${dependency_location}")
      grep -F "dependency.${dependency_key}.outputs" $tgfile | awk -v conf="$current_tg_config" -v dp="$dependency_global_location" '{print conf " depends on: " dp " " $0}'
    done
  fi
done

Show all terraform resources in a terraform state:
for tfstate_item in $(terraform state list); do terraform state show "$tfstate_item"; done
# within a shell command (useful with aws-vault exec):
sh -c "for tfstate_item in \$(terraform state list); do terraform state show \"\$tfstate_item\"; done"
# using terragrunt:
sh -c "for tfstate_item in \$(terragrunt state list); do terragrunt state show \"\$tfstate_item\"; done"

Diff manifests example (single file):
cat service.yaml | kubectl diff -f -

Diff manifests example (whole directory, recursively):
kubectl diff -R -f example/manifests

Patch resource example (argocd app):
tee patch-file.yaml > /dev/null <<EOF
operation:
initiatedBy:
username: [email protected]
sync:
syncStrategy:
EOF
kubectl patch -n argocd app example-app --patch-file patch-file.yaml --type merge

Show all namespaced k8s resources (but no more than 5 per kind) for a given namespace (example-ns):
while IFS= read -r resourcekind; do
  OUTPUT=$(kubectl get $resourcekind --ignore-not-found --show-kind -n example-ns | head -n 5)
  if [[ $(echo $OUTPUT | tr -d '\n' | wc -c) -ne 0 ]]; then
    echo "=================== ${resourcekind}:"
    echo $OUTPUT
    printf "\n"
  fi
done < <(kubectl api-resources --verbs=list --namespaced -o name)

Get an EKS cluster name from the current-context:
kubectl config current-context | awk -F'/' '{print $NF}'

Run a one-off pod:
kubectl run -n default -it --rm ubuntu --image=ubuntu -- bash

Copy a file from a pod:
kubectl cp default/ubuntu:/file.txt ~/Downloads/file.txt

Generate a CSV file to inspect External Secrets and their sources (AWS secrets):
(
echo "Namespace,Kind,Name,AWS secret refs"
kubectl get externalsecrets -A -o go-template='{{- range $element := .items -}}
{{printf "%s,%s,%s," $element.metadata.namespace "ExternalSecret" $element.metadata.name }}
{{- range $dataItem := $element.spec.data -}}
{{printf "%s;" $dataItem.remoteRef.key }}
{{- end -}}
{{- range $dataFromItem := $element.spec.dataFrom -}}
{{printf "%s;" $dataFromItem.extract.key }}
{{- end -}}
{{ printf "\n" }}
{{- end -}}' | sort
) | perl -pe 's/([^;]+)(;\1)+/$1/g' > external-secrets.csv

Find a string in secrets/pod-specs/configmaps:
kubectl get secrets -A -o json | jq -r '.items | [.[] | .data | map_values(@base64d) ]' | grep -F 'something'
kubectl get pods -A -o json | grep -F 'something'
kubectl get configmaps -A -o json | grep -F 'something'

View kustomize-rendered manifests:
kubectl kustomize kustomize/example/manifests | less

Diff with kustomize:
kubectl diff -k kustomize/example/manifests

Apply kustomize-rendered manifests:
kubectl kustomize kustomize/example/manifests | kubectl apply -f -

Show a rendered template:
helm template helm/example/app -f helm/example/app/values.env1.yaml --set image=$IMAGE_REPOSITORY:$IMAGE_TAG
# to also render CRDs if they exist:
helm template helm/example/app -f helm/example/app/values.env1.yaml --set image=$IMAGE_REPOSITORY:$IMAGE_TAG --include-crds

Pass a rendered template to kubectl diff:
helm template helm/example/app \
  --namespace example-ns \
  -f helm/example/app/values.env1.yaml \
  --set image=$IMAGE_REPOSITORY:$IMAGE_TAG | kubectl diff -n example-ns -f -

Upgrade chart example (dry run):
helm upgrade example-app --dry-run=server helm/example/app \
  --install --wait --atomic --debug \
  --description='Example manual upgrade' \
  --timeout 12m0s \
  --namespace=example-ns \
  -f helm/example/app/values.yaml \
  -f helm/example/app/values.env1.yaml \
  --set image.repository=$IMAGE_REPOSITORY \
  --set image.tag=$IMAGE_TAG

Find all versions of a particular helm chart (for example argo-cd/argocd-image-updater):
helm search repo argo-cd/argocd-image-updater --versions

List versions of a chart stored in an OCI-based registry:
# for the chart: oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller
skopeo list-tags docker://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller

Inspect an image tag in a registry:
skopeo --override-arch=arm64 --override-variant=linux --override-os=linux inspect docker://ghcr.io/actions/actions-runner:2.328.0

Allow a binary to be executed:
xattr -d com.apple.quarantine binary-file

Disable annoying back-forward navigation in Google Chrome by a touchpad gesture
(from https://apple.stackexchange.com/a/80163):
defaults write com.google.Chrome AppleEnableSwipeNavigateWithScrolls -bool FALSE

Managing /etc/hosts
sudo cp /etc/hosts ~/hosts_backup
sudo vim /etc/hosts
dscacheutil -flushcache

A Spotlight fix: disable/enable indexing:
sudo mdutil -Eai off
sudo mdutil -Eai on
mdutil -as

Make Dock auto-hide faster: on
defaults write com.apple.dock autohide-delay -int 0
defaults write com.apple.dock autohide-time-modifier -float 0.4
killall Dock

Make Dock auto-hide faster: off
defaults delete com.apple.dock autohide-delay
defaults delete com.apple.dock autohide-time-modifier
killall Dock

Disk usage in the $HOME directory (suppressing "Operation not permitted..." messages)
du -hd1 2>/dev/null | sort -h

Launch a kafka consumer (inside a docker container) working with remote kafka brokers
Launch a docker container with all needed kafka dependencies:
docker run --rm --name kafka-consumer -i -t --entrypoint=/bin/sh apache/kafka:3.7.0 -i

Launch another terminal and copy the brokers' certificate into the just-launched container:
docker cp certificate.pem kafka-consumer:/ca.pem

Go back to the previous terminal (the shell session inside the kafka-consumer container) and import the copied certificate into a truststore using keytool:
# in container:
keytool -import -file ca.pem -alias CA -keystore ${HOME}/client.truststore.jks -noprompt -storepass 123456

Then launch the consumer:
# in container:
/opt/kafka/bin/kafka-console-consumer.sh --formatter "org.apache.kafka.connect.mirror.formatters.OffsetSyncFormatter" \
  --bootstrap-server kafka-bootstrap-server.example.com:12345 \
  --from-beginning --topic mm2-offset-syncs.destination.internal \
  --consumer-property="sasl.mechanism=SCRAM-SHA-256" \
  --consumer-property="security.protocol=SASL_SSL" \
  --consumer-property="sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=\"readonly-user\" password=\"<KAFKA_TOKEN>\";" \
  --consumer-property="ssl.truststore.location=${HOME}/client.truststore.jks" \
  --consumer-property="ssl.truststore.password=123456" | awk -F', ' '{print $1}' >> ${HOME}/result.txt

Another example:
# in container:
/opt/kafka/bin/kafka-console-consumer.sh --formatter "org.apache.kafka.connect.mirror.formatters.CheckpointFormatter" \
  --bootstrap-server kafka-bootstrap-server.example.com:12345 \
  --from-beginning --topic source.checkpoints.internal \
  --consumer-property="sasl.mechanism=SCRAM-SHA-256" \
  --consumer-property="security.protocol=SASL_SSL" \
  --consumer-property="ssl.truststore.type=jks" \
  --consumer-property="ssl.endpoint.identification.algorithm=" \
  --consumer-property="sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=\"readonly-user\" password=\"<KAFKA_TOKEN>\";" \
  --consumer-property="ssl.truststore.location=${HOME}/client.truststore.jks" \
  --consumer-property="ssl.truststore.password=123456"

Encrypt a file with a password:
gpg -c file.zip
# type password

Decrypt the encrypted file back:
gpg -d file.zip.gpg > decrypted.zip

Get the fingerprint of an SSH key file:
ssh-keygen -lf ssh-key-file
ssh-keygen -E md5 -lf ssh-key-file

Get the MD5 checksum of a file/pipe:
md5 file.txt
grep -F "example" file.txt | md5

Encode a string to base64:
echo -n "simple_string123" | base64 # the '-n' flag drops the trailing newline, because a newline affects the output

Decode from base64:
echo "SGVsbG8sIFdvcmxkIQo=" | base64 --decode # a trailing newline is ignored here, so the '-n' flag is not required

Check/inspect dns records:
dig example.com CNAME
dig example.com
nslookup example.com

Check if a TCP port is open at an IP/domain:
nc -z -v example.com 80
nc -z -v 1.2.3.4 8080

Traceroute examples:
traceroute 1.1.1.1
traceroute 8.8.8.8

Check a URL:
curl -s http://example.com
curl -v http://example.com

Download a file from a URL silently:
curl -s -S -L -o \
  /path/to/archive.zip \
  https://github.com/<REPO-OWNER>/<REPO>/releases/download/v<VERSION>/<ARCHIVE>.zip

Get info about the user of an api token:
curl -L \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer ${PAT_TOKEN}" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  https://api.github.com/user

Get the URL of the current Github Actions job (works for parallel jobs):
# The job using this snippet in GitHub Actions must have the actions:read permission
curl --get -Ss \
  -H "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
  -H "Accept: application/vnd.github+json" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  "${GITHUB_API_URL}/repos/${GITHUB_REPOSITORY}/actions/runs/${GITHUB_RUN_ID}/jobs" | \
  jq -r -M '.jobs | .[] | select(.name | contains("job-name")) | select(.name | contains("(job-first-parallel-key")) | .html_url'

Search in all non-archived repos in the given org and list only the matched repos' names:
REPO_OWNER=example-org
TO_SEARCH="key: value"
SEARCH_STRING="org:${REPO_OWNER}"
SEARCH_STRING="${SEARCH_STRING} \"${TO_SEARCH}\""
SEARCH_STRING="${SEARCH_STRING} NOT is:archived"
SEARCH_QUERY="$(echo -n $SEARCH_STRING | sed 's| |SPACE_SYMBOL|g' | sed 's|"|QUOTE_SYMBOL|g' | jq -sRr @uri | sed 's|SPACE_SYMBOL|+|g' | sed 's|QUOTE_SYMBOL|"|g')"
curl -L -s \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer ${PAT_TOKEN}" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  "https://api.github.com/search/code?q=$SEARCH_QUERY&per_page=100" | \
  jq -r '.items[] | .repository.name' | sort | uniq
# to filter by path using a regex (in .example/hidden/dir):
# curl ... | jq -r '.items[] | select(.path|test("^\\.example\\/hidden\\/dir\\/")) | .repository.name' | sort | uniq

Workflow run ids (loop of paginated requests) for the 3 most recent months:
START_DATE=$(date -v-3m +%Y-%m-%d)
END_DATE=$(date +%Y-%m-%d)
WORKFLOW_REPO_OWNER=example-org
WORKFLOW_REPO_NAME=my-awesome-repo
WORKFLOW_RUN_DATA_PAGE=1
WORKFLOW_RUN_COUNTS=1
while [ $WORKFLOW_RUN_COUNTS -gt 0 ]; do
  WORKFLOW_RUN_IDS_PAGE=$(curl -L -s \
    -H "Authorization: Bearer ${PAT_TOKEN}" \
    -H "Accept: application/vnd.github.v3+json" \
    "https://api.github.com/repos/$WORKFLOW_REPO_OWNER/$WORKFLOW_REPO_NAME/actions/runs?created=$START_DATE..$END_DATE&per_page=50&page=$WORKFLOW_RUN_DATA_PAGE" | jq -r '[.workflow_runs[] | .id]')
  # printing workflow run ids:
  echo $WORKFLOW_RUN_IDS_PAGE | jq -r '.[]'
  WORKFLOW_RUN_COUNTS=$(echo $WORKFLOW_RUN_IDS_PAGE | jq -r '. | length')
  WORKFLOW_RUN_DATA_PAGE=$(( $WORKFLOW_RUN_DATA_PAGE + 1 ))
  sleep 2
done

List all flattened steps of a given workflow run:
WORKFLOW_REPO_OWNER=example-org
WORKFLOW_REPO_NAME=my-awesome-repo
WORKFLOW_RUN_ID=12345678910
curl -L -s \
  -H "Authorization: Bearer ${PAT_TOKEN}" \
  -H "Accept: application/vnd.github.v3+json" \
  "https://api.github.com/repos/$WORKFLOW_REPO_OWNER/$WORKFLOW_REPO_NAME/actions/runs/$WORKFLOW_RUN_ID/jobs" \
  | jq -r '[.jobs[] | {workflow_name:.workflow_name,name:.name,step:.steps[]}]'

List unarchived repos of a given team in a Github org:
ORG_NAME=example-org
TEAM_NAME=devops-team
curl -L \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer ${PAT_TOKEN}" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  -s \
  "https://api.github.com/orgs/$ORG_NAME/teams/$TEAM_NAME/repos" | jq -r '.[] | select(.archived == false) | .name'

List all unarchived repos in a given Github org (loop of paginated requests):
ORG_NAME=example-org
REPOS_PAGE=1
REPOS_COUNT_IN_PAGE=1
while [ $REPOS_COUNT_IN_PAGE -gt 0 ]; do
  REPOS_IN_PAGE=$(curl -L \
    -H "Accept: application/vnd.github+json" \
    -H "Authorization: Bearer ${PAT_TOKEN}" \
    -H "X-GitHub-Api-Version: 2022-11-28" \
    -s \
    "https://api.github.com/orgs/$ORG_NAME/repos?sort=pushed&direction=desc&per_page=100&page=$REPOS_PAGE" | jq -r '. | map({name, archived})')
  REPOS_COUNT_IN_PAGE=$(echo $REPOS_IN_PAGE | jq -r '. | length')
  REPOS_PAGE=$(( $REPOS_PAGE + 1 ))
  echo $REPOS_IN_PAGE | jq -r '.[] | select(.archived == false) | .name'
  sleep 1
done

Remove all of a Github-hosted repo's artifacts which were updated before a given date:
ORG_NAME=example-org
REPO_NAME=example-repo
ARTIFACT_NAME=artifact
ARTIFACTS_PAGE=1
ARTIFACTS_COUNT_IN_PAGE=1
ARTIFACT_IDS_TO_DELETE=""
TODAY_ISO8601=$(jq fromdateiso8601 <<<"\"$(date +%Y-%m-%d)T00:00:00Z\"")
while [ $ARTIFACTS_COUNT_IN_PAGE -gt 0 ]; do
  ARTIFACTS_IN_PAGE=$(curl -s -L \
    -H "Accept: application/vnd.github+json" \
    -H "Authorization: Bearer ${PAT_TOKEN}" \
    -H "X-GitHub-Api-Version: 2022-11-28" \
    "https://api.github.com/repos/$ORG_NAME/$REPO_NAME/actions/artifacts?per_page=100&name=$ARTIFACT_NAME&page=$ARTIFACTS_PAGE" | jq -r '.artifacts')
  ARTIFACTS_COUNT_IN_PAGE=$(echo $ARTIFACTS_IN_PAGE | jq -r '. | length')
  ARTIFACTS_PAGE=$(( $ARTIFACTS_PAGE + 1 ))
  ARTIFACT_IDS_TO_DELETE=$(cat <(echo "$ARTIFACT_IDS_TO_DELETE") <(echo $ARTIFACTS_IN_PAGE | jq -r ".[] | select(.expired == false) | select ( .updated_at | fromdateiso8601 < $TODAY_ISO8601) | .id"))
  sleep 1
done
for artifact_id in $(echo $ARTIFACT_IDS_TO_DELETE); do
  echo "Removing artifact id: $artifact_id"
  curl -s -L \
    -X DELETE \
    -H "Accept: application/vnd.github+json" \
    -H "Authorization: Bearer ${PAT_TOKEN}" \
    -H "X-GitHub-Api-Version: 2022-11-28" \
    "https://api.github.com/repos/$ORG_NAME/$REPO_NAME/actions/artifacts/$artifact_id"
  sleep .5
done

List teams which have access to a given repo:
ORG_NAME=example-org
REPO_NAME=my-awesome-repo
curl -L -s \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer ${PAT_TOKEN}" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  "https://api.github.com/repos/$ORG_NAME/$REPO_NAME/teams"

Rotate/update aws-vault subshell's creds from its own:
# assuming it's in an 'aws-vault exec' subshell (the AWS_VAULT env variable is set):
eval $(AWS_VAULT= aws-vault export --format=export-env $AWS_VAULT)

List profiles:
aws-vault list

Copy a sign-in URL for the AWS console to the clipboard:
aws-vault login -s <CHOSEN_AWS_VAULT_PROFILE> | pbcopy

Example of running an aws-cli command with aws-vault:
aws-vault exec <CHOSEN_AWS_VAULT_PROFILE> -- aws sts get-caller-identity

Download k8s config from an EKS cluster:
aws eks update-kubeconfig --name example-cluster --kubeconfig ~/path/to/store/kubeconfig

Login to an AWS ECR registry:
AWS_REGION=us-east-1
REGISTRY_HOST=123456789012.dkr.ecr.$AWS_REGION.amazonaws.com
# DOCKER_HOST can be unspecified, because it's client-side
aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $REGISTRY_HOST
# Using aws-vault:
aws-vault exec <CHOSEN_AWS_VAULT_PROFILE> -- aws ecr get-login-password --region $AWS_REGION \
  | docker login --username AWS --password-stdin $REGISTRY_HOST

Check logs of a service managed by systemd and follow:
journalctl -f -u myawesomeservicename.service

Check the 100 most recent log lines of a service managed by systemd:
journalctl -u myawesomeservicename.service -l -n 100 --no-pager

Start and stop a systemd-managed service:
systemctl start myawesomeservicename
systemctl stop myawesomeservicename

Restart (stop & start) a systemd-managed service:
systemctl restart myawesomeservicename

Enable/disable a systemd-managed service, so it will (or will not) be launched at boot:
systemctl enable myawesomeservicename
systemctl disable myawesomeservicename

Status of a systemd-managed service (shows running or not, enabled or disabled, and recent logs):
systemctl status myawesomeservicename
# without the pager:
systemctl status myawesomeservicename -l --no-pager

Get the current architecture (arm64 or amd64, etc):
dpkg --print-architecture

Show the uv cache directory:
uv cache dir

Clear the uv cache:
uv cache clean

Clean the Homebrew cache:
brew cleanup

Update formulae (keep Homebrew up to date):
brew update

Upgrade a package:
brew upgrade --dry-run PACKAGE_NAME # to check what will happen before the upgrade
brew upgrade PACKAGE_NAME

List installed formulae that are not dependencies of another installed formula or cask:
brew leaves

List all files in a given package:
brew ls PACKAGE_NAME
# list all of a package's executables - they are usually located in the bin/ sub-directory:
brew ls PACKAGE_NAME | grep -F '/bin/'

List dependencies of a given package (upstream dependencies):
brew deps --installed PACKAGE_NAME

List downstream dependencies of a given package (dependents):
brew uses --recursive --installed PACKAGE_NAME

List all currently tapped (third-party) repositories:
brew tap

Generate a random string:
ruby -e "puts [*('a'..'z'),*('0'..'9')].shuffle[0,8].join" # 8 chars
ruby -e "puts [*('a'..'z'),*('0'..'9')].shuffle[0,16].join" # 16 chars

Load a standard ruby module (yaml):
ruby -r yaml -e "f='path/to/file.yaml'; parsed_yaml=YAML.load_file(f); pp parsed_yaml"

List all service integration endpoints available in a given project:
avn --auth-token $AVN_AUTH_TOKEN service integration-endpoint-list --project $PROJECT_NAME
# Raw json output (for every endpoint):
avn --auth-token $AVN_AUTH_TOKEN service integration-endpoint-list --project $PROJECT_NAME --json

List instances:
limactl list

Launch a docker instance:
limactl start --name=docker-runtime --tty=false --disk=64 template://docker
# VZ+Rosetta:
limactl start --vm-type=vz --rosetta --name=docker-runtime --tty=false --disk=64 template://docker

Use a launched instance:
DOCKER_HOST=unix://$HOME/.lima/docker-runtime/sock/docker.sock docker ps

Stop an instance:
limactl stop docker-runtime

Delete an instance:
limactl rm docker-runtime

Render the effective (merged) docker-compose config:
docker-compose -f docker-compose.yml -f docker-compose-01.yml config

Validate digger.yml and show generated projects:
dgctl validate