# Source: https://gist.github.com/2f6ef41745fad5bf0d7c023c1261d77c
###########################################################################
# How To Shift Left Infrastructure Management Using Crossplane Composites #
# https://youtu.be/AtbS1u2j7po                                            #
###########################################################################
# Referenced videos:
# - Crossplane - GitOps-based Infrastructure as Code through Kubernetes API: https://youtu.be/n8KjVmuHm7A
# - How to apply GitOps to everything - combining Argo CD and Crossplane: https://youtu.be/yrj4lmScKHQ
Just documenting docs, articles, and discussion related to gRPC and load balancing.
https://github.com/grpc/grpc/blob/master/doc/load-balancing.md
gRPC seems to prefer thin client-side load balancing: the client gets a list of backend addresses and a load-balancing policy from a "load balancer" and then balances requests across those backends itself. However, traditional load-balancing approaches can still be useful in cloud deployments.
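A minimal sketch of this in Python, assuming the grpcio package (the service name and port are hypothetical):

import grpc

# "dns:///" makes the client resolve every address behind the name, and the
# round_robin policy spreads RPCs across the resolved backends.
channel = grpc.insecure_channel(
    "dns:///my-grpc-service.internal:50051",
    options=[("grpc.lb_policy_name", "round_robin")],
)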
https://groups.google.com/forum/#!topic/grpc-io/8s7UHY_Q1po
gRPC "works" in AWS. That is, you can run gRPC services on EC2 nodes and have them connect to other nodes, and everything is fine. If you are using AWS for easy access to hardware then all is fine. What doesn't work is ELB (aka CLB), and ALBs. Neither of these support HTTP/2 (h2c) in a way that gRPC needs.
#!/usr/bin/env bash
image_archive="image-archive.tar"
image_metadata="docker-image-metadata.tar.gz"
function cache_images() {
  # Collect the layer IDs of all tagged local images, skipping <none> rows and <missing> layers.
  images_to_cache=$(docker images | awk '{print $3}' | grep -v '<none>' | tail -n +2 | while read -r line; do docker history -q "$line" | grep -v '<missing>'; done | uniq)
  if [ -n "$images_to_cache" ]; then
    printf 'Saving the following images:\n%s\n\n' "$images_to_cache"
    # Assumed completion (the gist is truncated here): save the collected layers
    # into the archive declared above; $images_to_cache is deliberately unquoted
    # so each ID becomes its own argument.
    docker save $images_to_cache -o "$image_archive"
  fi
}
from __future__ import absolute_import, division, print_function, unicode_literals

from multiprocessing.dummy import Pool as ThreadPool
from multiprocessing import Lock
from threading import get_ident


class SingletonType(type):
    def __new__(mcs, name, bases, attrs):
        # Assume the target class is created (i.e. this method is called) in the main thread.
        cls = super(SingletonType, mcs).__new__(mcs, name, bases, attrs)
        cls._lock = Lock()   # assumed completion: guards first instantiation across threads
        cls._instance = None
        return cls
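
    # Assumed continuation (the gist is truncated above), not the gist's actual
    # code: a metaclass like this typically overrides __call__ with double-checked
    # locking so concurrent ThreadPool workers all receive the same instance.
    def __call__(cls, *args, **kwargs):
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = super(SingletonType, cls).__call__(*args, **kwargs)
        return cls._instance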
#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;
error_log  /tmp/nginx_debug_error.log  debug;

#pid  logs/nginx.pid;
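# Assumed continuation, not from the original gist: note that the `debug` level
# above only produces output when nginx was built with --with-debug (check the
# output of `nginx -V` for that flag).
events {
    worker_connections  1024;
}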
## Install AWS CLI Tools

- Install the AWS CLI tools. You can also use the EC2 API tools if you are more comfortable with them, but this write-up uses the AWS CLI.
- Create a user via Amazon IAM and download its access key ID and secret access key; you will need them to query AWS from the CLI.
- Using Terminal, `cd` into the `.aws` directory:
- `cd ~/.aws`
- Edit or create a new file named `config` and paste the following contents inside:
- `[default]`
- `aws_access_key_id = ACCESS_ID`
- `aws_secret_access_key = SECRET_ID`
- `output = json OR text OR table`
- `region = PREFERRED_AWS_REGION`
- Save the file as `config`.
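To confirm the credentials are picked up, here is a minimal check in Python; it assumes the boto3 package is installed, which reads the same ~/.aws/config file:

import boto3

# If the keys and region above work, this prints the region names visible to the account.
session = boto3.Session()
for region in session.client("ec2").describe_regions()["Regions"]:
    print(region["RegionName"])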
require 'dotenv'
Dotenv.load

listen ENV.fetch('UNICORN_PORT', 5000), :backlog => ENV.fetch('UNICORN_BACKLOG', 200).to_i
worker_processes ENV.fetch('UNICORN_CONCURRENCY', 3).to_i
timeout ENV.fetch('UNICORN_TIMEOUT', 15).to_i
preload_app true

if ENV.include?('UNICORN_LOG')
  stderr_path ENV.fetch('UNICORN_LOG')
end
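Assuming this file is loaded as the Unicorn config (the path `config/unicorn.rb` here is hypothetical), each setting above can be tuned at launch time through environment variables, for example: `UNICORN_PORT=8080 UNICORN_CONCURRENCY=5 bundle exec unicorn -c config/unicorn.rb`.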
This howto describes installing Entware on routers running the open-source Tomato firmware. Requirements:
- USB stick, 1 GB or more in size
- USB-capable router running TomatoUSB
Moved to Git repository: https://github.com/denji/awesome-http-benchmark

Listed in alphabetical order (no preference implied):

- ab – slow and single-threaded, written in `C`
- apib – most of the features of ApacheBench (`ab`), also designed as a more modern replacement, written in `C`
- autocannon – fast HTTP/1.1 benchmarking tool written in Node.js
- baloo – expressive end-to-end HTTP API testing made easy, written in Go (`golang`)