running:
bash create-vod-hls.sh beach.mkv
will produce:
beach/
|- playlist.m3u8
|- 360p.m3u8
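To sanity-check the result you can play the master playlist directly, or serve the folder over HTTP and open it in an HLS-capable player; ffplay, the port, and the paths here are illustrative assumptions, not part of the original script:
ffplay beach/playlist.m3u8
# or serve the folder and open http://localhost:8000/beach/playlist.m3u8 in an HLS player
python3 -m http.server 8000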
Kong, Traefik, Caddy, Linkerd, Fabio, Vulcand, and Netflix Zuul seem to be the most common microservice proxy/gateway solutions. Kubernetes Ingress is often just plain Nginx, which makes it hard to separate its popularity from Nginx's other uses.
This is just a snapshot of this link as of March 2, 2019.
Originally, I had included some other solution
version: '3'
services:
  # FRONT
  chronograf:
    # Full tag list: https://hub.docker.com/r/library/chronograf/tags/
    image: chronograf
    deploy:
      replicas: 1
      placement:
        constraints:
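Since the file uses the deploy/replicas/placement keys, it targets swarm mode rather than a plain docker-compose up; a hedged deployment sketch (the stack name tick is an assumption):
docker swarm init                               # only if the node is not already a swarm manager
docker stack deploy -c docker-compose.yml tick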
There are so many ways to send logs to an ELK stack... logspout, Filebeat, Journalbeat, etc.
But Docker has a GELF log driver and Logstash a GELF input. So here we are.
Here is a docker-compose to test a full ELK stack with a container sending logs via GELF.
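The logstash container below mounts /tmp/logstash.conf; here is a minimal sketch of that file, assuming a recent Logstash whose gelf input listens on its default UDP port 12201 and forwards everything to the linked Elasticsearch:
cat > /tmp/logstash.conf <<'EOF'
input {
  gelf {
    port => 12201
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
EOF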
docker run -d --name es elasticsearch
docker run -d --name logstash --link es:elasticsearch -v /tmp/logstash.conf:/config-dir/logstash.conf logstash logstash -f /config-dir/logstash.conf
docker run --link es:elasticsearch -d kibana
LOGSTASH_ADDRESS=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' logstash)
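With the Logstash address in hand, any container can ship its stdout/stderr through Docker's gelf log driver; the UDP port 12201 here is an assumption and has to match whatever port the gelf input actually listens on:
docker run --rm \
  --log-driver gelf \
  --log-opt gelf-address=udp://$LOGSTASH_ADDRESS:12201 \
  alpine echo "hello gelf"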
# credit: http://haacked.com/archive/2014/07/28/github-flow-aliases/
[user]
  email = [email protected]
  name = Salim KAYABASI
[core]
  autocrlf = input
  # editor = 'C:/Program Files/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin
  editor = /Applications/Sublime\\ Text.app/Contents/SharedSupport/bin/subl -n -w
  eol = lf
[color]
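After editing ~/.gitconfig you can double-check what git actually resolved; these are standard git commands, not part of the original dotfiles:
git config --global --list          # dump everything git parsed from the global config
git config --get core.editor        # confirm the editor line survived the backslash escaping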
For this configuration you can use any web server you like; I decided to use nginx because I work with it the most.
Generally, a properly configured nginx can handle up to 400K-500K requests per second (clustered); the most I have seen is 50K-80K requests per second (non-clustered) at around 30% CPU load. Of course, that was on 2 x Intel Xeon with HyperThreading enabled, but it can work without problems on slower machines.
You must understand that this config is used in a testing environment, not in production, so you will need to find the best way to implement most of these features for your own servers.
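To get comparable throughput numbers on your own hardware, a load generator such as wrk is one option; the URL, thread count, and connection count below are assumptions, not part of the original setup:
wrk -t8 -c400 -d30s http://localhost/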
As configured in my dotfiles.
start new:
tmux
start new with session name: