- Network bandwidth
- Network latency
- Packet loss rate (one way to measure these is sketched below)
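A rough way to get numbers for all three (a sketch, assuming iperf3 is installed on both machines; the peer hostname is a placeholder):

# Bandwidth: run a server on one node and point a client at it
iperf3 -s                        # on the receiving node
iperf3 -c receiver.example.com   # on the sending node; reports achievable throughput
# Latency and packet loss: ping's summary prints min/avg/max RTT and the loss percentage
ping -c 100 receiver.example.com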
cat /usr/lib/tuned/ceph-tuned/tuned.conf
[main]
summary=ceph_perf
[cpu]
governor=performance
energy_perf_bias=performance
min_perf_pct=100
force_latency=1
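To switch the system onto this profile and confirm it took effect (assuming the profile directory is named ceph-tuned, as in the path above):

tuned-adm profile ceph-tuned
tuned-adm active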
""" | |
Copyright (C) 2018 Interactive Brokers LLC. All rights reserved. This code is subject to the terms | |
and conditions of the IB API Non-Commercial License or the IB API Commercial License, as applicable. | |
""" | |
import sys | |
from ibapi.contract import * | |
## Consumer Throughput: Single consumer thread, no compression
## Consumer Throughput: Three consumer threads, no compression
bin/kafka-consumer-perf-test.sh --topic benchmark-3-3-none \
  --zookeeper kafka-zk-1:2181,kafka-zk-2:2181,kafka-zk-3:2181 \
  --messages 15000000 \
  --threads 1
bin/kafka-topics.sh --zookeeper localhost:2181 --list
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic mytopic
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic mytopic --config retention.ms=1000
... wait a minute ...
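Dropping retention.ms to 1000 makes the broker discard the topic's old segments; once they are gone, remove the override so the normal retention policy applies again (a sketch — on newer Kafka versions the equivalent change goes through kafka-configs.sh instead):

bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic mytopic --delete-config retention.ms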
Flame graphs are a nifty debugging tool for determining where CPU time is being spent. Using the Java Flight Recorder, you can generate them for Java processes without adding significant runtime overhead.
Shivaram Venkataraman and I have found these flame graphs useful for diagnosing coarse-grained performance problems. We started using them at the suggestion of Josh Rosen, who quickly made one for the Spark scheduler when we were talking to him about why the scheduler caps out at a throughput of a few thousand tasks per second. Josh generated a graph similar to the one below, which illustrates that a significant amount of time is spent in serialization (if you click in the top right-hand corner and search for "serialize", you can see that 78.6% of the sampled CPU time was spent in serialization). We used this insight to speed up the scheduler.
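As a rough sketch of how such a recording can be captured (Oracle JDK 8 style flags; the jar name, duration, and pid are placeholders, and newer JDKs no longer need the UnlockCommercialFeatures flag):

# Start the JVM with Flight Recorder enabled and dump a 60-second recording to a file
java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder \
     -XX:StartFlightRecording=duration=60s,filename=recording.jfr \
     -jar my-app.jar
# Or trigger a recording on an already-running JVM that was started with the two flags above
jcmd <pid> JFR.start duration=60s filename=recording.jfr

For Spark, the same flags can be passed to executors through spark.executor.extraJavaOptions (and spark.driver.extraJavaOptions for the driver).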
# A simple 'hello world' application with Python/uwsgi and nginx
# Make directory and files
# The directory structure:
# ./
# |-- app/
# |   |-- hello.py
# |   |-- hello_nginx.conf
if [ -d app ]; then
    rm -rf app
fi
mkdir -p app
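The script would go on to create the two files; a minimal sketch of what app/hello.py could contain (the heredoc body is illustrative, not the original file):

# Write a minimal WSGI application for uwsgi to serve
cat > app/hello.py <<'EOF'
def application(env, start_response):
    # uwsgi calls this callable once per request
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b"Hello World"]
EOF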
USER='admin'
PASS='admin'
CLUSTER='dev'
HOST=$(hostname -f):8080
function start(){
  curl -u "$USER:$PASS" -i -H 'X-Requested-By: ambari' -X PUT -d \
    '{"RequestInfo": {"context": "Start '"$1"' via REST"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' \
    "http://$HOST/api/v1/clusters/$CLUSTER/services/$1"
}
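Stopping a service is the same PUT with the target state set to INSTALLED, which is how Ambari represents a stopped service; the service name in the usage lines is just an example:

function stop(){
  curl -u "$USER:$PASS" -i -H 'X-Requested-By: ambari' -X PUT -d \
    '{"RequestInfo": {"context": "Stop '"$1"' via REST"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' \
    "http://$HOST/api/v1/clusters/$CLUSTER/services/$1"
}
# e.g.
start HDFS
stop HDFS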
$ knife ssh -m "...every host in the network..." "sudo netstat -nutap" -a hostname > meganetstat.txt
$ python
>>> # Each line of meganetstat.txt is "<hostname> <netstat fields...>"; keep the host plus the
>>> # local and foreign address of every TCP entry (Python 2: map() returns a list here)
>>> from collections import Counter as C
>>> HS = "...every host in the network...".split()
>>> ip = lambda s: s.split(":")[0]  # strip the port from "addr:port"
>>> xs = [map(ip, [x[0], x[4], x[5]]) for x in [x.strip().split() for x in open("meganetstat.txt").readlines() if "tcp" in x] if len(x)>=6]
>>> # Count each host's local addresses, then map its most common 10.* address back to the hostname
>>> ipmap = [(h, C([x[1] for x in xs if x[0] == h])) for h in HS]
>>> ipmapx = dict([(sorted([(x,y) for (x,y) in hc[1].items() if x.startswith("10.")], key=lambda t: -t[1])[0][0], hc[0]) for hc in ipmap])
>>> # Count the foreign 10.* addresses, translated back to hostnames: which hosts show up most
>>> # often as the remote end of a connection
>>> sorted(C(map(ipmapx.get, [x[2] for x in xs if x[2].startswith("10.")])).items(), key=lambda t: t[1])
#!/usr/bin/env sh
# Download lists, unpack and filter, write to stdout
curl -s https://www.iblocklist.com/lists.php \
  | sed -n "s/.*value='\(http:.*=bt_.*\)'.*/\1/p" \
  | xargs wget -O - \
  | gunzip \
  | egrep -v '^#'
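Since the script writes to stdout, typical usage is to redirect its output into a blocklist file for whatever client consumes it (the script and output file names here are arbitrary):

sh fetch-blocklists.sh > blocklist.p2p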