@ttyler01
Created December 24, 2020 21:13
Basic setups I'm using to monitor Speedify bonded DSL at my house
# [email protected]
# Compose file I'm using to set up Prometheus and Grafana to visualize Speedify bonded DSL at my house
# Also using it to visualize SmartThings stuff
# Not using InfluxDB for anything at the moment

networks:
  dockernet:
    driver: bridge

volumes:
  prometheus_data: {}

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    restart: unless-stopped
    ports:
      - "9090:9090"
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--storage.tsdb.retention=720h'
      - '--web.enable-lifecycle'
    volumes:
      - './prometheus.yml:/etc/prometheus/prometheus.yml'
      - prometheus_data:/prometheus
    networks:
      - dockernet

  influxdb:
    image: hypriot/rpi-influxdb:latest
    container_name: influxdb
    restart: unless-stopped
    ports:
      - "8083:8083"
      - "8086:8086"
      - "8090:8090"
    # env_file:
    #   - 'env.influxdb'
    volumes:
      # Data persistence; create the host directory first:
      #   sudo mkdir -p /var/influxdb
      - /var/influxdb:/data

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    restart: unless-stopped
    ports:
      - "3000:3000"
    # env_file:
    #   - 'env.grafana'
    user: "0"
    links:
      - influxdb
      - prometheus
    volumes:
      # Data persistence; create the host directory first:
      #   sudo mkdir -p /var/grafana/data; chown 472:472 /var/grafana/data
      - /var/grafana/data:/var/lib/grafana
    networks:
      - dockernet
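The nodeexporter scrape job in the Prometheus config points at node_exporter instances running on other machines. If you also wanted metrics from the Docker host itself, a service like the following could be added to the same compose file — this is a sketch, not part of the original stack; the service name, pid/mount settings, and image tag are assumptions:

```yaml
# Hypothetical addition: run node_exporter locally so Prometheus can scrape
# this host alongside the remote ones (add its address to the nodeexporter job).
  nodeexporter:
    image: prom/node-exporter:latest
    container_name: nodeexporter
    restart: unless-stopped
    ports:
      - "9100:9100"   # node_exporter's default metrics port
    networks:
      - dockernet
```

Since it joins dockernet, Prometheus could reach it by container name as `nodeexporter:9100`.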
# prometheus.yml (mounted into the container above) -- my global config
global:
  scrape_interval: 15s     # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # By default, evaluate rules every 15 seconds.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  # external_labels:
  #   monitor: 'codelab-monitor'

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first.rules"
  # - "second.rules"

# The scrape configurations; the first job is Prometheus scraping itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'nodeexporter'
    static_configs:
      - targets: ['xxx.xxx.xxx.90:9100', 'xxx.xxx.xxx.199:9100', 'xxx.xxx.xxx.192:9100']

  - job_name: 'smartthings'
    static_configs:
      - targets: ['xxx.xxx.xxx.90:9499']

  # - job_name: 'RPi2'
  #   static_configs:
  #     - targets: ['xxx.xxx.xxx.18:9100']