@ruanbekker
Last active June 13, 2025 04:09
Docker Container Logging using Promtail

Inspired By: grafana/loki#333

  • docker-compose.yml
version: "3"

networks:
  loki:

services:
  loki:
    image: grafana/loki:1.4.1
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/local-config.yaml
    networks:
      - loki

  promtail:
    image: grafana/promtail:1.4.1
    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers
      - /home/ubuntu/docker-config.yml:/etc/promtail/docker-config.yml
    command: -config.file=/etc/promtail/docker-config.yml
    networks:
      - loki

  grafana:
    image: grafana/grafana:master
    ports:
      - "3000:3000"
    networks:
      - loki
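
Side note: the tag that the docker run example below sets with --log-opt can also be attached per service in the compose file itself via the logging options, for example (a sketch, not part of the original setup):

  nginxapp:
    image: nginx
    ports:
      - "8080:80"
    logging:
      driver: json-file
      options:
        tag: "{{.ImageName}}|{{.Name}}|{{.ImageFullID}}|{{.FullID}}"
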
  • docker-config.yml
server:
  http_listen_address: 0.0.0.0
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:

- job_name: system
  static_configs:
  - targets:
      - localhost
    labels:
      job: varlogs
      __path__: /var/log/*log

- job_name: containers
  entry_parser: raw

  static_configs:
  - targets:
      - localhost
    labels:
      job: containerlogs
      __path__: /var/lib/docker/containers/*/*log

  # --log-opt tag="{{.ImageName}}|{{.Name}}|{{.ImageFullID}}|{{.FullID}}"
  pipeline_stages:

  - json:
      expressions:
        stream: stream
        attrs: attrs
        tag: attrs.tag

  - regex:
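      # the unescaped "." between each capture group matches the literal "|" separator in the tag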
      expression: (?P<image_name>(?:[^|]*[^|])).(?P<container_name>(?:[^|]*[^|])).(?P<image_id>(?:[^|]*[^|])).(?P<container_id>(?:[^|]*[^|]))
      source: "tag"

  - labels:
      tag:
      stream:
      image_name:
      container_name:
      image_id:
      container_id:
$ docker-compose up -d
$ docker run -itd --name nginxapp  -p 8080:80 --log-driver json-file --log-opt tag="{{.ImageName}}|{{.Name}}|{{.ImageFullID}}|{{.FullID}}" nginx
$ curl http://localhost:8080/?foo=bar
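
Once logs are flowing, they can be queried from Grafana's Explore view against the Loki data source with a selector like the following (the label values assume the nginxapp container started above):

{job="containerlogs", container_name="nginxapp"}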

Screenshot:

Setup with fewer labels:

  • docker-config.yml
server:
  http_listen_address: 0.0.0.0
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
- job_name: containers
  entry_parser: raw

  static_configs:
  - targets:
      - localhost
    labels:
      job: containerlogs
      cluster: multipass-cluster
      __path__: /var/lib/docker/containers/*/*log

  # --log-opt tag="{{.Name}}"
  pipeline_stages:

  - json:
      expressions:
        stream: stream
        attrs: attrs
        tag: attrs.tag

  - regex:
      expression: (?P<container_name>(?:[^|]*[^|]))
      source: "tag"

  - labels:
      #tag:
      stream:
      container_name:
$ docker run -itd --name nginxapp3  -p 8080:80 --log-driver json-file --log-opt tag="{{.Name}}" nginx
$ curl -XGET -A "Mozilla" --referer http://bot.com/scrape.html http://localhost:8080/?foo=barx
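
The extra cluster label can then be used in queries as well, for example (a sketch based on the labels defined in this config):

{cluster="multipass-cluster", container_name="nginxapp3"}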

Screenshot:

@S0b1t commented May 10, 2022

Hi.
Thanks for your code. I appreciate it!

The question is about the multiline stage. I configured Promtail with fewer labels, but Loki does not show all the log lines; it shows only the last row.

  • For example, the actual log is:
warn: Ocelot.Responder.Middleware.ResponderMiddleware[0]
      requestId: GM:02, previousRequestId: no previous request id, message: Error Code: ConnectionToDownstreamServiceError Message: Error connecting to downstream service, exception: System.Net.Http.HttpRequestException: Connection refused (microservice:port)
       ---> System.Net.Sockets.SocketException (111): Connection refused
         at System.Net.Http.ConnectHelper.ConnectAsync(Func`3 callback, DnsEndPoint endPoint, HttpRequestMessage requestMessage, CancellationToken cancellationToken)
         --- End of inner exception stack trace ---
         at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean async, Boolean doRequestAuth, CancellationToken cancellationToken)
         at System.Net.Http.DiagnosticsHandler.SendAsyncCore(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
         at System.Net.Http.HttpClient.SendAsyncCore(HttpRequestMessage request, HttpCompletionOption completionOption, Boolean async, Boolean emitTelemetryStartStop, CancellationToken cancellationToken)
         at Ocelot.Requester.HttpClientHttpRequester.GetResponse(HttpContext httpContext) errors found in ResponderMiddleware. Setting error response for request path:/api/ping, request method: GET
  • Loki shows only:
 at Ocelot.Requester.HttpClientHttpRequester.GetResponse(HttpContext httpContext) errors found in ResponderMiddleware. Setting error response for request path:/api/ping, request method: GET

Could you help me with this?

@chrisbecke commented Oct 7, 2022

There is an approach that keeps the tag but does away with the need for any per-service logging: options: settings.

It does require an extra entry in daemon.json, however: a log-opts labels or log-opts labels-regex setting that allows through the com.docker.* labels Docker attaches to containers in stack, service, or compose deployments.

  "log-driver": "json-file",
  "log-opts": {
    "labels-regex": "^.+",
  }

Once this is done, one can verify that the new labels are being added to the log entries using --details:

docker logs <container_id> --details
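
Note that daemon.json is only re-read when the Docker daemon is restarted, and log options are applied at container creation, so existing containers have to be recreated to pick up the new labels (assuming a systemd host and a compose project):

sudo systemctl restart docker
docker-compose up -d --force-recreate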

Now that these labels are present in the logs, Promtail's config can be:

server:
  http_listen_address: 0.0.0.0
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:

- job_name: containers
  static_configs:
  - targets:
      - localhost
    labels:
      job: containerlogs
      __path__: /var/lib/docker/containers/*/*log

  pipeline_stages:
  - json:
      expressions:
        log: log
        stream: stream
        time: time
        tag: attrs.tag
        compose_project: attrs."com.docker.compose.project"
        compose_service: attrs."com.docker.compose.service"
        stack_name: attrs."com.docker.stack.namespace"
        swarm_service_name: attrs."com.docker.swarm.service.name"
        swarm_task_name: attrs."com.docker.swarm.task.name"
  - regex:
      expression: "^/var/lib/docker/containers/(?P<container_id>.{12}).+/.+-json.log$"
      source: filename
  - timestamp:
      format: RFC3339Nano
      source: time
  - labels:
      stream:
      container_id:
      tag:
      compose_project:
      compose_service:
      stack_name:
      swarm_service_name:
      swarm_task_name:
  - output:
      source: log

This

  • sets the container_id from the path
  • sets the timestamp correctly
  • passes "tag" through so that it can be used for extra stack-defined filtering
  • generates "compose_project" and "compose_service" labels for docker compose deployed containers
  • generates "swarm_service_name", "swarm_task_name" for swarm services
  • generates "stack_name" for containers deployed from a stack.

@S0b1t commented Oct 16, 2022

Thank you.
I'll try it and let you know the result.

@asucrews

@chrisbecke this works great.

Thanks!

@S0b1t commented Jan 6, 2023

> (quoting @chrisbecke's comment above)

Hi!
I tried it, but it doesn't work for me.

I solved it with the multiline stage instead.
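
For reference, a minimal sketch of such a multiline stage (the firstline regex is an assumption and has to match the first line of your own log records; with the json-file driver it likely needs to run after a json/output stage so it sees the raw application line rather than the JSON wrapper):

  - multiline:
      firstline: '^\S'
      max_wait_time: 3s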

@nighlabs commented Feb 8, 2023

> (quoting @chrisbecke's comment above)

Thanks for your post! Freakin' awesome - great to use promtail and avoid the loki plugin. I had some minor tweaks for my setup but everything works like a dream. Hats off for ya!

Only thing I'll call out - you need a Docker CE version newer than early 2021. They added the labels-regex option to the json-file log driver around that time.
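
A quick way to check the daemon version (the labels-regex option landed around Docker 20.10, as far as I remember):

docker version --format '{{.Server.Version}}'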

@Pl8tinium

> (quoting @chrisbecke's comment above)

working, ty!

@chanyshev

entry_parser: raw
it does not work

@skl256 commented Dec 10, 2023

@chrisbecke you can remove entry_parser: raw if you want to use newer versions of Promtail.
@Pl8tinium thank you very much, best solution for me!!!

@ssaid commented Apr 29, 2024

@Pl8tinium dude, you saved my life, no joke! Thanks. I was dealing with the Docker plugin, but it locked up all my containers; thanks to Promtail and your answer I could implement the solution.

@bykof commented Jul 16, 2024

The Docker plugin has a hard dependency on a Loki instance. So if the Loki instance is down, you aren't able to start your containers, which is an absolute no-go. I would rather miss the log messages than miss my running containers...

@myjconan commented Apr 9, 2025

@Pl8tinium Thank you so much! Great solution!

@myjconan commented Apr 9, 2025

@bykof I couldn't agree more. Fatal lesson! It's strongly not recommended to use the Loki plugin in Docker, especially in a prod env.
