Attaching to envoy_consul_1
consul_1 | ==> Starting Consul agent...
consul_1 | ==> Consul agent running!
consul_1 | Version: 'v1.4.4-285-g19361f073-dev (19361f073+CHANGES)'
consul_1 | Node ID: '6d9e3780-cd4a-4520-c4a7-e59a58cb69a5'
consul_1 | Node name: 'a21b7272d6b9'
consul_1 | Datacenter: 'dc1' (Segment: '<all>')
consul_1 | Server: true (Bootstrap: false)
consul_1 | Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, gRPC: 8502, DNS: 8600)
consul_1 | Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
consul_1 | Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
consul_1 |
consul_1 | ==> Log data will now stream in as it occurs:
consul_1 |
consul_1 | 2019/05/03 16:04:58 [DEBUG] agent: Using random ID "6d9e3780-cd4a-4520-c4a7-e59a58cb69a5" as node ID
consul_1 | 2019/05/03 16:04:58 [DEBUG] tlsutil: Update with version 1
consul_1 | 2019/05/03 16:04:58 [DEBUG] tlsutil: OutgoingRPCWrapper with version 1
consul_1 | 2019/05/03 16:04:58 [DEBUG] tlsutil: IncomingRPCConfig with version 1
consul_1 | 2019/05/03 16:04:58 [DEBUG] tlsutil: OutgoingRPCWrapper with version 1
consul_1 | 2019/05/03 16:04:58 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:6d9e3780-cd4a-4520-c4a7-e59a58cb69a5 Address:127.0.0.1:8300}]
consul_1 | 2019/05/03 16:04:58 [INFO] raft: Node at 127.0.0.1:8300 [Follower] entering Follower state (Leader: "")
consul_1 | 2019/05/03 16:04:58 [INFO] serf: EventMemberJoin: a21b7272d6b9.dc1 127.0.0.1
consul_1 | 2019/05/03 16:04:58 [INFO] serf: EventMemberJoin: a21b7272d6b9 127.0.0.1
consul_1 | 2019/05/03 16:04:58 [INFO] consul: Adding LAN server a21b7272d6b9 (Addr: tcp/127.0.0.1:8300) (DC: dc1)
consul_1 | 2019/05/03 16:04:58 [INFO] consul: Handled member-join event for server "a21b7272d6b9.dc1" in area "wan"
consul_1 | 2019/05/03 16:04:58 [WARN] raft: Heartbeat timeout from "" reached, starting election
consul_1 | 2019/05/03 16:04:58 [INFO] raft: Node at 127.0.0.1:8300 [Candidate] entering Candidate state in term 2
consul_1 | 2019/05/03 16:04:58 [DEBUG] raft: Votes needed: 1
consul_1 | 2019/05/03 16:04:58 [DEBUG] raft: Vote granted from 6d9e3780-cd4a-4520-c4a7-e59a58cb69a5 in term 2. Tally: 1
consul_1 | 2019/05/03 16:04:58 [INFO] raft: Election won. Tally: 1
consul_1 | 2019/05/03 16:04:58 [INFO] raft: Node at 127.0.0.1:8300 [Leader] entering Leader state
consul_1 | 2019/05/03 16:04:58 [INFO] consul: cluster leadership acquired
consul_1 | 2019/05/03 16:04:58 [INFO] consul: New leader elected: a21b7272d6b9
consul_1 | 2019/05/03 16:04:58 [INFO] connect: initialized primary datacenter CA with provider "consul"
consul_1 | 2019/05/03 16:04:58 [DEBUG] consul: Skipping self join check for "a21b7272d6b9" since the cluster is too small
consul_1 | 2019/05/03 16:04:58 [INFO] consul: member 'a21b7272d6b9' joined, marking health alive
consul_1 | 2019/05/03 16:04:59 [DEBUG] agent.manager: added local registration for service "s1-sidecar-proxy"
consul_1 | 2019/05/03 16:04:59 [DEBUG] agent.manager: added local registration for service "s2-sidecar-proxy"
consul_1 | 2019/05/03 16:04:59 [INFO] agent: Started DNS server 0.0.0.0:8600 (udp)
consul_1 | 2019/05/03 16:04:59 [INFO] agent: Started DNS server 0.0.0.0:8600 (tcp)
consul_1 | 2019/05/03 16:04:59 [DEBUG] agent/proxy: managed Connect proxy manager started
consul_1 | 2019/05/03 16:04:59 [INFO] agent: Started HTTP server on [::]:8500 (tcp)
consul_1 | 2019/05/03 16:04:59 [INFO] agent: started state syncer
consul_1 | 2019/05/03 16:04:59 [DEBUG] connect: Sign start e5fe8006-0aa9-c4f3-8ee6-af3fd493fc9e
consul_1 | 2019/05/03 16:04:59 [DEBUG] proxycfg state[s1-sidecar-proxy]: update {intentions 0xc00008b700 {false 0s 1} <nil>}
consul_1 | 2019/05/03 16:04:59 [DEBUG] proxycfg state[s1-sidecar-proxy]: update {upstream:service:s2 0xc00008bb80 {false 0s 12} <nil>}
consul_1 | 2019/05/03 16:04:59 [DEBUG] connect: Sign start c52157c0-202a-007a-984a-12f5d80d3241
consul_1 | 2019/05/03 16:04:59 [DEBUG] proxycfg state[s2-sidecar-proxy]: update {roots 0xc0004e1c20 {false 0s 11} <nil>}
consul_1 | 2019/05/03 16:04:59 [DEBUG] proxycfg state[s2-sidecar-proxy]: update {intentions 0xc00008b0c0 {false 0s 1} <nil>}
consul_1 | 2019/05/03 16:04:59 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
consul_1 | 2019/05/03 16:04:59 [INFO] agent: Started gRPC server on [::]:8502 (tcp)
consul_1 | 2019/05/03 16:04:59 [INFO] agent: Synced service "s1"
consul_1 | 2019/05/03 16:04:59 [DEBUG] connect: Sign end e5fe8006-0aa9-c4f3-8ee6-af3fd493fc9e took 140.7µs
consul_1 | 2019/05/03 16:04:59 [INFO] agent: Synced service "s1-sidecar-proxy"
consul_1 | 2019/05/03 16:04:59 [DEBUG] proxycfg state[s1-sidecar-proxy]: update {leaf 0xc0000d1a70 {false 0s 14} <nil>}
consul_1 | 2019/05/03 16:04:59 [DEBUG] proxycfg state[s1-sidecar-proxy]: update {upstream:service:s2 0xc00008bf80 {false 0s 16} <nil>}
consul_1 | 2019/05/03 16:04:59 [INFO] agent: Synced service "s2"
consul_1 | 2019/05/03 16:04:59 [INFO] agent: Synced service "s2-sidecar-proxy"
consul_1 | 2019/05/03 16:04:59 [DEBUG] agent: Check "service:s1-sidecar-proxy:1" in sync
consul_1 | 2019/05/03 16:04:59 [DEBUG] agent: Check "service:s1-sidecar-proxy:2" in sync
consul_1 | 2019/05/03 16:04:59 [DEBUG] agent: Check "service:s2-sidecar-proxy:1" in sync
consul_1 | 2019/05/03 16:04:59 [DEBUG] agent: Check "service:s2-sidecar-proxy:2" in sync
consul_1 | 2019/05/03 16:04:59 [DEBUG] agent: Node info in sync
consul_1 | 2019/05/03 16:04:59 [DEBUG] agent: Service "s2-sidecar-proxy" in sync
consul_1 | 2019/05/03 16:04:59 [DEBUG] agent: Service "s1" in sync
consul_1 | 2019/05/03 16:04:59 [DEBUG] agent: Service "s1-sidecar-proxy" in sync
consul_1 | 2019/05/03 16:04:59 [DEBUG] agent: Service "s2" in sync
consul_1 | 2019/05/03 16:04:59 [DEBUG] agent: Check "service:s1-sidecar-proxy:1" in sync
consul_1 | 2019/05/03 16:04:59 [DEBUG] proxycfg state[s1-sidecar-proxy]: update {upstream:service:s2 0xc000878dc0 {false 0s 18} <nil>}
consul_1 | 2019/05/03 16:04:59 [DEBUG] agent: Check "service:s1-sidecar-proxy:2" in sync
consul_1 | 2019/05/03 16:04:59 [DEBUG] agent: Check "service:s2-sidecar-proxy:1" in sync
consul_1 | 2019/05/03 16:04:59 [DEBUG] agent: Check "service:s2-sidecar-proxy:2" in sync
consul_1 | 2019/05/03 16:04:59 [DEBUG] agent: Node info in sync
consul_1 | 2019/05/03 16:04:59 [DEBUG] connect: Sign end c52157c0-202a-007a-984a-12f5d80d3241 took 55.8µs
consul_1 | 2019/05/03 16:04:59 [DEBUG] proxycfg state[s2-sidecar-proxy]: update {leaf 0xc000898630 {false 0s 20} <nil>}
consul_1 | 2019/05/03 16:04:59 [DEBUG] proxycfg state[s2-sidecar-proxy]: valid
consul_1 | 2019/05/03 16:04:59 [DEBUG] proxycfg state[s2-sidecar-proxy]: send
consul_1 | 2019/05/03 16:04:59 [DEBUG] http: Request GET /v1/agent/service/s1-sidecar-proxy (1.9286ms) from=127.0.0.1:34514
consul_1 | 2019/05/03 16:05:00 [DEBUG] http: Request GET /v1/agent/service/s2-sidecar-proxy (419.6µs) from=127.0.0.1:34516
consul_1 | 2019/05/03 16:05:00 [WARN] agent: Check "service:s1-sidecar-proxy:1" socket connection failed: dial tcp 127.0.0.1:21000: connect: connection refused
consul_1 | 2019/05/03 16:05:00 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
consul_1 | 2019/05/03 16:05:00 [DEBUG] agent: Service "s1" in sync
consul_1 | 2019/05/03 16:05:00 [INFO] agent: Synced service "s1-sidecar-proxy"
consul_1 | 2019/05/03 16:05:00 [DEBUG] agent: Service "s2" in sync
consul_1 | 2019/05/03 16:05:00 [INFO] agent: Synced service "s2-sidecar-proxy"
consul_1 | 2019/05/03 16:05:00 [DEBUG] agent: Check "service:s2-sidecar-proxy:1" in sync
consul_1 | 2019/05/03 16:05:00 [DEBUG] agent: Check "service:s2-sidecar-proxy:2" in sync
consul_1 | 2019/05/03 16:05:00 [DEBUG] agent: Check "service:s1-sidecar-proxy:1" in sync
consul_1 | 2019/05/03 16:05:00 [DEBUG] agent: Check "service:s1-sidecar-proxy:2" in sync
consul_1 | 2019/05/03 16:05:00 [DEBUG] agent: Node info in sync
consul_1 | 2019/05/03 16:05:01 [WARN] agent: Check "service:s2-sidecar-proxy:1" socket connection failed: dial tcp 127.0.0.1:21001: connect: connection refused
consul_1 | 2019/05/03 16:05:03 [DEBUG] xds: starting process for proxy
consul_1 | 2019/05/03 16:05:03 [DEBUG] xds: got request node:<id:"s2-sidecar-proxy" cluster:"s2" build_version:"ea248e2919db841b4f3cc5e2c44dcbd90565467d/1.9.1/Clean/RELEASE/BoringSSL" > type_url:"type.googleapis.com/envoy.api.v2.Cluster" true
consul_1 | 2019/05/03 16:05:03 [DEBUG] xds: state init
consul_1 | 2019/05/03 16:05:03 [DEBUG] proxycfg state[s2-sidecar-proxy]: req
consul_1 | 2019/05/03 16:05:03 [DEBUG] xds: got snapshot version 1, &{s2-sidecar-proxy 21001 {s2 s2 127.0.0.1 8181 map[envoy_prometheus_bind_addr:0.0.0.0:2345 protocol:http] []} 0xc000985080 0xc00004d0e0 map[]}
consul_1 | 2019/05/03 16:05:03 [DEBUG] xds: state pending
consul_1 | 2019/05/03 16:05:03 [DEBUG] xds: state running
consul_1 | 2019/05/03 16:05:03 [DEBUG] xds: got request node:<id:"s2-sidecar-proxy" cluster:"s2" build_version:"ea248e2919db841b4f3cc5e2c44dcbd90565467d/1.9.1/Clean/RELEASE/BoringSSL" > type_url:"type.googleapis.com/envoy.api.v2.Listener" true
consul_1 | 2019/05/03 16:05:03 [DEBUG] xds: state running
consul_1 | 2019/05/03 16:05:03 [DEBUG] xds: got request version_info:"00000001" node:<id:"s2-sidecar-proxy" cluster:"s2" build_version:"ea248e2919db841b4f3cc5e2c44dcbd90565467d/1.9.1/Clean/RELEASE/BoringSSL" > type_url:"type.googleapis.com/envoy.api.v2.Cluster" response_nonce:"00000001" true
consul_1 | 2019/05/03 16:05:03 [DEBUG] xds: state running
consul_1 | 2019/05/03 16:05:03 [DEBUG] xds: got request version_info:"00000001" node:<id:"s2-sidecar-proxy" cluster:"s2" build_version:"ea248e2919db841b4f3cc5e2c44dcbd90565467d/1.9.1/Clean/RELEASE/BoringSSL" > type_url:"type.googleapis.com/envoy.api.v2.Listener" response_nonce:"00000002" true
consul_1 | 2019/05/03 16:05:03 [DEBUG] xds: state running
consul_1 | 2019/05/03 16:05:03 [DEBUG] xds: starting process for proxy
consul_1 | 2019/05/03 16:05:03 [DEBUG] xds: got request node:<id:"s1-sidecar-proxy" cluster:"s1" build_version:"ea248e2919db841b4f3cc5e2c44dcbd90565467d/1.9.1/Clean/RELEASE/BoringSSL" > type_url:"type.googleapis.com/envoy.api.v2.Cluster" true
consul_1 | 2019/05/03 16:05:03 [DEBUG] xds: state init
consul_1 | 2019/05/03 16:05:03 [DEBUG] proxycfg state[s1-sidecar-proxy]: req
consul_1 | 2019/05/03 16:05:10 [WARN] agent: Check "service:s1-sidecar-proxy:1" socket connection failed: dial tcp 127.0.0.1:21000: connect: connection refused
consul_1 | 2019/05/03 16:05:11 [DEBUG] agent: Check "service:s2-sidecar-proxy:1" is passing
consul_1 | 2019/05/03 16:05:11 [DEBUG] agent: Service "s1" in sync
consul_1 | 2019/05/03 16:05:11 [DEBUG] agent: Service "s1-sidecar-proxy" in sync
consul_1 | 2019/05/03 16:05:11 [DEBUG] agent: Service "s2" in sync
consul_1 | 2019/05/03 16:05:11 [DEBUG] agent: Service "s2-sidecar-proxy" in sync
consul_1 | 2019/05/03 16:05:11 [DEBUG] agent: Check "service:s1-sidecar-proxy:2" in sync
consul_1 | 2019/05/03 16:05:11 [INFO] agent: Synced check "service:s2-sidecar-proxy:1"
consul_1 | 2019/05/03 16:05:11 [DEBUG] agent: Check "service:s2-sidecar-proxy:2" in sync
consul_1 | 2019/05/03 16:05:11 [DEBUG] agent: Check "service:s1-sidecar-proxy:1" in sync
consul_1 | 2019/05/03 16:05:11 [DEBUG] agent: Node info in sync
consul_1 | 2019/05/03 16:05:11 [DEBUG] proxycfg state[s1-sidecar-proxy]: update {upstream:service:s2 0xc000879380 {false 0s 23} <nil>}
consul_1 | 2019/05/03 16:05:20 [WARN] agent: Check "service:s1-sidecar-proxy:1" socket connection failed: dial tcp 127.0.0.1:21000: connect: connection refused
consul_1 | 2019/05/03 16:05:21 [DEBUG] agent: Check "service:s2-sidecar-proxy:1" is passing
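
Note: the "xds: got snapshot" line at 16:05:03 above exposes the shape of the s2 sidecar registration: proxy s2-sidecar-proxy listening on 21001, forwarding to the local s2 service at 127.0.0.1:8181, with Envoy config envoy_prometheus_bind_addr=0.0.0.0:2345 and protocol=http, and no upstreams. A minimal registration that would produce such a snapshot is sketched below. This is an assumed reconstruction: the actual registration files are not part of this gist, the sidecar port may have been auto-assigned from the default 21000-21255 range rather than fixed, and the use of the agent HTTP API (instead of a service-definition config file) is also an assumption.

# Hypothetical reconstruction, not taken from this gist: register s2 with a
# Connect sidecar whose shape matches the snapshot logged above.
curl -sS -X PUT http://127.0.0.1:8500/v1/agent/service/register -d '{
  "Name": "s2",
  "Port": 8181,
  "Connect": {
    "SidecarService": {
      "Port": 21001,
      "Proxy": {
        "LocalServiceAddress": "127.0.0.1",
        "LocalServicePort": 8181,
        "Config": {
          "envoy_prometheus_bind_addr": "0.0.0.0:2345",
          "protocol": "http"
        }
      }
    }
  }
}'

The s1 registration would be analogous, with an added Upstreams entry pointing at s2, which is why the agent watches upstream:service:s2 for s1-sidecar-proxy throughout the log.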
Attaching to envoy_consul_1
consul_1 | ==> Starting Consul agent...
consul_1 | ==> Consul agent running!
consul_1 | Version: 'v1.4.4-285-g19361f073-dev (19361f073+CHANGES)'
consul_1 | Node ID: 'a76490be-6eda-cc7c-3853-42a8c6c4771b'
consul_1 | Node name: 'e0355df7658f'
consul_1 | Datacenter: 'dc1' (Segment: '<all>')
consul_1 | Server: true (Bootstrap: false)
consul_1 | Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, gRPC: 8502, DNS: 8600)
consul_1 | Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
consul_1 | Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
consul_1 |
consul_1 | ==> Log data will now stream in as it occurs:
consul_1 |
consul_1 | 2019/05/03 16:12:56 [DEBUG] agent: Using random ID "a76490be-6eda-cc7c-3853-42a8c6c4771b" as node ID
consul_1 | 2019/05/03 16:12:56 [DEBUG] tlsutil: Update with version 1
consul_1 | 2019/05/03 16:12:56 [DEBUG] tlsutil: OutgoingRPCWrapper with version 1
consul_1 | 2019/05/03 16:12:56 [DEBUG] tlsutil: IncomingRPCConfig with version 1
consul_1 | 2019/05/03 16:12:56 [DEBUG] tlsutil: OutgoingRPCWrapper with version 1
consul_1 | 2019/05/03 16:12:56 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:a76490be-6eda-cc7c-3853-42a8c6c4771b Address:127.0.0.1:8300}]
consul_1 | 2019/05/03 16:12:56 [INFO] raft: Node at 127.0.0.1:8300 [Follower] entering Follower state (Leader: "")
consul_1 | 2019/05/03 16:12:56 [INFO] serf: EventMemberJoin: e0355df7658f.dc1 127.0.0.1
consul_1 | 2019/05/03 16:12:56 [INFO] serf: EventMemberJoin: e0355df7658f 127.0.0.1
consul_1 | 2019/05/03 16:12:56 [INFO] consul: Handled member-join event for server "e0355df7658f.dc1" in area "wan"
consul_1 | 2019/05/03 16:12:56 [INFO] consul: Adding LAN server e0355df7658f (Addr: tcp/127.0.0.1:8300) (DC: dc1)
consul_1 | 2019/05/03 16:12:56 [WARN] raft: Heartbeat timeout from "" reached, starting election
consul_1 | 2019/05/03 16:12:56 [INFO] raft: Node at 127.0.0.1:8300 [Candidate] entering Candidate state in term 2
consul_1 | 2019/05/03 16:12:56 [DEBUG] raft: Votes needed: 1
consul_1 | 2019/05/03 16:12:56 [DEBUG] raft: Vote granted from a76490be-6eda-cc7c-3853-42a8c6c4771b in term 2. Tally: 1
consul_1 | 2019/05/03 16:12:56 [INFO] raft: Election won. Tally: 1
consul_1 | 2019/05/03 16:12:56 [INFO] raft: Node at 127.0.0.1:8300 [Leader] entering Leader state
consul_1 | 2019/05/03 16:12:56 [INFO] consul: cluster leadership acquired
consul_1 | 2019/05/03 16:12:56 [INFO] consul: New leader elected: e0355df7658f
consul_1 | 2019/05/03 16:12:56 [INFO] connect: initialized primary datacenter CA with provider "consul"
consul_1 | 2019/05/03 16:12:56 [DEBUG] consul: Skipping self join check for "e0355df7658f" since the cluster is too small
consul_1 | 2019/05/03 16:12:56 [INFO] consul: member 'e0355df7658f' joined, marking health alive
consul_1 | 2019/05/03 16:12:56 [DEBUG] agent.manager: added local registration for service "s1-sidecar-proxy"
consul_1 | 2019/05/03 16:12:56 [DEBUG] agent.manager: added local registration for service "s2-sidecar-proxy"
consul_1 | 2019/05/03 16:12:56 [DEBUG] agent/proxy: managed Connect proxy manager started
consul_1 | 2019/05/03 16:12:56 [DEBUG] leaf cache: req &{ dc1 s2 0 0s}
consul_1 | 2019/05/03 16:12:56 [DEBUG] leaf cache: brand new leaf &{ dc1 s2 0 0s}
consul_1 | 2019/05/03 16:12:56 [DEBUG] proxycfg state[s2-sidecar-proxy]: update {intentions 0xc00043c680 {false 0s 1} <nil>}
consul_1 | 2019/05/03 16:12:56 [DEBUG] leaf cache: req &{ dc1 s1 0 0s}
consul_1 | 2019/05/03 16:12:56 [DEBUG] leaf cache: brand new leaf &{ dc1 s1 0 0s}
consul_1 | 2019/05/03 16:12:56 [INFO] agent: Started DNS server 0.0.0.0:8600 (tcp)
consul_1 | 2019/05/03 16:12:56 [INFO] agent: Started DNS server 0.0.0.0:8600 (udp)
consul_1 | 2019/05/03 16:12:56 [DEBUG] connect: Sign start a3b1a2f2-9e0e-eb6e-41f6-145faa9c0f3f
consul_1 | 2019/05/03 16:12:56 [INFO] agent: Started HTTP server on [::]:8500 (tcp)
consul_1 | 2019/05/03 16:12:56 [INFO] agent: started state syncer
consul_1 | 2019/05/03 16:12:56 [INFO] agent: Started gRPC server on [::]:8502 (tcp)
consul_1 | 2019/05/03 16:12:56 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
consul_1 | 2019/05/03 16:12:56 [DEBUG] proxycfg state[s1-sidecar-proxy]: update {upstream:service:s2 0xc0004e7c80 {false 0s 12} <nil>}
consul_1 | 2019/05/03 16:12:56 [DEBUG] proxycfg state[s1-sidecar-proxy]: update {intentions 0xc00043cf80 {false 0s 1} <nil>}
consul_1 | 2019/05/03 16:12:56 [DEBUG] connect: Sign start 651c10d4-3538-84fd-192f-74298208e830
consul_1 | 2019/05/03 16:12:56 [INFO] agent: Synced service "s2-sidecar-proxy"
consul_1 | 2019/05/03 16:12:56 [DEBUG] proxycfg state[s1-sidecar-proxy]: update {upstream:service:s2 0xc00039a080 {false 0s 14} <nil>}
consul_1 | 2019/05/03 16:12:56 [DEBUG] connect: Sign end a3b1a2f2-9e0e-eb6e-41f6-145faa9c0f3f took 36.6µs
consul_1 | 2019/05/03 16:12:56 [INFO] agent: Synced service "s1"
consul_1 | 2019/05/03 16:12:56 [DEBUG] proxycfg state[s2-sidecar-proxy]: update {leaf 0xc0004cda70 {false 0s 15} <nil>}
consul_1 | 2019/05/03 16:12:56 [DEBUG] leaf cache: req &{ dc1 s2 0 0s}
consul_1 | 2019/05/03 16:12:56 [DEBUG] leaf cache: roots changed, active leaf has key &{ dc1 s2 0 0s}
consul_1 | 2019/05/03 16:12:56 [DEBUG] leaf cache: roots changed, active leaf has key &{ dc1 s2 0 0s}
consul_1 | 2019/05/03 16:12:56 [INFO] agent: Synced service "s1-sidecar-proxy"
consul_1 | 2019/05/03 16:12:56 [INFO] agent: Synced service "s2"
consul_1 | 2019/05/03 16:12:56 [DEBUG] agent: Check "service:s1-sidecar-proxy:1" in sync
consul_1 | 2019/05/03 16:12:56 [DEBUG] agent: Check "service:s1-sidecar-proxy:2" in sync
consul_1 | 2019/05/03 16:12:56 [DEBUG] agent: Check "service:s2-sidecar-proxy:1" in sync
consul_1 | 2019/05/03 16:12:56 [DEBUG] agent: Check "service:s2-sidecar-proxy:2" in sync
consul_1 | 2019/05/03 16:12:56 [DEBUG] agent: Node info in sync
consul_1 | 2019/05/03 16:12:56 [DEBUG] connect: Sign end 651c10d4-3538-84fd-192f-74298208e830 took 37.6µs
consul_1 | 2019/05/03 16:12:56 [DEBUG] leaf cache: req &{ dc1 s1 0 0s}
consul_1 | 2019/05/03 16:12:56 [DEBUG] proxycfg state[s1-sidecar-proxy]: update {leaf 0xc000426480 {false 0s 20} <nil>}
consul_1 | 2019/05/03 16:12:56 [DEBUG] leaf cache: roots changed, active leaf has key &{ dc1 s1 0 0s}
consul_1 | 2019/05/03 16:12:57 [DEBUG] http: Request GET /v1/agent/service/s1-sidecar-proxy (2.8945ms) from=127.0.0.1:34624
consul_1 | 2019/05/03 16:12:57 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
consul_1 | 2019/05/03 16:12:57 [DEBUG] agent: Service "s1" in sync
consul_1 | 2019/05/03 16:12:57 [INFO] agent: Synced service "s1-sidecar-proxy"
consul_1 | 2019/05/03 16:12:57 [DEBUG] agent: Service "s2" in sync
consul_1 | 2019/05/03 16:12:57 [INFO] agent: Synced service "s2-sidecar-proxy"
consul_1 | 2019/05/03 16:12:57 [DEBUG] agent: Check "service:s1-sidecar-proxy:1" in sync
consul_1 | 2019/05/03 16:12:57 [DEBUG] agent: Check "service:s1-sidecar-proxy:2" in sync
consul_1 | 2019/05/03 16:12:57 [DEBUG] agent: Check "service:s2-sidecar-proxy:1" in sync
consul_1 | 2019/05/03 16:12:57 [DEBUG] agent: Check "service:s2-sidecar-proxy:2" in sync
consul_1 | 2019/05/03 16:12:57 [DEBUG] agent: Node info in sync
consul_1 | 2019/05/03 16:12:57 [DEBUG] agent: Service "s1-sidecar-proxy" in sync
consul_1 | 2019/05/03 16:12:57 [DEBUG] agent: Service "s2" in sync
consul_1 | 2019/05/03 16:12:57 [DEBUG] agent: Service "s2-sidecar-proxy" in sync
consul_1 | 2019/05/03 16:12:57 [DEBUG] agent: Service "s1" in sync
consul_1 | 2019/05/03 16:12:57 [DEBUG] agent: Check "service:s1-sidecar-proxy:2" in sync
consul_1 | 2019/05/03 16:12:57 [DEBUG] agent: Check "service:s2-sidecar-proxy:1" in sync
consul_1 | 2019/05/03 16:12:57 [DEBUG] agent: Check "service:s2-sidecar-proxy:2" in sync
consul_1 | 2019/05/03 16:12:57 [DEBUG] agent: Check "service:s1-sidecar-proxy:1" in sync
consul_1 | 2019/05/03 16:12:57 [DEBUG] agent: Node info in sync
consul_1 | 2019/05/03 16:12:58 [DEBUG] http: Request GET /v1/agent/service/s2-sidecar-proxy (529.4µs) from=127.0.0.1:34628
consul_1 | 2019/05/03 16:12:59 [WARN] agent: Check "service:s2-sidecar-proxy:1" socket connection failed: dial tcp 127.0.0.1:21001: connect: connection refused
consul_1 | 2019/05/03 16:13:01 [DEBUG] xds: starting process for proxy
consul_1 | 2019/05/03 16:13:01 [DEBUG] xds: got request node:<id:"s2-sidecar-proxy" cluster:"s2" build_version:"ea248e2919db841b4f3cc5e2c44dcbd90565467d/1.9.1/Clean/RELEASE/BoringSSL" > type_url:"type.googleapis.com/envoy.api.v2.Cluster" true
consul_1 | 2019/05/03 16:13:01 [DEBUG] xds: state init
consul_1 | 2019/05/03 16:13:01 [DEBUG] proxycfg state[s2-sidecar-proxy]: req
consul_1 | 2019/05/03 16:13:01 [DEBUG] xds: starting process for proxy
consul_1 | 2019/05/03 16:13:01 [DEBUG] xds: got request node:<id:"s1-sidecar-proxy" cluster:"s1" build_version:"ea248e2919db841b4f3cc5e2c44dcbd90565467d/1.9.1/Clean/RELEASE/BoringSSL" > type_url:"type.googleapis.com/envoy.api.v2.Cluster" true
consul_1 | 2019/05/03 16:13:01 [DEBUG] xds: state init
consul_1 | 2019/05/03 16:13:01 [DEBUG] proxycfg state[s1-sidecar-proxy]: req
consul_1 | 2019/05/03 16:13:04 [WARN] agent: Check "service:s1-sidecar-proxy:1" socket connection failed: dial tcp 127.0.0.1:21000: connect: connection refused
consul_1 | 2019/05/03 16:13:09 [WARN] agent: Check "service:s2-sidecar-proxy:1" socket connection failed: dial tcp 127.0.0.1:21001: connect: connection refused
consul_1 | 2019/05/03 16:13:14 [WARN] agent: Check "service:s1-sidecar-proxy:1" socket connection failed: dial tcp 127.0.0.1:21000: connect: connection refused
consul_1 | 2019/05/03 16:13:19 [WARN] agent: Check "service:s2-sidecar-proxy:1" socket connection failed: dial tcp 127.0.0.1:21001: connect: connection refused
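
Note: in this second run neither Envoy ever binds its public listener; the checks against 127.0.0.1:21000 (s1-sidecar-proxy) and 127.0.0.1:21001 (s2-sidecar-proxy) keep returning "connection refused" through 16:13:19, and unlike the first run there is no "is passing" transition. The GET /v1/agent/service/<name>-sidecar-proxy requests and the xDS streams on gRPC port 8502 are consistent with sidecars bootstrapped roughly as below; the exact invocation, including the -admin-bind addresses, is an assumption since it is not shown in this gist.

# Assumed launch commands (not part of this gist): bootstrap one Envoy per
# sidecar registration against the local agent. Each command fetches
# /v1/agent/service/<name>-sidecar-proxy (seen in the log) and then streams
# xDS from the agent's gRPC port 8502.
consul connect envoy -sidecar-for s1 -admin-bind 127.0.0.1:19000 &
consul connect envoy -sidecar-for s2 -admin-bind 127.0.0.1:19001 &

With the agent already serving both xDS streams, the persistent check failures point at the Envoy side (its own logs, or the admin endpoint's /listeners output) rather than at the agent.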