
Vector With ClickHouse

This post documents how I collect system observability data with Vector and store it in ClickHouse.

ClickHouse

ClickHouse is a fast, column-oriented OLAP database management system written in C++. It supports SQL queries and can generate analytical reports in real time.

ClickHouse Server

Create the ClickHouse server with the following docker-compose.yml:

version: "3"

volumes:
  clickhouse_data:
  clickhouse_conf:

networks:
  default:
    external: true
    name: monitor_monitor-net

services:
  server:
    image: yandex/clickhouse-server
    container_name: clickhouse_service
    volumes:
      - clickhouse_conf:/etc/clickhouse-server
      - clickhouse_data:/var/lib/clickhouse
    ports:
      - 8123:8123
    ulimits:
      nofile:
        soft: "262144"
        hard: "262144"

Tables

Create the tables the data will be stored in. clickhouse-client connects to the server over the shared Docker network on the native port 9000; the memory table is shown here, the remaining tables are in the appendix.

docker run -it --rm --network monitor_monitor-net \
    yandex/clickhouse-client -h clickhouse_service --port 9000

CREATE TABLE host_metrics_memory (
    namespace String,
    name String,
    host String,
    kind String,
    collector String,
    metric_name String,
    metric_type String,
    metric_value Float64,
    timestamp DateTime64
) ENGINE = MergeTree()
ORDER BY timestamp;
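
Once Vector (configured below) is shipping events, a quick sanity check can be run from the same clickhouse-client session. A minimal sketch; memory_used_bytes is the gauge name I expect the host_metrics memory collector to emit, so verify it against your own rows:

SELECT
    toStartOfMinute(timestamp) AS minute,
    host,
    avg(metric_value) AS avg_used_bytes
FROM host_metrics_memory
WHERE metric_name = 'memory_used_bytes'
GROUP BY minute, host
ORDER BY minute DESC
LIMIT 10;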

Vector

Vector, written in Rust, is a lightweight tool for building observability data pipelines. Start it with the host_metrics.vector.toml from the appendix mounted as its config file:

docker run -d --network monitor_monitor-net \
    -v ~/Projects/compose/vector/host_metrics.vector.toml:/etc/vector/vector.toml:ro \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --name host_metrics timberio/vector:latest-debian
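
The pipeline definition can be checked before a restart with Vector's validate subcommand. A sketch, assuming the same config path; the shared network lets the sink health checks reach ClickHouse:

docker run --rm --network monitor_monitor-net \
    -v ~/Projects/compose/vector/host_metrics.vector.toml:/etc/vector/vector.toml:ro \
    timberio/vector:latest-debian validate /etc/vector/vector.toml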

Grafana

Use Grafana to chart the collected metrics, with ClickHouse as the data source (the vertamedia-clickhouse-datasource plugin is installed via GF_INSTALL_PLUGINS):

version: "3"

volumes:
  grafana_data:
  grafana_conf:
  grafana_log:

networks:
  default:
    external: true
    name: monitor_monitor-net

services:
  grafana:
    image: grafana/grafana
    container_name: grafana_service
    volumes:
      - grafana_data:/var/lib/grafana
      - grafana_conf:/etc/grafana
      - grafana_log:/var/log/grafana
    environment:
      - GF_INSTALL_PLUGINS=vertamedia-clickhouse-datasource
    ports:
      - 3000:3000
    labels:
      org.label-schema.group: "monitoring"
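
With the plugin installed and a ClickHouse data source pointed at http://clickhouse_service:8123, a time-series panel query looks like the sketch below. $timeSeries and $timeFilter are macros expanded by the vertamedia plugin (assuming timestamp is picked as the datetime column in the query editor), and the metric name is the same assumption as before:

SELECT
    $timeSeries AS t,
    avg(metric_value) AS memory_used_bytes
FROM host_metrics_memory
WHERE metric_name = 'memory_used_bytes' AND $timeFilter
GROUP BY t
ORDER BY t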

Appendix

host_metrics.vector.toml

# Collect host metrics every 60 seconds
[sources.host_metrics]
type = "host_metrics"
collectors = ["cpu", "disk", "filesystem", "load", "host", "memory", "network"]
scrape_interval_secs = 60

# Convert metric events into log events so they can be remapped
# and written to ClickHouse as rows
[transforms.metrics_to_logs]
type = "metric_to_log"
inputs = ["host_metrics"]

# Flatten gauge/counter values into metric_value/metric_type,
# format the timestamp, and lift the relevant tags to top-level fields
[transforms.remap]
type = "remap"
inputs = ["metrics_to_logs"]
source = '''
ts, _ = format_timestamp(.timestamp, "%F %T")
.metric_value = .gauge.value
.metric_type = "gauge"
del(.gauge)
if .metric_value == null {
    .metric_value = .counter.value
    .metric_type = "counter"
    del(.counter)
}
.metric_name = del(.name)
.timestamp = ts
.collector = .tags.collector
if .tags.collector == "cpu" {
    .cpu = .tags.cpu
    .mode = .tags.mode
} else if .tags.collector == "filesystem" {
    .filesystem = .tags.filesystem
    .device = .tags.device
} else if .tags.collector == "network" || .tags.collector == "disk" {
    .device = .tags.device
}
del(.tags)
'''

# Fan events out to one route per collector, so each collector
# gets its own sink and table
[transforms.metrics_router]
type = "route"
inputs = ["remap"]
route.host = '.collector == "host"'
route.memory = '.collector == "memory"'
route.cpu = '.collector == "cpu"'
route.network = '.collector == "network"'
route.filesystem = '.collector == "filesystem"'
route.disk = '.collector == "disk"'

[sinks.memory_clickhouse]
type = "clickhouse"
inputs = ["metrics_router.memory"]
compression = "gzip"
endpoint = "http://clickhouse_service:8123"
table = "host_metrics_memory"
batch.timeout_secs = 5

[sinks.cpu_clickhouse]
type = "clickhouse"
inputs = ["metrics_router.cpu"]
compression = "gzip"
endpoint = "http://clickhouse_service:8123"
table = "host_metrics_cpu"
batch.timeout_secs = 5

[sinks.host_clickhouse]
type = "clickhouse"
inputs = ["metrics_router.host"]
compression = "gzip"
endpoint = "http://clickhouse_service:8123"
table = "host_metrics_host"
batch.timeout_secs = 5

[sinks.network_clickhouse]
type = "clickhouse"
inputs = ["metrics_router.network"]
compression = "gzip"
endpoint = "http://clickhouse_service:8123"
table = "host_metrics_network"
batch.timeout_secs = 5

[sinks.filesystem_clickhouse]
type = "clickhouse"
inputs = ["metrics_router.filesystem"]
compression = "gzip"
endpoint = "http://clickhouse_service:8123"
table = "host_metrics_filesystem"
batch.timeout_secs = 5

[sinks.disk_clickhouse]
type = "clickhouse"
inputs = ["metrics_router.disk"]
compression = "gzip"
endpoint = "http://clickhouse_service:8123"
table = "host_metrics_disk"
batch.timeout_secs = 5
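
While tuning the remap source, a temporary console sink attached to the same input makes it easy to see the exact shape of the rows before they reach ClickHouse. A minimal sketch; remove it once the fields line up with the table columns:

[sinks.debug_console]
type = "console"
inputs = ["remap"]
encoding.codec = "json"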

Tables for the host, cpu, filesystem, network, and disk collectors

CREATE TABLE host_metrics_host (
    namespace String,
    name String,
    host String,
    kind String,
    collector String,
    metric_name String,
    metric_type String,
    metric_value Float64,
    timestamp DateTime64
) ENGINE = MergeTree()
ORDER BY timestamp;

CREATE TABLE host_metrics_cpu (
    namespace String,
    name String,
    host String,
    kind String,
    collector String,
    cpu String,
    mode String,
    metric_name String,
    metric_type String,
    metric_value Float64,
    timestamp DateTime64
) ENGINE = MergeTree()
ORDER BY timestamp;

CREATE TABLE host_metrics_filesystem (
    namespace String,
    name String,
    host String,
    kind String,
    collector String,
    filesystem String,
    device String,
    metric_name String,
    metric_type String,
    metric_value Float64,
    timestamp DateTime64
) ENGINE = MergeTree()
ORDER BY timestamp;

CREATE TABLE host_metrics_network (
    namespace String,
    name String,
    host String,
    kind String,
    collector String,
    device String,
    metric_name String,
    metric_type String,
    metric_value Float64,
    timestamp DateTime64
) ENGINE = MergeTree()
ORDER BY timestamp;

CREATE TABLE host_metrics_disk (
    namespace String,
    name String,
    host String,
    kind String,
    collector String,
    device String,
    metric_name String,
    metric_type String,
    metric_value Float64,
    timestamp DateTime64
) ENGINE = MergeTree()
ORDER BY timestamp;
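
As an example of working with the counter-typed rows, the query below estimates per-mode CPU seconds consumed over the last hour. A sketch; cpu_seconds_total is the counter name I expect the cpu collector to emit, and counter resets are ignored:

SELECT
    cpu,
    mode,
    max(metric_value) - min(metric_value) AS seconds_spent
FROM host_metrics_cpu
WHERE metric_name = 'cpu_seconds_total'
  AND timestamp > now() - INTERVAL 1 HOUR
GROUP BY cpu, mode
ORDER BY seconds_spent DESC;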
