prometheus + influxdb + grafana + mysql
Views: 6489
Published: 2019-06-24

This article is about 14,562 characters; estimated reading time: 48 minutes.


Preface

This article describes how to install InfluxDB as persistent storage for Prometheus, and MySQL as persistent storage for Grafana.

I. Install the Go environment

If you already have a Go environment, you can build the remote_storage_adapter plugin yourself; the only reason to install Go here is to obtain that plugin. If you don't have a Go environment, you can download a prebuilt binary from the link I shared.

Link: https://pan.baidu.com/s/1DJpoYDOIfCeAFC6UGY22Xg   Extraction code: uj42

1 Download

wget https://storage.googleapis.com/golang/go1.8.3.linux-amd64.tar.gz

2 Install

tar -C /usr/local -xzf go1.8.3.linux-amd64.tar.gz

Add the environment variables:

vim /etc/profile
export GOROOT=/usr/local/go
export GOBIN=$GOROOT/bin
export GOPKG=$GOROOT/pkg/tool/linux_amd64
export GOARCH=amd64
export GOOS=linux
export GOPATH=/go
export PATH=$PATH:$GOBIN:$GOPKG:$GOPATH/bin
source /etc/profile

II. Install InfluxDB

1 下载并安装

wget https://dl.influxdata.com/influxdb/releases/influxdb-1.5.2.x86_64.rpm
sudo yum localinstall influxdb-1.5.2.x86_64.rpm

2 Start InfluxDB

systemctl start influxdb
systemctl enable influxdb

To run influxd in the foreground rather than as a service and point it at a specific config file, use the --config option; see influxd --help for details.

3 Review the installed files and configuration

After installation, the following binaries are in /usr/bin:

influxd          the InfluxDB server
influx           the InfluxDB command-line client
influx_inspect   inspection tool
influx_stress    stress-testing tool
influx_tsm       database conversion tool (converts databases from the b1 or bz1 format to tsm1)

Under /var/lib/influxdb/ you will find the following directories:

data    the stored data; files end in .tsm
meta    database metadata
wal     write-ahead log files

Config file path: /etc/influxdb/influxdb.conf

4 Create a database for Prometheus over the HTTP API

To enter the InfluxDB shell:

influx

To create a prometheus database through the HTTP API:

curl -XPOST http://localhost:8086/query --data-urlencode "q=CREATE DATABASE prometheus"
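To confirm the database was created, you can list databases over the same API (a quick check I've added; it assumes InfluxDB is running locally on its default port 8086):

```
# "prometheus" should appear in the JSON result
curl -G http://localhost:8086/query --data-urlencode "q=SHOW DATABASES"
```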

III. Install Prometheus

1 Download

https://prometheus.io/download/

2 Extract and install

tar xf prometheus-2.8.0.linux-amd64.tar.gz
mv prometheus-2.8.0.linux-amd64 /usr/local/prometheus
cd /usr/local/prometheus
./prometheus --version

IV. Prepare remote_storage_adapter

Obtain a remote_storage_adapter executable (built from the Prometheus repository on GitHub), then start it. For help (changing the bound port, the InfluxDB settings, and so on), run ./remote_storage_adapter -h. Now start a remote_storage_adapter to bridge InfluxDB and Prometheus:

./remote_storage_adapter -influxdb-url= -influxdb.database=prometheus -influxdb.retention-policy=autogen

The adapter listens on port 9201 by default.

1 Build the plugin

/usr/local/go/bin/go get github.com/prometheus/documentation/examples/remote_storage/remote_storage_adapter/

2 Run the plugin

./remote_storage_adapter --influxdb-url=http://127.0.0.1:8086/ --influxdb.database="prometheus" --influxdb.retention-policy=autogen

3 Modify the Prometheus config file

vim prometheus.yml

Add:

# Remote write configuration (for Graphite, OpenTSDB, or InfluxDB).
remote_write:
  - url: "http://localhost:9201/write"

# Remote read configuration (for InfluxDB only at the moment).
remote_read:
  - url: "http://localhost:9201/read"
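Put together, a minimal prometheus.yml with both the stock self-scrape job and the remote endpoints might look like this (the global and scrape_configs sections are the shipped defaults, not from the original post):

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

# Remote write/read go through remote_storage_adapter on port 9201.
remote_write:
  - url: "http://localhost:9201/write"

remote_read:
  - url: "http://localhost:9201/read"
```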

4 Start Prometheus

./prometheus

5 Check that data is being reported

At this point only the Prometheus server itself is being monitored.


6 Inspect the InfluxDB database

> show databases;
name: databases
name
----
_internal
mydb
prometheus
> use prometheus
Using database prometheus
> SHOW MEASUREMENTS
name: measurements
name
----
go_gc_duration_seconds
go_gc_duration_seconds_count
go_gc_duration_seconds_sum
go_goroutines
go_info
go_memstats_alloc_bytes
...
process_cpu_seconds_total
prometheus_build_info
prometheus_http_request_duration_seconds_bucket
prometheus_remote_storage_succeeded_samples_total
prometheus_tsdb_head_series
...
scrape_duration_seconds
scrape_samples_post_metric_relabeling
scrape_samples_scraped
up

(One measurement is created per Prometheus metric name; the full list of several hundred names is truncated here.)

7 Add a node to Prometheus

Download:

wget https://github.com/prometheus/node_exporter/releases/download/v0.17.0/node_exporter-0.17.0.linux-amd64.tar.gz

Install the agent:

tar xf node_exporter-0.17.0.linux-amd64.tar.gz
cd node_exporter-0.17.0.linux-amd64
./node_exporter

Register the node with Prometheus:

vim prometheus.yml

Under scrape_configs add:

  - job_name: 'linux-node'
    static_configs:
      - targets: ['10.10.25.149:9100']
        labels:
          instance: node1

Then restart Prometheus.

8 Run Prometheus as a systemd service

cat>/lib/systemd/system/prometheus.service<
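The heredoc body was lost in this copy of the post; a unit file along these lines should work (paths assume the /usr/local/prometheus layout used above; the Restart policy is my addition):

```ini
# /lib/systemd/system/prometheus.service
[Unit]
Description=Prometheus server
After=network.target

[Service]
Type=simple
WorkingDirectory=/usr/local/prometheus
ExecStart=/usr/local/prometheus/prometheus --config.file=/usr/local/prometheus/prometheus.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After saving it, run systemctl daemon-reload, then systemctl start prometheus and systemctl enable prometheus.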

9 Run the agent as a systemd service

cat>/lib/systemd/system/node_exporter.service<
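Again the heredoc body is missing; a minimal sketch, assuming the extracted directory was moved to /usr/local/node_exporter (a path of my choosing, not from the original post):

```ini
# /lib/systemd/system/node_exporter.service
[Unit]
Description=Prometheus node_exporter
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/node_exporter/node_exporter
Restart=on-failure

[Install]
WantedBy=multi-user.target
```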

10 Register remote_storage_adapter as a systemd service

cat>/lib/systemd/system/remote_storage_adapter.service<
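The unit body is also missing here; a sketch using the adapter flags shown earlier (the binary path assumes go get installed it under GOPATH=/go, i.e. /go/bin):

```ini
# /lib/systemd/system/remote_storage_adapter.service
[Unit]
Description=remote_storage_adapter (Prometheus to InfluxDB bridge)
After=network.target influxdb.service

[Service]
Type=simple
ExecStart=/go/bin/remote_storage_adapter --influxdb-url=http://127.0.0.1:8086/ --influxdb.database=prometheus --influxdb.retention-policy=autogen
Restart=on-failure

[Install]
WantedBy=multi-user.target
```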

V. Install Grafana

1 Download

wget https://dl.grafana.com/oss/release/grafana-6.0.2-1.x86_64.rpm

2 Install

yum install grafana-6.0.2-1.x86_64.rpm
systemctl start grafana-server
systemctl enable grafana-server
grafana-server -v

grafana-server listens on port 3000.

3 Access grafana-server

Visit http://ServerIP:3000. The default username and password are admin / admin.

VI. Install MySQL

1 Add the repository

rpm -Uvh http://dev.mysql.com/get/mysql-community-release-el7-5.noarch.rpm
yum repolist enabled | grep "mysql.*-community.*"

2 Install MySQL 5.6

yum -y install mysql-community-server

3 Start MySQL and do basic security setup

systemctl enable mysqld
systemctl start mysqld
systemctl status mysqld
mysql_secure_installation

During mysql_secure_installation, set a root password and answer Y to the remaining prompts.

4 Create the grafana database

create database grafana;
create user grafana@'%' IDENTIFIED by 'grafana';
grant all on grafana.* to grafana@'%';
flush privileges;
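To confirm the new account can reach the database (my addition; assumes MySQL is running locally):

```
# should list the grafana database among the results
mysql -ugrafana -pgrafana -e 'show databases;'
```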

VII. Change Grafana's default database and configure Grafana

1 Edit the config file to connect to MySQL

vim /etc/grafana/grafana.ini

[database]
type = mysql
host = 127.0.0.1:3306
name = grafana
user = grafana
password = grafana
url = mysql://grafana:grafana@localhost:3306/grafana

[session]
provider = mysql
provider_config = `grafana:grafana@tcp(127.0.0.1:3306)/grafana`

2 Restart Grafana

systemctl restart grafana-server

3 Access Grafana

http://serverip:3000

4 Inspect the database

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| grafana            |
| mysql              |
| performance_schema |
+--------------------+
mysql> use grafana
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+--------------------------+
| Tables_in_grafana        |
+--------------------------+
| alert                    |
| alert_notification       |
| alert_notification_state |
| annotation               |
| annotation_tag           |
| api_key                  |
| dashboard                |
| dashboard_acl            |
| dashboard_provisioning   |
| dashboard_snapshot       |
| dashboard_tag            |
| dashboard_version        |
| data_source              |
| login_attempt            |
| migration_log            |
| org                      |
| org_user                 |
| playlist                 |
| playlist_item            |
| plugin_setting           |
| preferences              |
| quota                    |
| server_lock              |
| session                  |
| star                     |
| tag                      |
| team                     |
| team_member              |
| temp_user                |
| test_data                |
| user                     |
| user_auth                |
| user_auth_token          |
+--------------------------+
33 rows in set (0.00 sec)

5 Configure Grafana: add a data source

Since InfluxDB serves as Prometheus's persistent storage, add an InfluxDB data source. Because no password was set on InfluxDB, the password field is left blank here.


 

Reposted from: https://my.oschina.net/54188zz/blog/3034788
