[Guide] Easy migration from Zabbix to Glaber

Intro

After the announcement of the Glaber open beta, several people asked me to provide the simplest possible migration guide. This article is for those who have no previous experience with ClickHouse and no time to read the wiki. Feel free to ask if I've missed something.

Your current setup

Let's assume you already have Zabbix 3.4 or 4.0 installed on your server. In this example it runs Ubuntu 18.04, but the commands are quite similar on other distros.

Please shut down Zabbix before continuing
Make sure you have a recent backup of the database
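A minimal sketch of those two safety steps, assuming a MySQL backend and systemd; the DB name and backup path are placeholders, and the commands are printed for review rather than executed:

```shell
# Stop the server so no new history arrives, then take a consistent dump.
# DB name and backup path below are assumptions -- adjust to your setup.
DB_NAME=zabbix
BACKUP_FILE="/var/backups/${DB_NAME}-$(date +%F).sql.gz"

# Printed for review; remove the echo wrappers to actually run them.
echo "sudo systemctl stop zabbix-server"
echo "mysqldump --single-transaction ${DB_NAME} | gzip > ${BACKUP_FILE}"
```

`--single-transaction` keeps the dump consistent for InnoDB tables without locking them for the whole run.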

Clickhouse

First of all, install the latest version of ClickHouse:

sudo apt-get install dirmngr
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4

echo "deb http://repo.yandex.ru/clickhouse/deb/stable/ main/" | sudo tee /etc/apt/sources.list.d/clickhouse.list
sudo apt-get update

sudo apt-get install -y clickhouse-server clickhouse-client

sudo service clickhouse-server start
clickhouse-client --host 127.0.0.1
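Besides the native client, ClickHouse also answers over HTTP on port 8123, which is exactly the interface the Glaber backend and frontend will use later. A quick sanity check, assuming the stock default URL (printed for review):

```shell
# "SELECT 1" over the HTTP interface should come back as a bare "1".
# The URL below is the stock ClickHouse default, reused in the configs later.
CH_URL="http://127.0.0.1:8123"
echo "curl -s '${CH_URL}/?query=SELECT%201'"
```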

IMPORTANT NOTE for Ukrainian users

Since Yandex is blocked in your country, please use the Altinity mirror instead:
https://packagecloud.io/altinity/clickhouse/install

At this point you should be inside the ClickHouse client and ready to create the history and trend tables:

CREATE DATABASE zabbix;

CREATE TABLE zabbix.history (
    day       Date,
    itemid    UInt64,
    clock     DateTime,
    ns        UInt32,
    value     Int64,
    value_dbl Float64,
    value_str String
) ENGINE = MergeTree(day, (itemid, clock), 8192);

CREATE TABLE zabbix.history_buffer (
    day       Date,
    itemid    UInt64,
    clock     DateTime,
    ns        UInt32,
    value     Int64,
    value_dbl Float64,
    value_str String
) ENGINE = Buffer(zabbix, history, 8, 30, 60, 9000, 60000, 256000, 256000000);

CREATE MATERIALIZED VIEW zabbix.trends
ENGINE = AggregatingMergeTree() PARTITION BY toYYYYMM(clock) ORDER BY (clock, itemid)
AS SELECT
    toStartOfHour(clock) AS clock,
    itemid,
    count(value_dbl) AS num,
    min(value_dbl)   AS value_min,
    avg(value_dbl)   AS value_avg,
    max(value_dbl)   AS value_max
FROM zabbix.history
GROUP BY clock, itemid;

CREATE MATERIALIZED VIEW zabbix.trends_uint
ENGINE = AggregatingMergeTree() PARTITION BY toYYYYMM(clock) ORDER BY (clock, itemid)
AS SELECT
    toStartOfHour(clock) AS clock,
    itemid,
    count(value) AS num,
    min(value)   AS value_min,
    avg(value)   AS value_avg,
    max(value)   AS value_max
FROM zabbix.history
GROUP BY clock, itemid;
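It's worth confirming the DDL took effect before moving on. A quick check, assuming clickhouse-client runs on the same host (printed for review):

```shell
# List the objects created above; expect history, history_buffer,
# trends and trends_uint in the output.
QUERY="SHOW TABLES FROM zabbix"
echo "clickhouse-client --host 127.0.0.1 --query \"${QUERY}\""
```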

We have created the tables, but we haven't yet pointed Glaber at the ClickHouse server in its settings.
Open zabbix_server.conf (the default path is /etc/zabbix/zabbix_server.conf) in your favorite editor and add the following lines:

HistoryStorageURL=http://127.0.0.1:8123
HistoryStorageTypes=uint,dbl,str,text
HistoryStorageType=clickhouse
HistoryStorageDBName=zabbix

The backend is now ready to talk to ClickHouse; the same changes remain to be made for the frontend.
Open zabbix.conf.php (by default /usr/share/zabbix/conf/zabbix.conf.php) and add the following lines:

global $HISTORY;

$HISTORY['storagetype']='clickhouse';
$HISTORY['url']   = 'http://127.0.0.1:8123';
$HISTORY['types'] = ['uint', 'dbl','str','text'];
$HISTORY['dbname'] = 'zabbix';

Asynchronous pollers

The default ClickHouse setup is ready; the last thing left to change is the synchronous pollers (they're so slow…).
Open zabbix_server.conf again and add these lines at the end:

StartPollersAsyncSNMP=10
StartPollersAsyncAGENT=5

That's enough for most setups: below 10k NVPS it works like a charm. Otherwise, increase those values.

Migrating existing history from MySQL to ClickHouse

ClickHouse has official MySQL support, so all we need to do is run a few queries. The target column lists are spelled out explicitly, since the ClickHouse tables order their columns differently from the MySQL ones:

INSERT INTO zabbix.history (day, itemid, clock, ns, value_dbl) SELECT toDate(clock), itemid, clock, ns, value FROM mysql('localhost:3306', 'zabbix', 'history', 'user', 'password');
INSERT INTO zabbix.history (day, itemid, clock, ns, value) SELECT toDate(clock), itemid, clock, ns, value FROM mysql('localhost:3306', 'zabbix', 'history_uint', 'user', 'password');
INSERT INTO zabbix.history (day, itemid, clock, ns, value_str) SELECT toDate(clock), itemid, clock, ns, value FROM mysql('localhost:3306', 'zabbix', 'history_str', 'user', 'password');
INSERT INTO zabbix.history (day, itemid, clock, ns, value_str) SELECT toDate(clock), itemid, clock, ns, value FROM mysql('localhost:3306', 'zabbix', 'history_text', 'user', 'password');
INSERT INTO zabbix.trends (itemid, clock, num, value_min, value_avg, value_max) SELECT itemid, clock, num, value_min, value_avg, value_max FROM mysql('localhost:3306', 'zabbix', 'trends', 'user', 'password');
INSERT INTO zabbix.trends_uint (itemid, clock, num, value_min, value_avg, value_max) SELECT itemid, clock, num, value_min, value_avg, value_max FROM mysql('localhost:3306', 'zabbix', 'trends_uint', 'user', 'password');

For really big tables, check the ClickHouse settings first (especially max_execution_time).
For PostgreSQL I'd recommend pg2ch.
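Whichever route you take, a quick row-count comparison catches truncated transfers. A sketch for the MySQL case, reusing the same placeholder credentials as the queries above (printed for review):

```shell
# Compare per-table row counts between the MySQL source and the ClickHouse
# target; 'user'/'password' are the same placeholders as in the queries above.
for T in history history_uint history_str history_text; do
  echo "mysql -u user -p zabbix -e 'SELECT COUNT(*) FROM ${T}'"
done
# All four source tables land in the single zabbix.history table,
# so its count() should equal the sum of the four MySQL counts.
echo "clickhouse-client --query 'SELECT count() FROM zabbix.history'"
```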

Final

That's all! The basic setup is ready to use; all that remains is to switch the packages over to Glaber:

sudo apt-get update && sudo apt-get install -y wget gnupg
wget https://glaber.io/repo/key/repo.gpg
sudo apt-key add repo.gpg && rm repo.gpg
echo "deb https://glaber.io/repo/ubuntu bionic main" | sudo tee -a /etc/apt/sources.list.d/glaber.list
echo "deb https://repo.utelecom.com.ua/ubuntu bionic main" | sudo tee -a /etc/apt/sources.list.d/glaber.list
sudo apt-get update
sudo apt-get install -y zabbix-server-mysql zabbix-agent zabbix-frontend-php

Now restart zabbix-server, zabbix-agent, and nginx (or Apache) as usual.

Postscriptum

I'll publish new articles in the near future covering advanced features like nmap and HA.
Stay tuned for updates.