This is the documentation for Altcraft Platform v73 and is no longer maintained.
Up-to-date information for the current platform version (v74) is available on the corresponding page of the latest documentation.

Cluster and Replication Setup

Deployment Scheme

The deployment scheme used in this guide consists of two application hosts (app1, app2) and three database hosts (db1, db2, db3). All databases and coordination services described below are replicated across db1, db2, and db3.

danger

Throughout this guide, the character set abcdefghijklmnopqrstuvwxyzABCDEF is used as a placeholder for passwords and secrets. Be sure to generate your own, more complex values of at least 32 characters.
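
If you need a quick way to produce such a value, one possible approach (shown here only as an illustration) is to generate 32 random alphanumeric characters from /dev/urandom:

tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32; echo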

MongoDB Replica Set Configuration

According to the deployment scheme, the replica set consists of nodes on the hosts db1, db2, and db3.

info

Read more about the replication mechanism in the official documentation: https://www.mongodb.com/docs/manual/replication/

  1. Add a new parameter with the replica set name to the configuration file [db1, db2, db3]:
replication:
  replSetName: rs0

Example full configuration file for db1:

storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      directoryForIndexes: true

replication:
  replSetName: rs0

systemLog:
  destination: file
  logAppend: true
  logRotate: reopen
  path: /var/log/mongodb/mongod.log

net:
  port: 27017
  bindIp: "127.0.0.1,db1"
  2. After restarting MongoDB, connect to the database via MongoDB Shell and initialize the replica set [db1]:
tip

Make sure that the instance you are connecting to has the Primary status.

> rs.initiate({
  _id: "rs0",
  version: 1,
  members: [
    {
      _id: 0,
      host: "db1:27017"
    },
    {
      _id: 1,
      host: "db2:27017"
    },
    {
      _id: 2,
      host: "db3:27017"
    }
  ]
});
  3. Check the replication status [db1]:
rs0: PRIMARY > rs.status();
  4. Update the connection parameters in the main.json configuration [app1, app2]:
{
  "MONGO_AUTH_DB": "admin",
  "CONTROLDB_NAME": "control",
  "CONTROLDB_USER": "altcraft",
  "CONTROLDB_PASS": "abcdefghijklmnopqrstuvwxyzABCDEF",
  "CONTROLDB_REPL_SET_NAME": "rs0",
  "CONTROLDB": [
    {
      "IP": "db1",
      "PORT": 27017
    },
    {
      "IP": "db2",
      "PORT": 27017
    },
    {
      "IP": "db3",
      "PORT": 27017
    }
  ],
  "FILEDB_ENABLE": true,
  "FILEDB_NAME": "filedb",
  "FILEDB_USER": "altcraft",
  "FILEDB_PASS": "abcdefghijklmnopqrstuvwxyzABCDEF",
  "FILEDB_REPL_SET_NAME": "rs0",
  "FILEDB": [
    {
      "IP": "db1",
      "PORT": 27017
    },
    {
      "IP": "db2",
      "PORT": 27017
    },
    {
      "IP": "db3",
      "PORT": 27017
    }
  ]
}
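
As an optional sanity check from app1 or app2 (a sketch that assumes the altcraft user already exists in the admin database with the password configured above), you can connect with the replica set connection string and ping the control database:

mongosh "mongodb://altcraft:abcdefghijklmnopqrstuvwxyzABCDEF@db1:27017,db2:27017,db3:27017/control?authSource=admin&replicaSet=rs0" \
  --eval 'db.runCommand({ ping: 1 })'

A response of { ok: 1 } confirms that the replica set is reachable with the same parameters the platform will use.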

ClickHouse Cluster Setup

According to the deployment scheme, the replicas (a single replicated shard) and the coordination service (ZooKeeper or ClickHouse Keeper) are hosted on db1, db2, and db3.

info

In this guide, ClickHouse Keeper is used instead of Zookeeper.

info

Read more about the replication mechanism in the official documentation:
https://clickhouse.com/docs/engines/table-engines/mergetree-family/replication/

  1. Add the cluster configuration with replicas [db1, db2, db3]:
cat <<EOF > /etc/clickhouse-server/config.d/clusters.xml
<?xml version="1.0" ?>
<clickhouse>
    <remote_servers>
        <altcraft>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>db1</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>db2</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>db3</host>
                    <port>9000</port>
                </replica>
            </shard>
        </altcraft>
    </remote_servers>
</clickhouse>
EOF
  2. Add the macros configuration [db1, db2, db3]:
tip

Macros are used in ClickHouse Keeper paths. The replica parameter on each host must match the current host’s name.

Example macros configuration for db1:

cat <<EOF > /etc/clickhouse-server/config.d/macros.xml
<?xml version="1.0" ?>
<clickhouse>
    <macros>
        <cluster>altcraft</cluster>
        <replica>db1</replica>
        <shard>01</shard>
    </macros>
</clickhouse>
EOF

Example macros configuration for db2:

cat <<EOF > /etc/clickhouse-server/config.d/macros.xml
<?xml version="1.0" ?>
<clickhouse>
    <macros>
        <cluster>altcraft</cluster>
        <replica>db2</replica>
        <shard>01</shard>
    </macros>
</clickhouse>
EOF

Example macros configuration for db3:

cat <<EOF > /etc/clickhouse-server/config.d/macros.xml
<?xml version="1.0" ?>
<clickhouse>
    <macros>
        <cluster>altcraft</cluster>
        <replica>db3</replica>
        <shard>01</shard>
    </macros>
</clickhouse>
EOF
  3. Add the ClickHouse Keeper configuration [db1, db2, db3].
tip

On each host, the server_id parameter must be set to that host's unique numeric identifier (it must also match the corresponding id in the raft_configuration section).

Example configuration for db1:

cat <<EOF > /etc/clickhouse-server/config.d/keeper.xml
<?xml version="1.0" ?>
<clickhouse>
    <keeper_server>
        <tcp_port>9181</tcp_port>
        <server_id>1</server_id>
        <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>

        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <rotate_log_storage_interval>10000</rotate_log_storage_interval>
        </coordination_settings>

        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>db1</hostname>
                <port>9444</port>
            </server>
            <server>
                <id>2</id>
                <hostname>db2</hostname>
                <port>9444</port>
            </server>
            <server>
                <id>3</id>
                <hostname>db3</hostname>
                <port>9444</port>
            </server>
        </raft_configuration>
    </keeper_server>

    <zookeeper>
        <node>
            <host>db1</host>
            <port>9181</port>
        </node>
        <node>
            <host>db2</host>
            <port>9181</port>
        </node>
        <node>
            <host>db3</host>
            <port>9181</port>
        </node>
    </zookeeper>

    <distributed_ddl>
        <path>/clickhouse/altcraft/task_queue/ddl</path>
    </distributed_ddl>
</clickhouse>
EOF
  4. By default, ClickHouse and ClickHouse Keeper listen on localhost. To listen on all addresses, override the configuration [db1, db2, db3]:
cat <<EOF > /etc/clickhouse-server/config.d/settings.xml
<?xml version="1.0" ?>
<clickhouse>
    <listen_host>0.0.0.0</listen_host>
</clickhouse>
EOF
  5. Update the connection parameters in the main.json configuration [app1, app2]:
{
  "CLICKHOUSE_SYSTEM": {
    "HOST": "db1",
    "PORT": 9000,
    "USER": "altcraft",
    "PASSWORD": "abcdefghijklmnopqrstuvwxyzABCDEF",
    "ALT_HOSTS": [
      {
        "HOST": "db2",
        "PORT": 9000
      },
      {
        "HOST": "db3",
        "PORT": 9000
      }
    ],
    "REPLICATED": true,
    "STRATEGY": "random"
  }
}
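
To confirm that the cluster, macros, and Keeper settings were picked up after restarting clickhouse-server, you can query the system tables and create a throwaway replicated table (a sketch: test_repl is a hypothetical table name, and add --user/--password to clickhouse-client if authentication is required in your setup):

clickhouse-client --host db1 --query "SELECT cluster, shard_num, replica_num, host_name FROM system.clusters WHERE cluster = 'altcraft'"
clickhouse-client --host db1 --query "SELECT * FROM system.macros"

# A table created with ON CLUSTER should appear on db1, db2, and db3:
clickhouse-client --host db1 --query "CREATE TABLE default.test_repl ON CLUSTER altcraft (id UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/{cluster}/tables/{shard}/test_repl', '{replica}') ORDER BY id"
clickhouse-client --host db2 --query "EXISTS TABLE default.test_repl"
clickhouse-client --host db1 --query "DROP TABLE default.test_repl ON CLUSTER altcraft SYNC"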

SSDB Replication Configuration

According to the deployment scheme, the replica consists of nodes on hosts db1, db2, and db3.

info

Read more about the replication mechanism in the official documentation: https://ideawu.github.io/ssdb-docs/replication.html

  1. To configure replication, add the replication parameter to the configuration file.
tip

Note: in the configuration file, you must use the tab character for indentation, not spaces.

Example configuration for db1:

replication:
	binlog: yes
	capacity: 100000000
	sync_speed: -1
	slaveof:
		id: db2
		type: mirror
		host: db2
		port: 4420
		auth: abcdefghijklmnopqrstuvwxyzABCDEF
		recv_timeout: 900
	slaveof:
		id: db3
		type: mirror
		host: db3
		port: 4420
		auth: abcdefghijklmnopqrstuvwxyzABCDEF
		recv_timeout: 900

Example configuration for db2:

replication:
	binlog: yes
	capacity: 100000000
	sync_speed: -1
	slaveof:
		id: db1
		type: mirror
		host: db1
		port: 4420
		auth: abcdefghijklmnopqrstuvwxyzABCDEF
		recv_timeout: 900
	slaveof:
		id: db3
		type: mirror
		host: db3
		port: 4420
		auth: abcdefghijklmnopqrstuvwxyzABCDEF
		recv_timeout: 900

Example configuration for db3:

replication:
	binlog: yes
	capacity: 100000000
	sync_speed: -1
	slaveof:
		id: db1
		type: mirror
		host: db1
		port: 4420
		auth: abcdefghijklmnopqrstuvwxyzABCDEF
		recv_timeout: 900
	slaveof:
		id: db2
		type: mirror
		host: db2
		port: 4420
		auth: abcdefghijklmnopqrstuvwxyzABCDEF
		recv_timeout: 900
info

Repeat the configuration changes for the other SSDB instance, changing the password and port.

  2. After restarting SSDB, verify the replication status by running the command on [db1]:
echo info | /usr/local/ssdb/tools/ssdb-cli -h db1 -p 4420 -a abcdefghijklmnopqrstuvwxyzABCDEF

The values of binlogs.max_seq and replication.slaveof.last_seq must match. Example output for db1:

binlogs
    capacity : 100000000
    min_seq  : 0
    max_seq  : 2
replication
    client db2:34526
        type     : mirror
        status   : SYNC
        last_seq : 2
    client db3:34526
        type     : mirror
        status   : SYNC
        last_seq : 2
replication
    slaveof db2:4420
        id         : db2
        type       : mirror
        status     : SYNC
        last_seq   : 2
        copy_count : 0
        sync_count : 1
replication
    slaveof db3:4420
        id         : db3
        type       : mirror
        status     : SYNC
        last_seq   : 2
        copy_count : 0
        sync_count : 1
  3. Update the connection parameters in the main.json configuration [app1, app2]:
{
  "SSDB_HBSUPP": [
    {
      "IP": "db1",
      "PORT": 4420,
      "PASS": "abcdefghijklmnopqrstuvwxyzABCDEF"
    },
    {
      "IP": "db2",
      "PORT": 4420,
      "PASS": "abcdefghijklmnopqrstuvwxyzABCDEF"
    },
    {
      "IP": "db3",
      "PORT": 4420,
      "PASS": "abcdefghijklmnopqrstuvwxyzABCDEF"
    }
  ],
  "SSDB_NOTIFY": [
    {
      "IP": "db1",
      "PORT": 4430,
      "PASS": "abcdefghijklmnopqrstuvwxyzABCDEF"
    },
    {
      "IP": "db2",
      "PORT": 4430,
      "PASS": "abcdefghijklmnopqrstuvwxyzABCDEF"
    },
    {
      "IP": "db3",
      "PORT": 4430,
      "PASS": "abcdefghijklmnopqrstuvwxyzABCDEF"
    }
  ]
}
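
The same status check can be run against the second SSDB instance (the notify instance referenced in main.json), assuming it listens on port 4430 and uses the placeholder password from this guide:

echo info | /usr/local/ssdb/tools/ssdb-cli -h db1 -p 4430 -a abcdefghijklmnopqrstuvwxyzABCDEF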

Configuring the RabbitMQ Cluster

According to the deployment scheme, the cluster consists of nodes on hosts db1, db2, and db3.

info

Read more about how the cluster works on this page: https://www.rabbitmq.com/clustering.html

  1. Update the contents of the /var/lib/rabbitmq/.erlang.cookie file [db1, db2, db3]:
printf "ABCDEFGHIJKLMN" > /var/lib/rabbitmq/.erlang.cookie
  2. After restarting RabbitMQ, join the cluster on [db2, db3]:
{
  rabbitmqctl stop_app
  rabbitmqctl reset
  rabbitmqctl join_cluster rabbit@db1
  rabbitmqctl start_app
}
  3. Check the cluster status [db1, db2, db3]:
rabbitmqctl cluster_status

Example output:

Cluster status of node rabbit@db1 ...
Basics

Cluster name: rabbit@db1

Disk Nodes

rabbit@db1
rabbit@db2
rabbit@db3

Running Nodes

rabbit@db1
rabbit@db2
rabbit@db3

Versions

rabbit@db1: RabbitMQ 3.10.6 on Erlang 24.3.4
rabbit@db2: RabbitMQ 3.10.6 on Erlang 24.3.4
rabbit@db3: RabbitMQ 3.10.6 on Erlang 24.3.4

Maintenance status

Node: rabbit@db1, status: not under maintenance
Node: rabbit@db2, status: not under maintenance
Node: rabbit@db3, status: not under maintenance

Alarms

(none)

Network Partitions

(none)

Listeners

Node: rabbit@db1, interface: [::], port: 15672, protocol: http, purpose: HTTP API
Node: rabbit@db1, interface: [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication
Node: rabbit@db1, interface: [::], port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0
Node: rabbit@db2, interface: [::], port: 15672, protocol: http, purpose: HTTP API
Node: rabbit@db2, interface: [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication
Node: rabbit@db2, interface: [::], port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0
Node: rabbit@db3, interface: [::], port: 15672, protocol: http, purpose: HTTP API
Node: rabbit@db3, interface: [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication
Node: rabbit@db3, interface: [::], port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0

Feature flags

Flag: classic_mirrored_queue_version, state: enabled
Flag: drop_unroutable_metric, state: disabled
Flag: empty_basic_get_metric, state: disabled
Flag: implicit_default_bindings, state: enabled
Flag: maintenance_mode_status, state: enabled
Flag: quorum_queue, state: enabled
Flag: stream_queue, state: enabled
Flag: user_limits, state: enabled
Flag: virtual_host_metadata, state: enabled
  4. To enable mirroring of all queues in all virtual hosts, run the following script [db1]:
vhosts=($(rabbitmqctl list_vhosts -q))
for item in "${vhosts[@]}"; do
  rabbitmqctl set_policy -p "$item" ha-all ".*" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
done
  5. Update the connection parameters in the main.json configuration [app1, app2]:
{
  "RABBITMQ_CLUSTER": {
    "USER": "altcraft",
    "PASSWORD": "abcdefghijklmnopqrstuvwxyzABCDEF",
    "FAIL_MODE": "failover",
    "STRATEGY": "in_order",
    "RECONNECT_TIME_PERIOD": 5,
    "HOSTS": [
      {
        "HOST": "db1",
        "PORT": 5672,
        "WEIGHT": 1
      },
      {
        "HOST": "db2",
        "PORT": 5672,
        "WEIGHT": 2
      },
      {
        "HOST": "db3",
        "PORT": 5672,
        "WEIGHT": 3
      }
    ]
  }
}
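
To confirm that the mirroring policy from step 4 is in place, you can list the policies in every virtual host and check which policy each queue has picked up (an optional check, not part of the original procedure):

for vhost in $(rabbitmqctl list_vhosts -q); do rabbitmqctl list_policies -p "$vhost"; done
rabbitmqctl list_queues name policy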

Kvrocks Replication Configuration

According to the deployment scheme, the replica consists of nodes on hosts db1, db2, and db3.

  1. Update the Kvrocks configuration by adding the masterauth parameter [db1, db2, db3]:
# kvrocks.conf
masterauth abcdefghijklmnopqrstuvwxyzABCDEF
  2. On the slave nodes, add the slaveof parameter specifying the master’s address [db2, db3]:
# kvrocks.conf
slaveof 192.168.0.10 6666

Where 192.168.0.10 is the address of db1.

  3. Install Redis Sentinel [db1, db2, db3]:
{
  apt update
  apt install -y redis-sentinel
}
  4. Add the Redis Sentinel configuration [db1, db2, db3]:
# sentinel.conf
bind 0.0.0.0
port 26379

daemonize yes
pidfile "/var/run/sentinel/redis-sentinel.pid"
logfile "/var/log/redis/redis-sentinel.log"

sentinel deny-scripts-reconfig yes

dir "/var/lib/redis"

sentinel monitor altcraft db1 6666 2
sentinel auth-pass altcraft abcdefghijklmnopqrstuvwxyzABCDEF
  5. After restarting Kvrocks and Redis Sentinel, connect to one of the nodes and check the replication status [db1, db2, db3]:
redis-cli -h db1 -p 6666 -a abcdefghijklmnopqrstuvwxyzABCDEF

Example output:

192.168.0.10:6666> INFO REPLICATION
# Replication
role:master
connected_slaves:2
slave0:ip=192.168.0.11,port=6666,offset=12068788,lag=0
slave1:ip=192.168.0.12,port=6666,offset=12068788,lag=0
master_repl_offset:12068788
  6. Update the connection parameters in the main.json configuration [app1, app2]:
{
  "CAMP_DUPLICATESDB": {
    "MODE": "sentinel_replica",
    "PASSWORD": "abcdefghijklmnopqrstuvwxyzABCDEF",
    "NODES": [
      {
        "HOST": "db1",
        "PORT": 26379
      },
      {
        "HOST": "db2",
        "PORT": 26379
      },
      {
        "HOST": "db3",
        "PORT": 26379
      }
    ],
    "SENTINEL": {
      "MASTER_NAME": "altcraft"
    }
  }
}
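
You can also ask Sentinel directly which node it currently treats as the master and which replicas it is tracking (a sketch; the Sentinel configuration above does not set a password, so no -a flag is needed for these commands):

redis-cli -h db1 -p 26379 SENTINEL get-master-addr-by-name altcraft
redis-cli -h db1 -p 26379 SENTINEL replicas altcraft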

Altcraft Platform Cluster Setup

According to the deployment scheme, the application is deployed on the hosts app1 and app2.

  1. Only two services operate in cluster mode: proctask and procworkflow. Update their parameters in the main.json configuration [app1, app2]:
{
  "CLUSTER": {
    "HOST_ID": "app1",
    "PROC_TASKS": [
      {
        "HOST_ID": "app1",
        "RPC_HOST": "0.0.0.0",
        "RPC_HOST_CLI": "app1",
        "RPC_PORT": 8962,
        "FORCE_MASTER_ROLE": true
      },
      {
        "HOST_ID": "app2",
        "RPC_HOST": "0.0.0.0",
        "RPC_HOST_CLI": "app2",
        "RPC_PORT": 8962,
        "FORCE_MASTER_ROLE": false
      }
    ],
    "PROC_WORKFLOW": [
      {
        "HOST_ID": "app1",
        "RPC_HOST": "0.0.0.0",
        "RPC_HOST_CLI": "app1",
        "RPC_PORT": 7962,
        "FORCE_MASTER_ROLE": true
      },
      {
        "HOST_ID": "app2",
        "RPC_HOST": "0.0.0.0",
        "RPC_HOST_CLI": "app2",
        "RPC_PORT": 7962,
        "FORCE_MASTER_ROLE": false
      }
    ]
  }
}

Here:

  • HOST_ID — unique host identifier;
  • RPC_HOST — the address on which the service accepts connections;
  • RPC_HOST_CLI — the address that clients use to connect;
  • FORCE_MASTER_ROLE — forces the master role for the service on this host.
  2. After restarting the services, the logs on [app1] should display the following lines:
{"level":"info","proc":"proctask","pid":306390,"hid":"app1","ver":"v2023.3.65.2060","file":"server/task_server.go:314","time":"2023-10-17T07:00:00Z","message":"Altcraft Platform [proctask] v2023.3.65.2060-g8377e3e65b ((HEAD) @Now app1 started on app1:8962, is master true"}
{"level":"info","proc":"procworkflow","pid":306827,"hid":"app1","ver":"v2023.3.65.2060","file":"workflow/controlworkflow.go:108","time":"2023-10-17T07:00:00Z","message":"Altcraft Platform [procworkflow] v2023.3.65.2060-g8377e3e65b ((HEAD) @Now started, workers: 6, process: 5896e0fa252eefa7_306827, host_id: app1, is_primary: true, rpc addr: app1:7962"}

Configuring Custom Reader Groups

You can configure custom reader groups that will process tasks for specific accounts only. By default, there is a single reader group called default. If an account has no assigned reader group, its tasks are automatically placed into the default group.

Groups are configured in the master host configuration using the READERS_GROUPS parameter, for example:

{
  "PROC_WORKFLOW": [
    {
      "HOST_ID": "app1",
      "RPC_HOST": "0.0.0.0",
      "RPC_HOST_CLI": "app1",
      "RPC_PORT": 7962,
      "FORCE_MASTER_ROLE": true,
      "READERS_GROUPS": {
        "acc2": {
          "INCLUDE": "2",
          "EXCLUDE": ""
        },
        "acc1-5": {
          "INCLUDE": "1-5",
          "EXCLUDE": "2"
        }
      }
    },
    {
      "HOST_ID": "app2",
      "RPC_HOST": "0.0.0.0",
      "RPC_HOST_CLI": "app2",
      "RPC_PORT": 7962,
      "FORCE_MASTER_ROLE": false,
      "READERS_GROUP": "acc2"
    }
  ]
}

The INCLUDE parameter specifies the account IDs to process (single IDs or ranges such as "1-5"), and EXCLUDE specifies the account IDs to exclude.

info

Group names must be unique, and an account must not appear in more than one group. For example, if you do not exclude account 2 from the acc1-5 group in the example above, the service will fail to start. On slave hosts, the READERS_GROUP parameter assigns the host to one of the groups defined on the master. After you apply the changes, each process handles tasks only for the accounts in its assigned group; if no group parameters are set for a slave host, it processes tasks from the default group.

Last updated on Jun 30, 2025