| Name | Date | Size | #Lines | LOC |
|------|------|------|--------|-----|
| AoEtarget.in | 03-May-2022 | 7 KiB | 246 | 181 |
| AudibleAlarm | 03-May-2022 | 4.1 KiB | 189 | 134 |
| CTDB.in | 03-May-2022 | 34.3 KiB | 995 | 697 |
| ClusterMon | 03-May-2022 | 7.3 KiB | 272 | 190 |
| Delay | 03-May-2022 | 4.9 KiB | 230 | 170 |
| Dummy | 03-May-2022 | 5.4 KiB | 187 | 121 |
| EvmsSCC | 03-May-2022 | 6.7 KiB | 223 | 138 |
| Evmsd | 03-May-2022 | 4.1 KiB | 162 | 102 |
| Filesystem | 03-May-2022 | 27.8 KiB | 1,032 | 726 |
| ICP | 03-May-2022 | 6.3 KiB | 305 | 200 |
| IPaddr | 03-May-2022 | 23.1 KiB | 913 | 665 |
| IPaddr2 | 03-May-2022 | 37.2 KiB | 1,309 | 979 |
| IPsrcaddr | 03-May-2022 | 15 KiB | 580 | 359 |
| IPv6addr.c | 03-May-2022 | 22.5 KiB | 883 | 626 |
| IPv6addr_utils.c | 03-May-2022 | 4.3 KiB | 148 | 99 |
| LVM | 03-May-2022 | 12.1 KiB | 471 | 297 |
| LVM-activate | 03-May-2022 | 25.6 KiB | 949 | 622 |
| LinuxSCSI | 03-May-2022 | 8.1 KiB | 323 | 220 |
| MailTo | 03-May-2022 | 4.2 KiB | 200 | 129 |
| Makefile.am | 03-May-2022 | 5 KiB | 241 | 198 |
| ManageRAID.in | 03-May-2022 | 8.5 KiB | 392 | 232 |
| ManageVE.in | 03-May-2022 | 7 KiB | 321 | 168 |
| NodeUtilization | 03-May-2022 | 8.8 KiB | 238 | 169 |
| Pure-FTPd | 03-May-2022 | 6.5 KiB | 261 | 173 |
| README | 03-May-2022 | 1.3 KiB | 45 | 28 |
| README.galera | 03-May-2022 | 5.5 KiB | 149 | 106 |
| README.mariadb.md | 03-May-2022 | 4.1 KiB | 157 | 113 |
| Raid1 | 03-May-2022 | 14.6 KiB | 587 | 461 |
| Route | 03-May-2022 | 11 KiB | 354 | 254 |
| SAPDatabase | 03-May-2022 | 17.4 KiB | 402 | 294 |
| SAPInstance | 03-May-2022 | 41.7 KiB | 1,077 | 782 |
| SendArp | 03-May-2022 | 7.1 KiB | 278 | 180 |
| ServeRAID | 03-May-2022 | 10.5 KiB | 428 | 272 |
| SphinxSearchDaemon | 03-May-2022 | 6.1 KiB | 231 | 167 |
| Squid.in | 03-May-2022 | 11.8 KiB | 473 | 356 |
| Stateful | 03-May-2022 | 4.6 KiB | 195 | 130 |
| SysInfo.in | 03-May-2022 | 9.5 KiB | 373 | 287 |
| VIPArip | 03-May-2022 | 7 KiB | 315 | 253 |
| VirtualDomain | 03-May-2022 | 38.5 KiB | 1,110 | 849 |
| WAS | 03-May-2022 | 12.3 KiB | 573 | 364 |
| WAS6 | 03-May-2022 | 12.4 KiB | 547 | 350 |
| WinPopup | 03-May-2022 | 5.1 KiB | 238 | 163 |
| Xen | 03-May-2022 | 18.8 KiB | 654 | 512 |
| Xinetd | 03-May-2022 | 5.9 KiB | 257 | 196 |
| ZFS | 03-May-2022 | 5.9 KiB | 204 | 132 |
| aliyun-vpc-move-ip | 03-May-2022 | 10.3 KiB | 379 | 293 |
| anything | 03-May-2022 | 10.6 KiB | 345 | 270 |
| apache | 03-May-2022 | 18.8 KiB | 745 | 548 |
| apache-conf.sh | 03-May-2022 | 4 KiB | 197 | 134 |
| asterisk | 03-May-2022 | 14.5 KiB | 486 | 337 |
| aws-vpc-move-ip | 03-May-2022 | 14.3 KiB | 468 | 351 |
| aws-vpc-route53.in | 03-May-2022 | 13.9 KiB | 450 | 316 |
| awseip | 03-May-2022 | 8.3 KiB | 288 | 201 |
| awsvip | 03-May-2022 | 7.1 KiB | 252 | 170 |
| azure-events.in | 03-May-2022 | 29.3 KiB | 847 | 664 |
| azure-lb | 03-May-2022 | 5.7 KiB | 230 | 171 |
| clvm.in | 03-May-2022 | 11.5 KiB | 458 | 322 |
| conntrackd.in | 03-May-2022 | 9.5 KiB | 336 | 244 |
| crypt | 03-May-2022 | 10 KiB | 343 | 250 |
| db2 | 03-May-2022 | 24.8 KiB | 918 | 577 |
| dhcpd | 03-May-2022 | 18.7 KiB | 559 | 391 |
| dnsupdate.in | 03-May-2022 | 7.7 KiB | 298 | 234 |
| docker | 03-May-2022 | 16.9 KiB | 606 | 444 |
| docker-compose | 03-May-2022 | 7.2 KiB | 295 | 217 |
| dovecot | 03-May-2022 | 8 KiB | 339 | 227 |
| dummypy.in | 03-May-2022 | 5.4 KiB | 165 | 95 |
| eDir88.in | 03-May-2022 | 16.2 KiB | 477 | 319 |
| ethmonitor | 03-May-2022 | 18.3 KiB | 577 | 397 |
| exportfs | 03-May-2022 | 12.6 KiB | 486 | 407 |
| findif.sh | 03-May-2022 | 6.9 KiB | 261 | 226 |
| fio.in | 03-May-2022 | 4.4 KiB | 179 | 124 |
| galera.in | 03-May-2022 | 35.4 KiB | 1,095 | 786 |
| garbd | 03-May-2022 | 12.8 KiB | 437 | 299 |
| gcp-ilb | 03-May-2022 | 9.6 KiB | 344 | 248 |
| gcp-pd-move.in | 03-May-2022 | 11.6 KiB | 383 | 293 |
| gcp-vpc-move-ip.in | 03-May-2022 | 12.5 KiB | 375 | 261 |
| gcp-vpc-move-route.in | 03-May-2022 | 15.7 KiB | 491 | 369 |
| gcp-vpc-move-vip.in | 03-May-2022 | 15.6 KiB | 467 | 369 |
| http-mon.sh | 03-May-2022 | 3.3 KiB | 141 | 102 |
| iSCSILogicalUnit.in | 03-May-2022 | 27.6 KiB | 786 | 592 |
| iSCSITarget.in | 03-May-2022 | 23 KiB | 705 | 523 |
| ids | 03-May-2022 | 23.3 KiB | 752 | 456 |
| iface-bridge | 03-May-2022 | 25.7 KiB | 844 | 617 |
| iface-vlan | 03-May-2022 | 13.2 KiB | 476 | 341 |
| ipsec | 03-May-2022 | 5.9 KiB | 197 | 136 |
| iscsi | 03-May-2022 | 13.3 KiB | 517 | 402 |
| jboss | 03-May-2022 | 21.1 KiB | 673 | 516 |
| jira.in | 03-May-2022 | 7.8 KiB | 292 | 186 |
| kamailio.in | 03-May-2022 | 24.4 KiB | 742 | 536 |
| lvm-clvm.sh | 03-May-2022 | 1.8 KiB | 87 | 51 |
| lvm-plain.sh | 03-May-2022 | 1.1 KiB | 63 | 27 |
| lvm-tag.sh | 03-May-2022 | 4.5 KiB | 206 | 118 |
| lvmlockd | 03-May-2022 | 9.9 KiB | 388 | 268 |
| lxc.in | 03-May-2022 | 11.7 KiB | 359 | 242 |
| lxd-info.in | 03-May-2022 | 4.1 KiB | 157 | 102 |
| machine-info.in | 03-May-2022 | 4.3 KiB | 158 | 103 |
| mariadb.in | 03-May-2022 | 33.3 KiB | 1,059 | 723 |
| mdraid | 03-May-2022 | 16.4 KiB | 585 | 421 |
| metadata.rng | 03-May-2022 | 2.1 KiB | 92 | 82 |
| minio | 03-May-2022 | 7.8 KiB | 290 | 205 |
| mpathpersist.in | 03-May-2022 | 22.2 KiB | 683 | 514 |
| mysql | 03-May-2022 | 36 KiB | 1,079 | 728 |
| mysql-common.sh | 03-May-2022 | 10.3 KiB | 330 | 264 |
| mysql-proxy | 03-May-2022 | 24.7 KiB | 742 | 494 |
| nagios | 03-May-2022 | 7.1 KiB | 247 | 179 |
| named | 03-May-2022 | 14.1 KiB | 515 | 377 |
| nfsnotify.in | 03-May-2022 | 9.2 KiB | 324 | 219 |
| nfsserver | 03-May-2022 | 20.5 KiB | 908 | 710 |
| nfsserver-redhat.sh | 03-May-2022 | 5.1 KiB | 172 | 139 |
| nginx | 03-May-2022 | 22.2 KiB | 957 | 751 |
| nvmet-namespace | 03-May-2022 | 5.9 KiB | 206 | 145 |
| nvmet-port | 03-May-2022 | 6.5 KiB | 239 | 171 |
| nvmet-subsystem | 03-May-2022 | 5.6 KiB | 189 | 130 |
| ocf-binaries.in | 03-May-2022 | 1.7 KiB | 76 | 69 |
| ocf-directories.in | 03-May-2022 | 732 B | 23 | 21 |
| ocf-distro | 03-May-2022 | 4.5 KiB | 201 | 160 |
| ocf-rarun | 03-May-2022 | 3.5 KiB | 147 | 144 |
| ocf-returncodes | 03-May-2022 | 1.8 KiB | 56 | 53 |
| ocf-shellfuncs.in | 03-May-2022 | 25.2 KiB | 1,048 | 940 |
| ocf.py | 03-May-2022 | 12.7 KiB | 483 | 351 |
| openstack-cinder-volume | 03-May-2022 | 8.2 KiB | 314 | 193 |
| openstack-floating-ip | 03-May-2022 | 7.3 KiB | 277 | 182 |
| openstack-info.in | 03-May-2022 | 8.5 KiB | 291 | 209 |
| openstack-virtual-ip | 03-May-2022 | 7.2 KiB | 281 | 186 |
| ora-common.sh | 03-May-2022 | 2.3 KiB | 91 | 57 |
| oraasm | 03-May-2022 | 4.3 KiB | 184 | 133 |
| oracle | 03-May-2022 | 20.3 KiB | 790 | 630 |
| oralsnr | 03-May-2022 | 6.9 KiB | 294 | 205 |
| ovsmonitor | 03-May-2022 | 14.2 KiB | 469 | 312 |
| pgagent | 03-May-2022 | 4.5 KiB | 140 | 114 |
| pgsql | 03-May-2022 | 69.5 KiB | 2,254 | 1,806 |
| pingd | 03-May-2022 | 8.3 KiB | 298 | 218 |
| podman | 03-May-2022 | 17.7 KiB | 620 | 444 |
| portblock | 03-May-2022 | 15.6 KiB | 583 | 457 |
| postfix | 03-May-2022 | 11.5 KiB | 423 | 285 |
| pound | 03-May-2022 | 8.8 KiB | 344 | 255 |
| proftpd | 03-May-2022 | 7.9 KiB | 312 | 223 |
| ra-api-1.dtd | 03-May-2022 | 1.1 KiB | 41 | 31 |
| rabbitmq-cluster.in | 03-May-2022 | 18.3 KiB | 629 | 443 |
| redis.in | 03-May-2022 | 24.3 KiB | 784 | 612 |
| rkt | 03-May-2022 | 12.2 KiB | 476 | 356 |
| rsyncd | 03-May-2022 | 6.3 KiB | 281 | 204 |
| rsyslog.in | 03-May-2022 | 6.8 KiB | 265 | 192 |
| sapdb-nosha.sh | 03-May-2022 | 16.8 KiB | 745 | 555 |
| sapdb.sh | 03-May-2022 | 9.4 KiB | 368 | 257 |
| scsi2reservation | 03-May-2022 | 3.9 KiB | 177 | 143 |
| send_ua.c | 03-May-2022 | 3.3 KiB | 134 | 88 |
| sfex | 03-May-2022 | 8.5 KiB | 312 | 212 |
| sg_persist.in | 03-May-2022 | 22.3 KiB | 696 | 524 |
| shellfuncs.in | 03-May-2022 | 2 KiB | 97 | 81 |
| slapd.in | 03-May-2022 | 15.1 KiB | 595 | 468 |
| smb-share.in | 03-May-2022 | 20 KiB | 495 | 346 |
| storage-mon.in | 03-May-2022 | 7.7 KiB | 264 | 170 |
| sybaseASE.in | 03-May-2022 | 29.2 KiB | 906 | 577 |
| symlink | 03-May-2022 | 8.4 KiB | 246 | 164 |
| syslog-ng.in | 03-May-2022 | 15 KiB | 468 | 286 |
| tomcat | 03-May-2022 | 25.6 KiB | 817 | 639 |
| varnish | 03-May-2022 | 15.1 KiB | 505 | 390 |
| vdo-vol | 03-May-2022 | 5.5 KiB | 235 | 165 |
| vmware | 03-May-2022 | 10.4 KiB | 394 | 242 |
| vsftpd.in | 03-May-2022 | 5.1 KiB | 260 | 186 |
| zabbixserver | 03-May-2022 | 7.6 KiB | 320 | 189 |
README
The OCF RA shared code directory

If an RA is too big to be comfortably maintained, split it into
several source files. Obviously, if two or more RAs share some
code, move that code out to a file which can be shared.

These files will be installed in $OCF_ROOT/lib/heartbeat with
permissions 644.

Naming practice

Use names such as <RA>.sh or <RA>-check.sh or anything-else.sh,
where "anything-else" should be related to both the RA and the
code it contains. Adding an extension (.sh) makes it easier to
notice that these files are not complete resource agents.

For instance, the oracle and oralsnr RAs can both use the code in
ora-common.sh.

Of course, if the RA is implemented in another programming
language, use the appropriate extension.
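
As an illustration, a shared file is typically pulled in near the top
of an RA roughly like this (a minimal sketch; the OCF_FUNCTIONS_DIR
default shown here is an assumption, check your agent for the exact
idiom):

  # default the shared-code directory to $OCF_ROOT/lib/heartbeat
  : ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
  . ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs   # standard OCF helpers
  . ${OCF_FUNCTIONS_DIR}/ora-common.sh    # code shared by oracle and oralsnr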

RA tracing

RA tracing may be turned on by setting OCF_TRACE_RA. The trace
output will be saved to OCF_TRACE_FILE, if set, or by default to

  $HA_VARLIB/trace_ra/<type>/<id>.<action>.<timestamp>

e.g. $HA_VARLIB/trace_ra/oracle/db.start.2012-11-27.08:37:08

HA_VARLIB is typically set to /var/lib/heartbeat.

OCF_TRACE_FILE can be set to a path or to a file descriptor:

- a file descriptor (a small integer in the range 3-9); in that
  case it is up to the caller to capture the output, and the FD
  _must_ be open for writing

- an absolute path

NB: FD 9 may be used internally for tracing with bash >= v4 when
OCF_TRACE_FILE is set to a path.
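
For illustration only, tracing a single manual invocation of an agent
could look like the following (the agent name, OCF_ROOT value and trace
path are assumptions, not requirements):

  OCF_ROOT=/usr/lib/ocf \
  OCF_TRACE_RA=1 \
  OCF_TRACE_FILE=/tmp/Dummy.monitor.trace \
  /usr/lib/ocf/resource.d/heartbeat/Dummy monitor

When the agent runs under the cluster manager instead, the trace ends
up under $HA_VARLIB/trace_ra as described above.
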
README.galera
Notes regarding the Galera resource agent
---

In the resource agent, the action of bootstrapping a Galera cluster is
implemented as a series of small steps, using:

 * Two CIB attributes, `last-committed` and `bootstrap`, to elect a
   bootstrap node that will restart the cluster.

 * One CIB attribute, `sync-needed`, to identify that joining
   nodes are in the process of synchronizing their local database
   via SST.

 * A Master/Slave pacemaker resource, which helps split the boot
   into steps, up to the point where a galera node is available.

 * The recurring monitor action, to coordinate the switch from one
   state to another.

How boot works
====

There are two things to know to understand how the resource agent
restarts a Galera cluster.

### Bootstrap the cluster with the right node

When synced, the nodes of a galera cluster share a last seqno,
which identifies the last transaction considered successful by a
majority of nodes in the cluster (think quorum).

To restart a cluster, the resource agent must ensure that it
bootstraps the cluster from a node which is up-to-date, i.e. which has
the highest seqno of all nodes.

As a result, if the resource agent cannot retrieve the seqno on all
nodes, it won't be able to safely identify a bootstrap node, and
will simply refuse to start the galera cluster.
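
For reference, on a cleanly stopped node this seqno is recorded in
Galera's grastate.dat file, so it can be inspected by hand (a sketch
only; the datadir path is an assumption):

    # last known cluster UUID and seqno on this node
    grep -E '^(uuid|seqno):' /var/lib/mysql/grastate.dat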

### Synchronizing nodes can be a long operation

Starting a bootstrap node is relatively fast, so it's performed
during the "promote" operation, which is a one-off, time-bounded
operation.

Subsequent nodes will need to synchronize via SST, which consists
of "pushing" an entire Galera DB from one node to another.

There is no perfect timeout, as the time spent during synchronization
depends on the size of the DB. Thus, joiner nodes are started during
the "monitor" operation, which is a recurring operation and can
better track the progress of the SST.


State flow
====

General idea for starting Galera (the attributes mentioned in these
steps can be inspected with the sketch shown after this list):

 * Before starting the Galera cluster, each node needs to go into
   Slave state so that the agent records its last seqno into the CIB.
   __ This uses attribute last-committed __

 * When all nodes are in Slave state, the agent can safely determine
   the last seqno and elect a bootstrap node (`detect_first_master()`).
   __ This uses attribute bootstrap __

 * The agent then sets the score of the elected bootstrap node to
   Master, so that pacemaker promotes it and starts the first Galera
   server.

 * Once the first Master is running, the agent can start joiner
   nodes during the "monitor" operation, and start monitoring
   their SST sync.
   __ This uses attribute sync-needed __

 * Only when SST is over on a joiner node does the agent promote it
   to Master. At this point, the entire Galera cluster is up.

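For debugging, the per-node attributes used above can be inspected
from the CIB while the cluster runs. A minimal sketch, assuming
crm_attribute is available and that the agent stores them as transient
(reboot-lifetime) node attributes; depending on the agent version the
attribute names may carry a resource-specific prefix:

    # query the attributes the agent sets for node1
    crm_attribute --node node1 --lifetime reboot --name last-committed --query
    crm_attribute --node node1 --lifetime reboot --name bootstrap --query
    crm_attribute --node node1 --lifetime reboot --name sync-needed --query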

Attribute usage and liveness
====

Here is how attributes are created on a per-node basis. If you
modify the resource agent, make sure these properties still hold.

### last-committed

It is just a temporary hint for the resource agent to help
elect a bootstrap node. Once the bootstrap attribute is set on one
of the nodes, we can get rid of last-committed.

- Used   : during Slave state to compare seqno
- Created: before entering Slave state:
    . at startup in `galera_start()`
    . or when a Galera node is stopped in `galera_demote()`
- Deleted: just before the node starts in `galera_start_local_node()`;
    cleaned up during `galera_demote()` and `galera_stop()`

We delete last-committed before starting Galera, to avoid race
conditions that could arise from discrepancies between the CIB and
Galera.

### bootstrap

Attribute set on the node that is elected to bootstrap Galera.

- Used   : during promotion in `galera_start_local_node()`
- Created: at startup once all nodes have `last-committed`;
    or during monitor if all nodes have failed
- Deleted: in `galera_start_local_node()`, just after the bootstrap
    node has started and is ready;
    cleaned up during `galera_demote()` and `galera_stop()`

There cannot be more than one bootstrap node at any time, otherwise
the Galera cluster would stop replicating properly.

### sync-needed

While this attribute is set on a node, the Galera node is in JOIN
state, i.e. SST is in progress and the node cannot serve queries.

The resource agent relies on the underlying SST method to monitor
the progress of the SST. For instance, with `wsrep_sst_rsync`, a
timeout would be reported by rsync, the Galera node would go into
Non-primary state, and `galera_monitor()` would fail.

- Used   : during the recurring slave monitor in `check_sync_status()`
- Created: in `galera_start_local_node()`, just after the joiner
    node has started and entered the Galera cluster
- Deleted: during the recurring slave monitor in `check_sync_status()`,
    as soon as the Galera node reports being synced.
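
At the Galera level, "synced" corresponds to the node's wsrep state.
As a rough illustration of what the agent's check amounts to (assuming
a locally reachable server and default client settings):

    # 'Synced' means SST is over and the node can serve queries
    mysql -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"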

### no-grastate

If a galera node was unexpectedly killed in the middle of replication,
InnoDB can retain the equivalent of an XA transaction in prepared state
in its redo log. If so, mysqld cannot recover its state (nor the last
seqno) automatically, and a special recovery heuristic has to be used
to unblock the node.

This transient attribute is used to keep track of forced recoveries, to
prevent bootstrapping the cluster from a recovered node when possible.

- Used   : during `detect_first_master()` to elect the bootstrap node
- Created: in `detect_last_commit()` if the node has a pending XA
    transaction to recover in the redo log
- Deleted: when a node is promoted to Master.

README.mariadb.md
Setting up the MariaDB resource agent
=====================================

This resource agent requires corosync version >= 2 and mariadb version > 10.2.

Before embarking on this quest one should read the MariaDB pages on replication
and global transaction IDs (GTIDs). This will greatly help in understanding what
is going on and why.

Replication: https://mariadb.com/kb/en/mariadb/setting-up-replication/
GTID: https://mariadb.com/kb/en/mariadb/gtid/
semi-sync: https://mariadb.com/kb/en/mariadb/semisynchronous-replication/

Some reading on failures under enhanced semi-sync can be found here:
https://jira.mariadb.org/browse/MDEV-162

Part 1: MariaDB Setup
---------------------

It is best to initialize your MariaDB servers and do a failover before trying
to use Pacemaker to manage MariaDB. This will both verify the MariaDB
configuration and help you understand what is going on.

### Configuration Files

In your MariaDB config file for the server on node 1, place the following
entry (replacing my_database and other names as needed):
```
[mariadb]
log-bin
server_id=1
log-basename=master
binlog_do_db=my_database
```

Then for each other node create the same entry, but increment the server_id.
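
For example, node 2's entry would be identical apart from the server_id
(a sketch; adjust the names to your setup):
```
[mariadb]
log-bin
server_id=2
log-basename=master
binlog_do_db=my_database
```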

### Replication User

Now create the replication user (be sure to change the password!):
```
GRANT ALL PRIVILEGES ON *.* TO 'slave_user'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'slave_user'@'localhost' IDENTIFIED BY 'password';
```

The second entry may not be necessary, but it simplifies other steps. Change
the user name and password as needed.


### Initialize from a database backup

Initialize all nodes from an existing backup, or create a backup from the
first node if needed.

On the current database:
```
mysqldump -u root --master-data --databases my_database1 my_database2 > backup.sql
```

At the top of this file is a commented-out line:
SET GLOBAL gtid_slave_pos='XXXX...'

Uncomment this line.

On all new nodes, restore the backup with the mysql client:
```
mysql -u root < backup.sql
```

### Initialize replication

Choose a node as master, in this example node1.

On all slaves, execute:
```
RESET MASTER;

CHANGE MASTER TO master_host="node1", master_port=3306, \
    master_user="slave_user", master_password="password", \
    master_use_gtid=current_pos;

SET GLOBAL rpl_semi_sync_master_enabled='ON', rpl_semi_sync_slave_enabled='ON';

START SLAVE;

SHOW SLAVE STATUS\G
```

In an ideal world this will show that replication is now fully working.

Once replication is working, verify the configuration by doing some updates
on the master and checking that they are replicated to the slaves.
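
For example (the database and table names here are only placeholders):
```
-- on the master
CREATE TABLE my_database.repl_test (id INT PRIMARY KEY, note VARCHAR(20));
INSERT INTO my_database.repl_test VALUES (1, 'replication check');

-- on each slave, shortly afterwards
SELECT * FROM my_database.repl_test;
```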

Now try changing the master. On each slave perform:
```
STOP SLAVE;
```

Choose a new master, node2 in our example. On all slave nodes execute:
```
CHANGE MASTER TO master_host="node2", master_port=3306, \
    master_user="slave_user", master_password="password", \
    master_use_gtid=current_pos;

START SLAVE;
```

And again, check that replication is working and changes are synchronized.

Part 2: Pacemaker Setup
-----------------------

This is pretty straightforward. The example below uses pcs.

```
# Dump the cib
pcs cluster cib mariadb_cfg

# Create the mariadb_server resource
pcs -f mariadb_cfg resource create mariadb_server mariadb \
   binary="/usr/sbin/mysqld" \
   replication_user="slave_user" \
   replication_passwd="password" \
   node_list="node1 node2 node3" \
   op start timeout=120 interval=0 \
   op stop timeout=120 interval=0 \
   op promote timeout=120 interval=0 \
   op demote timeout=120 interval=0 \
   op monitor role=Master timeout=30 interval=10 \
   op monitor role=Slave timeout=30 interval=20 \
   op notify timeout="60s" interval="0s"

# Create the master/slave resource
pcs -f mariadb_cfg resource master msMariadb mariadb_server \
   master-max=1 master-node-max=1 clone-max=3 clone-node-max=1 notify=true

# Keep the resource off some nodes, only if needed
pcs -f mariadb_cfg constraint location msMariadb avoids \
   node4=INFINITY node5=INFINITY

# Push the cib
pcs cluster cib-push mariadb_cfg
```

You should now have a running MariaDB cluster:
```
pcs status

...
 Master/Slave Set: msMariadb [mariadb_server]
     Masters: [ node1 ]
     Slaves: [ node2 node3 ]
...
```