README

The OCF RA shared code directory

If an RA is too big to be comfortably maintained, split it into
several source files. Obviously, if two or more RAs share some
code, move that code out to a file which can be shared.

These files will be installed in $OCF_ROOT/lib/heartbeat with
permissions 644.

Naming practice

Use names such as <RA>.sh or <RA>-check.sh or anything-else.sh,
where "anything-else" should be related to both the RA and the
code it contains. Adding the extension (.sh) makes it easier to
notice that these files are not complete resource agents.

For instance, the oracle and oralsnr RAs can both use code in
ora-common.sh.
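
As a rough sketch (the OCF_FUNCTIONS_DIR default shown here is a common
convention in the agents, not something mandated by this README), an RA
would then source such a shared file from the install directory above:

  # e.g. near the top of the oracle RA (illustrative only)
  : ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
  . ${OCF_FUNCTIONS_DIR}/ora-common.sh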

Of course, if the RA is implemented in another programming
language, use the appropriate extension.

RA tracing

RA tracing may be turned on by setting OCF_TRACE_RA. The trace
output will be saved to OCF_TRACE_FILE, if set, or by default to

  $HA_VARLIB/trace_ra/<type>/<id>.<action>.<timestamp>

e.g. $HA_VARLIB/trace_ra/oracle/db.start.2012-11-27.08:37:08

HA_VARLIB is typically set to /var/lib/heartbeat.

OCF_TRACE_FILE can be set to a path or a file descriptor:

- FD (small integer [3-9]): in that case it is up to the caller
  to capture the output; the FD _must_ be open for writing

- absolute path

NB: FD 9 may be used for tracing with bash >= v4 in case
OCF_TRACE_FILE is set to a path.
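
For example, when running an agent by hand for debugging, tracing can be
enabled through the environment (a sketch only; the Dummy agent and the
/usr/lib/ocf prefix are common defaults, not requirements):

  OCF_ROOT=/usr/lib/ocf OCF_TRACE_RA=1 \
      /usr/lib/ocf/resource.d/heartbeat/Dummy monitor

With OCF_TRACE_FILE unset, the trace ends up under $HA_VARLIB/trace_ra
as described above.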

README.galera

Notes regarding the Galera resource agent
---

In the resource agent, the action of bootstrapping a Galera cluster is
implemented as a series of small steps, by using:

  * Two CIB attributes, `last-committed` and `bootstrap`, to elect a
    bootstrap node that will restart the cluster.

  * One CIB attribute, `sync-needed`, which identifies that joining
    nodes are in the process of synchronizing their local database
    via SST.

  * A Master/Slave pacemaker resource, which helps split the boot
    into steps, up to the point where a galera node is available.

  * The recurring monitor action, to coordinate the switch from one
    state to another.

How boot works
====

There are two things to know to understand how the resource agent
restarts a Galera cluster.

### Bootstrap the cluster with the right node

When synced, the nodes of a galera cluster share a common last seqno,
which identifies the last transaction considered successful by a
majority of nodes in the cluster (think quorum).

To restart a cluster, the resource agent must ensure that it
bootstraps the cluster from a node which is up-to-date, i.e. which has
the highest seqno of all nodes.

As a result, if the resource agent cannot retrieve the seqno on all
nodes, it won't be able to safely identify a bootstrap node, and
will simply refuse to start the galera cluster.
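
For reference, the seqno in question is the one Galera persists locally on
each node. As a sketch (the datadir path is the usual default, not something
the agent mandates), it can be recovered on a stopped node with either of:

    # from the saved state file, when the node shut down cleanly
    grep 'seqno:' /var/lib/mysql/grastate.dat

    # or by asking mysqld to recover the position from InnoDB
    mysqld --wsrep-recover 2>&1 | grep 'Recovered position'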

### Synchronizing nodes can be a long operation

Starting a bootstrap node is relatively fast, so it's performed
during the "promote" operation, which is a one-off, time-bounded
operation.

Subsequent nodes will need to synchronize via SST, which consists
of "pushing" an entire Galera DB from one node to another.

There is no perfect time-out, as the time spent during synchronization
depends on the size of the DB. Thus, joiner nodes are started during
the "monitor" operation, which is a recurring operation and can
better track the progress of the SST.

State flow
====

General idea for starting Galera:

  * Before starting the Galera cluster, each node needs to go into
    Slave state so that the agent records its last seqno into the CIB.
    __ This uses attribute last-committed __

  * When all nodes are in Slave state, the agent can safely determine
    the last seqno and elect a bootstrap node (`detect_first_master()`).
    __ This uses attribute bootstrap __

  * The agent then sets the score of the elected bootstrap node to
    Master, so that pacemaker promotes it and starts the first Galera
    server (see the sketch after this list).

  * Once the first Master is running, the agent can start joiner
    nodes during the "monitor" operation, and start monitoring
    their SST sync.
    __ This uses attribute sync-needed __

  * Only when SST is over on the joiner nodes does the agent promote
    them to Master. At this point, the entire Galera cluster is up.
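
As a sketch of that promotion step (crm_master is the stock Pacemaker helper
for master scores; the score value 100 is arbitrary here), the agent does
the equivalent of:

    # on the elected bootstrap node: make it eligible for promotion
    crm_master -Q -l reboot -v 100

    # on nodes that must not be promoted yet: clear the score
    crm_master -Q -l reboot -D
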
Attribute usage and liveness
====

Here is how attributes are created on a per-node basis. If you
modify the resource agent, make sure those properties still hold.
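
These are transient (reboot-lifetime) per-node attributes in the CIB. As a
sketch (plain attribute names and values are shown for illustration; the
agent derives the real names from the resource instance and wraps the calls
in helper functions), they are manipulated with crm_attribute:

    # set last-committed for node1
    crm_attribute -N node1 -l reboot --name last-committed -v 3032

    # query it
    crm_attribute -N node1 -l reboot --name last-committed --query

    # delete it once it is no longer needed
    crm_attribute -N node1 -l reboot --name last-committed -D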

### last-committed

This is just a temporary hint for the resource agent to help
elect a bootstrap node. Once the bootstrap attribute is set on one
of the nodes, we can get rid of last-committed.

 - Used   : during Slave state to compare seqno
 - Created: before entering Slave state:
              . at startup in `galera_start()`
              . or when a Galera node is stopped in `galera_demote()`
 - Deleted: just before the node starts in `galera_start_local_node()`;
            cleaned up during `galera_demote()` and `galera_stop()`

We delete last-committed before starting Galera, to avoid race
conditions that could arise due to discrepancies between the CIB and
Galera.

### bootstrap

Attribute set on the node that is elected to bootstrap Galera.

- Used   : during promotion in `galera_start_local_node()`
- Created: at startup, once all nodes have `last-committed`;
           or during monitor, if all nodes have failed
- Deleted: in `galera_start_local_node()`, just after the bootstrap
           node has started and is ready;
           cleaned up during `galera_demote()` and `galera_stop()`

There cannot be more than one bootstrap node at any time, otherwise
the Galera cluster would stop replicating properly.

### sync-needed

While this attribute is set on a node, the Galera node is in JOIN
state, i.e. SST is in progress and the node cannot serve queries.

The resource agent relies on the underlying SST method to monitor
the progress of the SST. For instance, with `wsrep_sst_rsync`, a
timeout would be reported by rsync, the Galera node would go into
Non-primary state, and that would make `galera_monitor()` fail.

- Used   : during the recurring slave monitor in `check_sync_status()`
- Created: in `galera_start_local_node()`, just after the joiner
           node has started and entered the Galera cluster
- Deleted: during the recurring slave monitor in `check_sync_status()`,
           as soon as the Galera node reports that it is synced
           (see the sketch below).
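
The "synced" check mentioned above boils down to looking at the local node's
wsrep state, along these lines (a sketch; the agent wraps this in its own
SQL helper and credentials handling):

    mysql -N -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"
    # prints "wsrep_local_state_comment  Synced" once SST has completed,
    # or a joining/donor state while synchronization is still in progress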

### no-grastate

If a galera node was unexpectedly killed in the middle of replication,
InnoDB can retain the equivalent of an XA transaction in prepared state
in its redo log. If so, mysqld cannot recover the state (nor the last
seqno) automatically, and a special recovery heuristic has to be used
to unblock the node.

This transient attribute is used to keep track of forced recoveries, to
prevent bootstrapping a cluster from a recovered node when possible.

- Used   : during `detect_first_master()` to elect the bootstrap node
- Created: in `detect_last_commit()` if the node has a pending XA
           transaction to recover in the redo log
- Deleted: when a node is promoted to Master.

README.mariadb.md

Setting up the MariaDB resource agent
=====================================

This resource agent requires corosync version >= 2 and mariadb version > 10.2.

Before embarking on this quest one should read the MariaDB pages on replication
and global transaction IDs (GTIDs). This will greatly help in understanding what
is going on and why.

Replication: https://mariadb.com/kb/en/mariadb/setting-up-replication/
GTID: https://mariadb.com/kb/en/mariadb/gtid/
semi-sync: https://mariadb.com/kb/en/mariadb/semisynchronous-replication/

Some reading on failures under enhanced semi-sync can be found here:
https://jira.mariadb.org/browse/MDEV-162

Part 1: MariaDB Setup
---------------------

It is best to initialize your MariaDB and do a failover before trying to use
Pacemaker to manage MariaDB. This will both verify the MariaDB configuration
and help you understand what is going on.

### Configuration Files

In your MariaDB config file for the server on node 1, place the following
entry (replacing my_database and other names as needed):
```
[mariadb]
log-bin
server_id=1
log-basename=master
binlog_do_db=my_database
```

Then, for each other node, create the same entry but increment the server_id.
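
For example (a sketch; nothing but server_id changes), the entry on node 2
would be:
```
[mariadb]
log-bin
server_id=2
log-basename=master
binlog_do_db=my_database
```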

### Replication User

Now create the replication user (be sure to change the password!):
```
GRANT ALL PRIVILEGES ON *.* TO 'slave_user'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'slave_user'@'localhost' IDENTIFIED BY 'password';
```

The second entry may not be necessary, but it simplifies other steps. Change
the user name and password as needed.


### Initialize from a database backup

Initialize all nodes from an existing backup, or create a backup from the
first node if needed.

On the current database:
```
mysqldump -u root --master-data --databases my_database1 my_database2 > backup.sql
```

At the top of this file is a commented-out line:
SET GLOBAL gtid_slave_pos='XXXX...'

Uncomment this line.

On all new nodes, restore the backup:
```
mysql -u root < backup.sql
```

### Initialize replication

Choose a node as the master; in this example, node1.

On all slaves, execute:
```
RESET MASTER;

CHANGE MASTER TO master_host="node1", master_port=3306, \
       master_user="slave_user", master_password="password", \
       master_use_gtid=current_pos;

SET GLOBAL rpl_semi_sync_master_enabled='ON', rpl_semi_sync_slave_enabled='ON';

START SLAVE;

SHOW SLAVE STATUS\G
```

In an ideal world this will show that replication is now fully working.

Once replication is working, verify the configuration by doing some updates
and checking that they are replicated.
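
A quick way to do that (illustrative statements only, using the my_database
name from the configuration above):
```
-- on the current master
USE my_database;
CREATE TABLE repl_test (id INT PRIMARY KEY);
INSERT INTO repl_test VALUES (1);

-- on each slave, the row should show up shortly afterwards
SELECT * FROM my_database.repl_test;
```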

Now try changing the master. On each slave, perform:
```
STOP SLAVE;
```

Choose a new master, node2 in our example. On all slave nodes execute:
```
CHANGE MASTER TO master_host="node2", master_port=3306, \
       master_user="slave_user", master_password="password", \
       master_use_gtid=current_pos;

START SLAVE;
```

And again, check that replication is working and changes are synchronized.


Part 2: Pacemaker Setup
-----------------------

This is pretty straightforward. The example below uses pcs.

```
# Dump the cib
pcs cluster cib mariadb_cfg

# Create the mariadb_server resource
pcs -f mariadb_cfg resource create mariadb_server mariadb \
   binary="/usr/sbin/mysqld" \
   replication_user="slave_user" \
   replication_passwd="password" \
   node_list="node1 node2 node3" \
   op start timeout=120 interval=0 \
   op stop timeout=120 interval=0 \
   op promote timeout=120 interval=0 \
   op demote timeout=120 interval=0 \
   op monitor role=Master timeout=30 interval=10 \
   op monitor role=Slave timeout=30 interval=20 \
   op notify timeout="60s" interval="0s"

# Create the master/slave resource
pcs -f mariadb_cfg resource master msMariadb mariadb_server \
    master-max=1 master-node-max=1 clone-max=3 clone-node-max=1 notify=true

# Only if needed: keep the resource off certain nodes
pcs -f mariadb_cfg constraint location msMariadb avoids \
    node4=INFINITY node5=INFINITY

# Push the cib
pcs cluster cib-push mariadb_cfg
```

You should now have a running MariaDB cluster:
```
pcs status

...
 Master/Slave Set: msMariadb [mariadb_server]
      Masters: [ node1 ]
      Slaves: [ node2 node3 ]
...
```