Tagging aims
============
  1) Ability to attach an unordered list of tags to LVM metadata objects.
  2) Ability to add or remove tags easily.
  3) Ability to select LVM objects for processing according to presence/absence
     of specific tags.
  4) Ability to control through the config file which VGs/LVs are activated
     on different machines using names or tags.
  5) Ability to overlay settings from different config files e.g. override
     some settings in a global config file locally.

Clarifications
==============
  1) Tag character set: A-Za-z0-9_+.-
     A tag can't start with a hyphen and its maximum length is 128 (NAME_LEN).
  2) LVM object types that can be tagged:
       VG, LV, LV segment
       PV - tags are stored in the VG metadata, so they disappear when a PV
            becomes orphaned
     Snapshots can't be tagged, but their origin may be.
  3) A tag can be used in place of any command line LVM object reference that
     accepts (a) a list of objects; or (b) a single object, as long as the
     tag expands to a single object.  This is not supported everywhere yet.
     Duplicate arguments in a list may be removed after argument expansion,
     retaining the first copy of each argument.
  4) Wherever there may be ambiguity of argument type, a tag must be prefixed
     by '@'; elsewhere an '@' prefix is optional.
  5) LVM1 objects cannot be tagged, as the disk format doesn't support it.
  6) Tags can be added or removed with --addtag or --deltag (see the example
     below).

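  A minimal illustration of items 3 and 6, using a hypothetical tag "mytag"
  and an existing LV vg1/lvol0:

    # Attach the tag to the LV, then remove it again
    lvchange --addtag @mytag vg1/lvol0
    lvchange --deltag @mytag vg1/lvol0

    # Where tag expansion is supported, the tag can stand in for a list of LVs
    lvchange -ay @mytag
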
Config file Extensions
======================
  To define host tags in a config file:

  tags {
      # Set a tag with the hostname
      hosttags = 1

      tag1 { }

      tag2 {
          # If there is no exact match, the tag is not set.
          host_list = [ "hostname", "dbase" ]
      }
  }

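  For instance, with the section above, a host named "dbase" would end up with
  the host tags @dbase (from hosttags), @tag1 (an empty tag section is always
  set) and @tag2 (the hostname appears in host_list); any other host would get
  only its @<hostname> tag and @tag1.
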
Activation config file example
==============================
  activation {
      volume_list = [ "vg1/lvol0", "@database" ]
  }

  Each entry matches against a vgname, a vgname/lvname, or a @tag set in the
  *metadata*.
  The special entry "@*" matches only if *any* metadata tag is also set on the
  host, i.e. the VG or LV only gets activated if one of its metadata tags
  matches a host tag.
  The default, if there is no match, is not to activate.
  If volume_list is not present and any tags are defined on the host, then a
  VG or LV is only activated if a host tag matches one of its metadata tags.
  If volume_list is not present and no tags are defined on the host, then the
  VG or LV does get activated.

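  For example, to make a VG match the "@database" entry above (using a
  hypothetical vg2), tag it in its metadata and then activate it:

    vgchange --addtag @database vg2
    vgchange -ay vg2
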
Multiple config files
=====================
  (a) lvm.conf
  (b) lvm_<host_tag>.conf

  At startup, load lvm.conf.
  Process tag settings.
  If any host tags were defined, load lvm_<tag>.conf for each tag, if present.

  When searching for a specific config file entry, the search order is (b)
  then (a), stopping at the first match.
  Within (b), use the reverse of the order in which the tags were set, so the
  file for the last tag set is searched first.
  New tags set in (b) *do* trigger additional config file loads.

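  A sketch of the resulting order, assuming hypothetical host tags "db1" and
  "database" set in that order:

    Files loaded:   lvm.conf, then lvm_db1.conf, then lvm_database.conf
                    (each tag file only if present)
    Setting lookup: lvm_database.conf, then lvm_db1.conf, then lvm.conf,
                    stopping at the first file that defines the setting
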
Usage Examples
==============
  1) Simple activation control via metadata with static config files

  lvm.conf:  (Identical on every machine - global settings)
    tags {
      hosttags = 1
    }

  From any machine in the cluster, add db1 to the list of machines that
  activate vg1/lvol2:

  lvchange --addtag @db1 vg1/lvol2
  (followed by lvchange -ay to actually activate it)
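  (and 'lvchange --deltag @db1 vg1/lvol2' would remove db1 from that list
  again)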


  2) Multiple hosts.

    Activate vg1 only on the database hosts, db1 and db2.
    Activate vg2 only on the fileserver host fs1.
    Activate nothing initially on the fileserver backup host fsb1, but be
    prepared for it to take over from fs1.

  Option (i) - centralised admin, static configuration replicated between hosts
    # Add @database tag to vg1's metadata
    vgchange --addtag @database vg1

    # Add @fileserver tag to vg2's metadata
    vgchange --addtag @fileserver vg2

    lvm.conf:  (Identical on every machine)
      tags {
        database {
          host_list = [ "db1", "db2" ]
        }
        fileserver {
          host_list = [ "fs1" ]
        }
        fileserverbackup {
          host_list = [ "fsb1" ]
        }
      }

      activation {
        # Only activate if the host has a tag that matches a metadata tag
        volume_list = [ "@*" ]
      }

  In the event of the fileserver host going down, vg2 can be brought up
  on fsb1 by running *on any node* 'vgchange --addtag @fileserverbackup vg2'
  followed by 'vgchange -ay vg2'.


  Option (ii) - localised admin & configuration
  (i.e. each host holds *locally* which classes of volumes to activate)
    # Add @database tag to vg1's metadata
    vgchange --addtag @database vg1

    # Add @fileserver tag to vg2's metadata
    vgchange --addtag @fileserver vg2

    lvm.conf:  (Identical on every machine - global settings)
      tags {
        hosttags = 1
      }

    lvm_db1.conf: (only needs to be on db1 - could be a symlink to lvm_db.conf)
      activation {
        volume_list = [ "@database" ]
      }

    lvm_db2.conf: (only needs to be on db2 - could be a symlink to lvm_db.conf)
      activation {
        volume_list = [ "@database" ]
      }

    lvm_fs1.conf: (only needs to be on fs1 - could be a symlink to lvm_fs.conf)
      activation {
        volume_list = [ "@fileserver" ]
      }

    If the fileserver goes down, to bring a spare machine fsb1 in as the
    fileserver, create lvm_fsb1.conf on fsb1 (or symlink it to lvm_fs.conf):

      activation {
        volume_list = [ "@fileserver" ]
      }

    and run 'vgchange -ay vg2' or 'vgchange -ay @fileserver'.