
README.NDMP

Embedding full NDMP support in BAREOS.

Not a filed plugin, but a proper implementation: the director can act
as an NDMP DMA (Data Management Application), and the storage daemon
gets NDMP tape agent support for saving data using the NDMP protocol.

This code is based on the NDMJOB NDMP reference implementation of
Traakan, Inc., Los Altos, CA, which has a BSD-style (2-clause) license.

http://www.traakan.com/ndmjob/index.html

We imported the latest actively supported version of this reference
NDMP code from the Amanda project. Instead of basing it on glib as
Amanda has done, we reverted some changes back to the way the latest
Spinnaker sources of the NDMJOB code deliver things, with per-OS
specific code.

The robot and tape simulator are rewritten versions of the NDMJOB ones,
with support for registering callbacks in the calling code. This way we
can implement virtual tape and robot functionality in the storage
daemon for handling NDMP backups.

There is also code for registering authentication callbacks in the
calling code, so we can perform clear-text and MD5-based authentication
against the internal config data we keep in BAREOS native resources.

The core fileindex handling code is rewritten to use callback functions
for doing the real work. This way we can hook internal functions into
the core file indexing process, which runs after a backup and before a
restore, to record the files that have been backed up or restored.

Some missing initialization, commission and decommission code has been
added; although some of it is empty, it is better to have a consistent
coding path/style for everything. An extra destroy method was also
added: for some agents a decommission means "make it ready for the next
run", while we want something that tears down and cleans up anything
dynamically allocated during an NDMP run.

We also added support for telling the initialization upfront which NDMP
services it should support (e.g. DATA MANAGEMENT APPLICATION (DMA),
DATA AGENT, TAPE AGENT or ROBOT AGENT), so when we accept a connection
in the storage daemon via ndmp_tape we only allow the client to use our
NDMP TAPE AGENT and not the ROBOT, DMA and DATA AGENT. See the
ndm_session structure members ..._agent_enabled.

We also rewrote some of the internal structures. Normally an NDMP
session is described by a so-called ndm_session struct, which is a
whopping 1442392 bytes (almost 1.4 MB) in size. The original coders
allocated every array up front, which makes the total structure huge.
That is not very handy as a base for a shared library, since we want to
support all agents but most likely not all at the same time.

So the new ndm_session struct has pointers to the individual members,
and storage is allocated only when needed; buffers, too, are allocated
at the time we need them instead of upfront. For things like directory
names or other pathnames we just strdup the actual string and free it
on decommission of the data. This saves a lot when PATH_MAX is 1024
bytes and you store a directory path of, say, 30 bytes.

The DMA and DATA AGENT also keep track of a list of environment
variables and a name list structure. In the original code the
environment variable list could hold a maximum of 1024 entries and the
name list 10240 entries, allocated as one big array. This is madness;
we rewrote the list code to use a normal linked list, so we only need
the space for the actual number of nodes in each list. There is an
enumerate function which returns a memory chunk with all entries
concatenated, which is used for the RPC calls. We keep track of this
enumerate buffer in the list descriptor, and it is freed when the list
is torn down. (We cannot free it earlier, as it is needed as the buffer
for the returning RPC call.) This lingering buffer should be no
problem, as it is now moderate in size rather than the whopping 1024 or
10240 entries of before.

The media table is also rewritten as a linked list instead of the fixed
list of 40 entries it was in the old code.

This has significant size advantages. To give an idea:

   ndm_control_agent, original size 523000 bytes, new size 928 bytes
   ndm_data_agent, original size 553232 bytes, new size 304 bytes
   ndm_tape_agent, original size 263388 bytes, new size 228 bytes
   ndm_plumbing, original size 102592 bytes, new size 20 bytes

As we now initialize some things later, we needed to add some extra
checks, and things may core dump due to dereferencing a NULL pointer.
We decided to take that risk and fix those problems as we encounter
them. Adding checks all over the place for uninitialized data would be
gross overkill, and as NDMP is a nice state machine we can probably get
away with putting checks in strategic places.

The test routines are guarded by an extra define named
NDMOS_OPTION_NO_TEST_AGENTS so one can disable them for a production
shared library.

Extra support for getaddrinfo() is added to the library; it supersedes
the old gethostbyname() interface, which POSIX has deprecated. The
implementation of poll() is also completed: we now check the return
info from poll() and set the channel ready flag if we detect something
on a channel. This way the poll() handler should be on par with the
select() based poller.

The ndmjob program code is also included, and you can build the ndmjob
binary using the new shared library. Currently it is mostly for testing
the new code in the shared library.

The ndmjob program code is rewritten to also use linked lists wherever
possible, without the need to completely rewrite the code.

The NDMJOB header files are made C++ aware so we can compile the shared
lib as pure C code (which it essentially is) and use it from BAREOS.

The config engines of the director and storage daemon have been made
aware of the NDMP protocol. Currently there is support for creating
NDMP protocol based Backup and Restore Jobs. The storage resource is
extended with a protocol and an authentication type field, which can be
used by the NDMP DMA coded in ndmp_dma.c. A Client also has those two
fields. When a storage daemon used in the NDMP backup/restore is in
real life a BAREOS storage daemon, an extra field named paired storage
is part of the storage resource; it is used by the DMA to contact the
storage daemon via the native protocol, so that an NDMP save or restore
can be simulated via the normal BAREOS infrastructure. Via the native
protocol we reserve things like drives so the virtual NDMP tape client
can save its data; the native link is also used for things like getting
the next volume to load.

The job start code for backup and restore is modified to check the job
protocol and dispatch to the native routines for a native backup, or to
the NDMP routines for any NDMP job.

The NDMP tape agent lives in ndmp_tape.c in the storage daemon. It
creates an extra listening thread which handles NDMP connections. It is
based on the BAREOS bnet_server_thread code, but put somewhat on a
diet: for NDMP we don't need all the bells and whistles of the bsock
class, so we implemented a lightweight NDMP connection structure. This
structure is passed as a handle to the connection handler, can be seen
as local hook data, and can be extended along the way to keep state
information on the NDMP session related to internal BAREOS resources.

An NDMP backup configuration looks something like this:

Configuration in bareos-dir.conf:

Replace <ndmp_data_server> with the hostname of the storage device you
are backing up, i.e. the DATA AGENT in NDMP terms.

#
# Use the DUMP protocol (e.g. UNIX DUMP comparable to tar/cpio)
# Generates FileHandle Information which can be used for single file
# restore.
#
JobDefs {
  Name = "DefaultNDMPJob"
  Type = Backup
  Protocol = NDMP
  Level = Incremental
  Client = <ndmp_data_server>-ndmp
  Backup Format = dump
  FileSet = "NDMP Fileset"
  Schedule = "WeeklyCycle"
  Storage = NDMPFile
  Messages = Standard
  Pool = NDMPFile
  Priority = 10
  Write Bootstrap = "/var/opt/bareos/run/bareos/%c.bsr"
}

#
# A special restore Job which has the protocol set right etc.
#
JobDefs {
  Name = "DefaultNDMPRestoreJob"
  Client = <ndmp_data_server>-ndmp
  Type = Restore
  Protocol = NDMP
  Backup Format = dump
  FileSet = "NDMP Fileset"
  Storage = NDMPFile
  Pool = Default
  Messages = Standard
  Where = /
}

#
# A NDMP Backup Job using the JobDefs above.
#
Job {
  Name = "BackupNDMPDump"
  JobDefs = "DefaultNDMPJob"
}

#
# Use the NETAPP SMTAPE protocol, i.e. the same protocol that is used as replication
# protocol between two NETAPP storage boxes. Doesn't allow single file restore, only
# all-or-nothing restore of a whole NETAPP volume.
#
Job {
  Name = "BackupNDMPSMTape"
  JobDefs = "DefaultNDMPJob"
  Backup Format = smtape
  Client = <ndmp_data_server>-ndmp
  FileSet = "NDMP SMtape Fileset"
}

#
# A NDMP restore Job using the JobDefs above.
#
Job {
  Name = "NDMPRestoreDump"
  JobDefs = "DefaultNDMPRestoreJob"
}

#
# A NDMP restore Job using the JobDefs above, but for restoring a SMTAPE type of NDMP backup.
#
Job {
  Name = "NDMPRestoreSMTape"
  JobDefs = "DefaultNDMPRestoreJob"
  Backup Format = smtape
  FileSet = "NDMP SMtape Restore Fileset"
}

Fileset {
  Name = "NDMP Fileset"
  Include {
    Options {
       meta = "USER=root"
    }
    File = /export/home/...
  }
}

#
# A NDMP Backup using SMTAPE of a NetAPP storage box.
#
Fileset {
  Name = "NDMP SMtape Fileset"
  Include {
    Options {
       meta = "SMTAPE_DELETE_SNAPSHOT=Y"
    }
    File = /vol/vol1
  }
}

#
# A NDMP Restore using SMTAPE of a NetAPP storage box.
#
Fileset {
  Name = "NDMP SMtape Restore Fileset"
  Include {
    Options {
       meta = "SMTAPE_BREAK_MIRROR=Y"
    }
    File = /vol/vol1
  }
}

#
# A NDMP Client.
#
Client {
  Name = <ndmp_data_server>-ndmp
  Address = ...
  Port = 10000
  Protocol = NDMPv4                   # Specify the protocol before the password, as the protocol determines the password encoding used.
  Auth Type = Clear                   # Clear == Clear Text, MD5 == Challenge protocol
  Username = "ndmp"                   # username of the NDMP user on the DATA AGENT, i.e. the storage box being backed up.
  Password = "test"                   # password of the NDMP user on the DATA AGENT, i.e. the storage box being backed up.
}

#
# Your normal Bareos SD definition should already be in your config.
#
Storage {
  Name = File
  Address = ...                       # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = ...
  Device = FileStorage
  Media Type = File
}

#
# Same storage daemon but via the NDMP protocol.
# The PairedStorage config option links the Bareos SD instance definition to a NDMP TAPE AGENT.
#
Storage {
  Name = NDMPFile
  Address = ...                       # N.B. Use a fully qualified name here
  Port = 10000
  Protocol = NDMPv4                   # Specify the protocol before the password, as the protocol determines the password encoding used.
  Auth Type = Clear                   # Clear == Clear Text, MD5 == Challenge protocol
  Username = ndmp                     # username of the NDMP user on the TAPE AGENT, i.e. the Bareos SD accessed via the NDMP protocol.
  Password = test                     # password of the NDMP user on the TAPE AGENT, i.e. the Bareos SD accessed via the NDMP protocol.
  Device = FileStorage
  Media Type = File
  PairedStorage = File
}

#
# Your normal file based backup Pool, normally already defined.
#
Pool {
  Name = File
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Storage = File
  Volume Retention = 365 days         # one year
  Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
  Maximum Volumes = 100               # Limit number of Volumes in Pool
}

#
# Separate Pool for NDMP data so upgrading of Jobs works and selects the right storage.
#
Pool {
  Name = NDMPFile
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Storage = NDMPFile
  Volume Retention = 365 days         # one year
  Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
  Maximum Volumes = 100               # Limit number of Volumes in Pool
}

Configuration in bareos-sd.conf:

#
# Normal SD config block; enable the NDMP protocol here, otherwise the SD won't listen
# on port 10000.
#
Storage {
   Name = ....
   ...
   NDMP Enable = yes
}

#
# This entry gives the DMA in the Director access to the Bareos SD via the NDMP protocol.
# It is used to open the right TAPE AGENT connection to your Bareos SD via the NDMP
# protocol. The initialization of the SD is done via the native protocol and is handled
# via the PairedStorage keyword.
#
Ndmp {
  Name = ...-ndmp-dma                # Can be any name, but normally you should use the name of the Director here.
  Username = ndmp                    # Same username as specified in the NDMPFile storage definition.
  Password = test                    # Same password as specified in the NDMPFile storage definition.
  AuthType = Clear                   # Clear == Clear Text, MD5 == Challenge protocol
}


README.configsubdirectories

Configuration API
=================

Naming
------

    * Components:
        * bareos-dir
        * bareos-sd
        * bareos-fd
        * bareos-traymonitor
        * bconsole
        * bat (only legacy config file: bat.conf)

    * $COMPONENT refers to one of the listed Components.

    * Legacy config file (still valid and supported, with some limitations when using the configuration API):
        * $CONFIGDIR/$COMPONENT.conf

    * $CONFIGDIR refers to the configuration directory. Bareos Linux packages use "/etc/bareos/".

Changes
-------

When updating from Bareos < 16.2, most of these changes are not relevant, as the legacy configuration will still be used.

    * configsubdirectories
        * if the legacy config file ($CONFIGDIR/$COMPONENT.conf) is not found, the following wildcard path is used to load the configuration:
            * $CONFIGDIR/$COMPONENT.d/*/*.conf
        * one config file per resource. The name of the config file is identical to the resource name.
            * e.g.
                * bareos-dir.d/director/bareos-dir.conf
                * bareos-dir.d/pool/Full.conf
            * There is one exception: the resource bareos-fd.d/client/myself.conf always has the file name myself.conf, while the resource name is normally set to the hostname of the system.
        * the -c command line switch takes files and directories as arguments. When the argument is a directory, $COMPONENT.d/*/*.conf is appended to load the configuration.
    * additional packages can contain configuration files that are automatically included
        * However, most additional configuration resources require configuration. These come as example files:
            * $CONFIGDIR/$COMPONENT.d/$RESOURCE/$NAME.conf.example
            * For example, the bareos-webui comes with two config resources for the bareos-director:
                * $CONFIGDIR/bareos-director.d/profile/webui.conf
                * $CONFIGDIR/bareos-director.d/console/user1.conf.example
    * modified resource names:
        * $HOSTNAME-dir => bareos-dir
        * $HOSTNAME-sd => bareos-sd
            * makes more sense when installing just the fd; then probably only the Address must be changed.
            * fits better into the configsubdirectory structure and packaging,
                because otherwise the filename is only known at install time (and might change)
        * "Linux All" => "LinuxAll"
            * Spaces are still valid in resource names. However, the build configuration script wasn't able to cope with file names containing spaces.
    * bareos-traymonitor
        * also a single file per resource.
        * the bareos-traymonitor package only contains $CONFIGDIR/tray-monitor.d/monitor/bareos-mon.conf.
            * The other resources are part of the related packages:
                * $CONFIGDIR/tray-monitor.d/client/FileDaemon-local.conf is part of bareos-filedaemon
                * $CONFIGDIR/tray-monitor.d/storage/StorageDaemon-local.conf is part of bareos-storage
                * $CONFIGDIR/tray-monitor.d/director/Director-local.conf is part of bareos-director
            * This way, the bareos-traymonitor is configured automatically for the installed components.

How to use configsubdirectories
-------------------------------

    * Fresh installation
        * The easiest way to start with configsubdirectories and the configuration API is a fresh installation.
            * It will be usable immediately after installation of the bareos-director.
            * When additional packages come with example configuration files, copy them to $NAME.conf, modify them to your needs and reload the director.
            * Attention:
                * when you want to remove a configuration resource that has been deployed with the Bareos packages, it is advised to replace the resource config file with an empty file. This prevents the resource from reappearing with a package update.
                    * This is mainly true for RPM Bareos packages. Debian packages store the deployed configuration in /usr/lib/bareos/defaultconfigs/ and copy it to /etc/bareos/ only if there is no configuration. Similar for Windows.
    * Update
        * When updating to a Bareos version containing configsubdirectories (Bareos >= 16.2), the existing configuration will not be touched and is still the default configuration.
        * Attention: Problems can occur if you have already split your configuration into the same subdirectories as used by the new packages ($CONFIGDIR/$COMPONENT.d/*/) and have implemented your own wildcard mechanism to load them. In this case, newly installed configuration resource files can alter your current configuration by adding additional resources.
        * As long as the old configuration file ($CONFIGDIR/$COMPONENT.conf) exists, it will be used.
        * The correct way to migrate to the new configuration scheme is to split the configuration file into resources, store them in the resource directories and then remove the original configuration file.
            * This requires effort. It is planned to create a program that helps to migrate the settings; however, it is not available yet.
            * The easy way is:
                * mkdir $CONFIGDIR/$COMPONENT.d/migrate && mv $CONFIGDIR/$COMPONENT.conf $CONFIGDIR/$COMPONENT.d/migrate
                * Resources defined in both the new configuration directory scheme and the old configuration file must be removed from one of the places, best from the old configuration file, after verifying that the settings are identical to the new settings.


README.dbconfig

On Debian based systems (Debian, Ubuntu, Univention Corporate Server),
database configuration can be done with the help of the dbconfig system.

  * Package: dbconfig-common
  * Homepage/Documentation: http://people.debian.org/~seanius/policy/dbconfig-common.html/

Install/update scenarios:
  * fresh install
  * preinstalled 2001
  * preinstalled 2002
  * update from 12 to 13 (12: 2001 -> 2002, 13: 2001 -> 2002)

Behavior:
  * config file: /etc/dbconfig-common/bareos.conf
  * sql files stored at
    * /usr/share/dbconfig-common/data/bareos-database-common/upgrade/pgsql/2001
    * /usr/share/dbconfig-common/data/bareos-database-common/upgrade/pgsql/2002
    * /usr/share/dbconfig-common/data/bareos-database-common/upgrade/mysql/2001
    * /usr/share/dbconfig-common/data/bareos-database-common/upgrade/mysql/2002
    * /usr/share/dbconfig-common/data/bareos-database-common/install/pgsql
    * /usr/share/dbconfig-common/data/bareos-database-common/install/mysql

Upgrade:
  * bareos-database-common.postinst
    * every file from /usr/share/dbconfig-common/data/bareos-database-common/upgrade/DATABASE/* whose name is larger than parameter $2 (the package version of the replaced package) will be installed.
        * even if the filename "is larger" than the current package version
        * dbconfig does not store the installed version. It uses only the old and the current package version.
    * in Bareos, different branches can each do a database version update, for example:
      * 12.4.6: 2001
      * 12.4.7: 2002
      * 12.4.8: 2003
      * 13.2.2: 2001
      * 13.2.3: 2002
      * 13.2.4: 2003
      * using standard dbconfig this could result in the following:
        * updating from 12.4.6 to 13.2.3 would result in a database update from
          * 2001 (12.4.6) -> 2002 (12.4.7) -> 2003 (12.4.8) -> 2001 (13.2.2) ... => failure
    * Bareos modifies the dbconfig behavior by working not with package versions, but with database versions:
      * bareos-database-common.config, bareos-database-common.postinst:
        * instead of passing parameter $2 (the old package version), it gets translated to a database version with the help of a map file (versions.map).
           * with the help of this, every database schema update is only done once
    * how to handle a package update from a version without dbconfig to a version with it?
      Bareos dbconfig will be introduced with some version >= 14.1.0.
      The latest database version for 12.4 and 13.2 will be 2002.
      Therefore we claim that the first version using dbconfig will be 2003, even if it is only 2002. Using this, all existing database updates get applied.
        if dpkg --compare-versions "$param2_orig" lt "14.1.0"; then
            dbc_first_version="2003"
            ...
        fi

Database Permissions by dbconfig:

MySQL:
    GRANT USAGE ON *.* TO 'bareos'@'localhost' IDENTIFIED BY PASSWORD '*3E80BB05233BE488EE70C1D6494E2F2DB00FEBB4'
    GRANT ALL PRIVILEGES ON `bareos`.* TO 'bareos'@'localhost'

PostgreSQL:
    bareos will be the database owner

Testing:
# bareos-database-dbconfig
#   ~/dbconf
#     fakeroot debian/rules binary
#  /var/lib/dpkg/info/*.postinst ...
/var/log/dbconfig-common/dbc.log

eval "`dbconfig-generate-include /etc/dbconfig-common/bareos-database-common.conf`"

Behavior
========

noninteractive
==============

export DEBIAN_FRONTEND=noninteractive
echo "bareos-database-common bareos-database-common/mysql/admin-pass select linuxlinux" | debconf-set-selections

postgresql
==========

  * install
    * /etc/dbconfig-common/bareos-database-common.conf created
    * db: setup
  * update from 12.4:
    * updates db_version from 2001 to 2002.
  * update from 13.2:
    * db_version is already 2002. It detects that there is nothing to do.
  * update with dbconfig already configured
    * ?

mysql
=====

  * install
    * /etc/dbconfig-common/bareos-database-common.conf created
    * db: setup
    * dbpass must be set in bareos-dir.conf
  * update from 12.4:
    * updates db_version from 2001 to 2002.
  * update from 13.2:
    * db_version is already 2002. It detects that there is nothing to do.
  * update with dbconfig already configured
    * ?

sqlite3
=======

  * install
    * /etc/dbconfig-common/bareos-database-common.conf created
    * db: setup
    * creates a link from /var/lib/bareos/bareos.db to bareos
  * update from 12.4:
    * updates db_version from 2001 to 2002.
  * update from 13.2:
    * db_version is already 2002. It detects that there is nothing to do.
    * creates a link from /var/lib/bareos/bareos.db to bareos
  * update with dbconfig already configured
    * ?


README.droplet

Using droplet S3 as a backing store for backups.

The droplet S3 storage backend writes chunks of data into an S3 bucket.

For this you need to install the bareos-storage-droplet package, which contains
the libbareossd-chunked*.so and libbareossd-droplet*.so shared objects and the droplet storage backend, which implements a dynamically loaded storage backend.

In the following example, all backup data is placed in the "bareos-backup" bucket on the defined S3 storage.
A volume is a sub-directory in the defined bucket, and every chunk is placed in the volume directory with a filename 0000-9999 and a size defined by the chunksize.

The droplet S3 backend can only be used with virtual-hosted-style buckets like http://<bucket>.<s3_server>/object
Path-style buckets are not supported.

On the Storage Daemon the following configuration is needed.
Example bareos-sd.d/device/S3_ObjectStorage.conf file:

Device {
  Name = S3_ObjectStorage
  Media Type = S3_Object1
  Archive Device = S3 Object Storage

  #
  # Device Options:
  #    profile= - Droplet profile path, e.g. /etc/bareos/bareos-sd.d/droplet/droplet.profile
  #    location= - AWS location (e.g. us-east etc.). Optional.
  #    acl= - Canned ACL
  #    storageclass= - Storage Class to use.
  #    bucket= - Bucket to store objects in.
  #    chunksize= - Size of Volume Chunks (minimum = default = 10 MB)
  #    iothreads= - Number of IO-threads to use for upload (use blocking uploads if not defined)
  #    ioslots= - Number of IO-slots per IO-thread (0-255, default 10)
  #    retries= - Number of retries if a write fails (0-255, default = 0, which means unlimited retries)
  #    mmap - Use mmap to allocate Chunk memory instead of malloc().
  #

  # testing:
  Device Options = "profile=/etc/bareos/bareos-sd.d/droplet/droplet.profile,bucket=bareos-bucket,chunksize=100M,iothreads=0,retries=1"

  # performance:
  #Device Options = "profile=/etc/bareos/bareos-sd.d/droplet/droplet.profile,bucket=bareos-bucket,chunksize=100M"

  Device Type = droplet
  LabelMedia = yes                    # lets Bareos label unlabeled media
  Random Access = yes
  AutomaticMount = yes                # when device opened, read it
  RemovableMedia = no
  AlwaysOpen = no
  Description = "S3 device"
  Maximum File Size = 500M            # 500 MB (allows for seeking to small portions of the Volume)
  Maximum Concurrent Jobs = 1
  Maximum Spool Size = 15000M
}

The droplet.profile file holds the credentials for the S3 storage.
Example /etc/bareos/bareos-sd.d/droplet/droplet.profile file:

Make sure the file is only readable by bareos, as the credentials for S3 are listed here.

Profile config options:

use_https = True
host = <FQDN for S3>
access_key = <S3 access key>
secret_key = <S3 secret key>
pricing_dir = ""
backend = s3
aws_auth_sign_version = 2

If pricing_dir is not empty,
droplet will create a <profile_directory>/droplet.csv file which records all S3 operations.
See the code at https://github.com/bareos/Droplet/blob/bareos-master/libdroplet/src/pricing.c for an explanation.

The parameter "aws_auth_sign_version = 2" is for the connection to a CEPH S3 gateway.
For use with AWS S3, aws_auth_sign_version must be set to "4".

On the Director you connect to the Storage Device with the following configuration.
Example bareos-dir.d/storage/S3_1-00.conf file:

Storage {
  Name = S3_Object
  Address  = "Replace this by the Bareos Storage Daemon FQDN or IP address"
  Password = "Replace this by the Bareos Storage Daemon director password"
  Device = S3_ObjectStorage
  Media Type = S3_Object1
}


Troubleshooting
===============

S3 Backend Unreachable
----------------------

The droplet device can run in two modes:
  * direct writing (iothreads  = 0)
  * cached writing (iothreads >= 1)

If iothreads >= 1, retries = 0 (unlimited retries) and the droplet backend (e.g. S3 storage) is not available, a job will continue running until the backend problem is fixed.
If this is the case and the job is canceled, it will only be canceled on the Director. It continues running on the Storage Daemon until the S3 backend is available again or the Storage Daemon itself is restarted.

If iothreads >= 1, retries != 0 and the droplet backend (e.g. S3 storage) is not available, write operations will be silently discarded after the specified number of retries.
*Don't use this combination of options*.

Caching when the S3 backend is not available:
This behaviour has not changed, but problems can arise if the backend is not available and all write operations are stored in memory.

The status of the cache can be determined with the "status storage=..." command.


Pending IO chunks (and inflight chunks):
```
...
Device "S3_ObjectStorage" (S3) is mounted with:
    Volume:      Full-0085
    Pool:        Full
    Media type:  S3_Object1
Backend connection is working.
Inflight chunks: 2
Pending IO flush requests:
   /Full-0085/0002 - 10485760 (try=0)
   /Full-0085/0003 - 10485760 (try=0)
   /Full-0085/0004 - 10485760 (try=0)
...
Attached Jobs: 175
...
```

If try > 0, problems have already occurred. The system will continue retrying.


Status without pending IO chunks:
```
Device "S3_ObjectStorage" (S3) is mounted with:
    Volume:      Full-0084
    Pool:        Full
    Media type:  S3_Object1
Backend connection is working.
No Pending IO flush requests.
Configured device capabilities:
  EOF BSR BSF FSR FSF EOM !REM RACCESS AUTOMOUNT LABEL !ANONVOLS !ALWAYSOPEN
Device state:
  OPENED !TAPE LABEL !MALLOC APPEND !READ EOT !WEOT !EOF !NEXTVOL !SHORT MOUNTED
  num_writers=0 reserves=0 block=8
Attached Jobs:
```


README.glusterfs

Integrating GlusterFS and BAREOS.

The Gluster integration uses so-called URIs that are mostly analogous to the
way you specify Gluster volumes in, for example, QEMU.

Syntax:

gluster[+transport]://[server[:port]]/volname[/dir][?socket=...]

'gluster' is the protocol.

'transport' specifies the transport type used to connect to the gluster
management daemon (glusterd). Valid transport types are tcp, unix
and rdma. If a transport type isn't specified, tcp is assumed.

'server' specifies the server where the volume file specification for
the given volume resides. This can be either a hostname, an IPv4 address
or an IPv6 address. An IPv6 address needs to be within square brackets [ ].
If the transport type is 'unix', the 'server' field should not be specified.
Instead, the 'socket' field needs to be populated with the path to the unix
domain socket.

'port' is the port number on which glusterd is listening. It is optional;
if not specified, 0 is sent, which makes Gluster use the default port.
If the transport type is unix, 'port' should not be specified.

'volname' is the name of the Gluster volume which contains the data that
needs to be backed up.

'dir' is an optional base directory on the 'volname'.

Examples:

gluster://1.2.3.4/testvol[/dir]
gluster+tcp://1.2.3.4/testvol[/dir]
gluster+tcp://1.2.3.4:24007/testvol[/dir]
gluster+tcp://[1:2:3:4:5:6:7:8]/testvol[/dir]
gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol[/dir]
gluster+tcp://server.domain.com:24007/testvol[/dir]
gluster+unix:///testvol[/dir]?socket=/tmp/glusterd.socket
gluster+rdma://1.2.3.4:24007/testvol[/dir]
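
The URI scheme above follows standard URL syntax, so it can be dissected with
generic URL tooling. As an illustration only (this parser is not part of
Bareos; the function name and returned field names are made up):

```python
from urllib.parse import urlsplit, parse_qs

def parse_gluster_uri(uri):
    """Split a gluster[+transport]:// URI into its documented parts."""
    parts = urlsplit(uri)
    # Scheme is "gluster" or "gluster+<transport>"; tcp is the default.
    transport = parts.scheme.partition("+")[2] or "tcp"
    # Path is /volname[/dir]; strip the leading slash before splitting.
    volname, _, directory = parts.path.lstrip("/").partition("/")
    return {
        "transport": transport,
        "server": parts.hostname,   # None for the unix transport
        "port": parts.port,         # None -> glusterd default port
        "volume": volname,
        "dir": directory or None,
        "socket": parse_qs(parts.query).get("socket", [None])[0],
    }
```

For example, parse_gluster_uri("gluster+unix:///testvol?socket=/tmp/glusterd.socket")
yields transport "unix", no server/port, and the socket path from the query string.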

- Using glusterfs as backing store for backups

  For this you need to install the storage-glusterfs package, which contains
  the libbareossd-gfapi.so shared object, a dynamically loaded storage backend.

  - bareos-sd.conf

  Device {
    Name = GlusterStorage
    Device Type = gfapi
    Media Type = GlusterFile
    Archive Device = <some description>
    Device Options = "uri=gluster://<volume_spec>"
    LabelMedia = yes
    Random Access = Yes
    AutomaticMount = yes
    RemovableMedia = no
    AlwaysOpen = no
  }

  - bareos-dir.conf

  Storage {
    Name = Gluster
    Address = <sd-hostname>
    Password = "<password>"
    Device = GlusterStorage
    Media Type = GlusterFile
    Maximum Concurrent Jobs = 10
  }

  In the above, GlusterFile is a sample Media Type. If you use multiple Gluster
  storage devices in one storage daemon, make sure you use a different Media
  Type for each storage, as the same Media Type means BAREOS thinks it can swap
  volumes between different storages, which is probably not going to work if
  they point to different locations on your Gluster volume(s).

- Backing up data on a Gluster Volume

  For this you need to install the filedaemon-glusterfs-plugin package, which
  contains the gfapi-fd.so shared object, a dynamically loaded file daemon
  plugin.

  For backing up Gluster, gfapi-fd.so implements two types of backups: one in
  which it crawls the filesystem itself, and one where it uses glusterfind to
  create a list of files to back up. The glusterfind script was introduced in
  Gluster 3.7, so you need a version that includes it to be able to use it.

  A backup made using gfapi-fd.so needs a Job and FileSet definition. For the
  local crawling example this would look somewhat like this:

  Job {
    Name = "gfapitest"
    JobDefs = "DefaultJob"
    FileSet = "GFAPI"
  }

  FileSet {
    Name = "GFAPI"
    Include {
      Options {
        aclsupport = yes
        xattrsupport = yes
        signature = MD5
      }
      Plugin = "gfapi:volume=gluster\\://<volume_spec>:"
    }
  }

  When you want to use glusterfind, you should start by reading the glusterfind
  documentation and the setup it needs before configuring things in BAREOS. You
  need to create at least a new glusterfind session that you are going to use
  for backing up the Gluster volume. There is a wrapper named
  bareos-glusterfind-wrapper which is used in the pre and post definition of a
  backup Job to interact with glusterfind.

  For a backup with glusterfind, the FileSet and Job look a little different
  than the ones above, as you need to interact with glusterfind to get the
  actual file list that the backup should use.

  Job {
    Name = "gfapitest"
    JobDefs = "DefaultJob"
    FileSet = "GFAPI"
    RunScript {
      Runs On Client = Yes
      RunsWhen = Before
      FailJobOnError = Yes
      Command = "<bareos_script_dir>/bareos-glusterfind-wrapper -l %l -s %m prebackup <volume_name> <output_file>"
    }
    RunScript {
      Runs On Success = Yes
      RunsOnFailure = Yes
      Runs On Client = Yes
      RunsWhen = After
      Command = "<bareos_script_dir>/bareos-glusterfind-wrapper postbackup <volume_name> <output_file>"
    }
  }

  FileSet {
    Name = "GFAPI"
    Include {
      Options {
        aclsupport = yes
        xattrsupport = yes
        signature = MD5
      }
      Plugin = "gfapi:volume=gluster\\://<volume_spec>:gffilelist=<output_file>"
    }
  }


README.scsicrypto

Modern tape drives, e.g. LTO >= 4, support hardware encryption.

There are several ways of using encryption with these drives.
The following three types of key management are available for
doing encryption; the transmission of the keys to the drive
is accomplished by:

- A backup application that supports Application Managed Encryption (AME)
- A tape library that supports Library Managed Encryption (LME)
- Using a Key Management Appliance (KMA).

We added support for the Application Managed Encryption (AME) scheme:
on labeling, a crypto key is generated for the volume; when the volume
is mounted, the crypto key is loaded into the tape drive; and when the
volume is unloaded, the key is cleared from the drive's memory using
the SCSI SPOUT command set.

If you have implemented Library Managed Encryption (LME) or
a Key Management Appliance (KMA), there is no need for support
from Bareos for loading and clearing the encryption keys, as either
the library knows the per-volume encryption keys itself or it
will ask the KMA for the encryption key when it needs it. For
big installations you might consider using a KMA, but the Application
Managed Encryption implemented in Bareos should also scale rather
well and has low overhead, as the keys are only loaded and cleared
when needed.

How it all works:

- the libbareos library has some new features:
   - crypto_wrap.c  - Implements RFC 3394 based wrapping of crypto keys
   - crypto_cache.c - Implements a cache of wrapped crypto keys, used when
                      we cannot ask the director for the key, e.g. on
                      startup of the storage daemon.
   - passphrase.c   - Implements generation of semi-random passphrases
   - scsi_lli.c     - Implements a low-level interface to the tape drive for
                      several ioctl interfaces available on some modern UNIX
                      platforms. Currently supported platforms are:
                         - Linux (SG_IO ioctl interface) (tested)
                         - Solaris (USCSI ioctl interface) (tested)
                         - FreeBSD (libcam interface)
                         - NetBSD (SCIOCCOMMAND ioctl interface)
                         - OpenBSD (SCIOCCOMMAND ioctl interface)
   - scsi_crypto.c  - Implements sending of SCSI Security Protocol IN (SPIN)
                      and SCSI Security Protocol OUT (SPOUT) pages using
                      the scsi_lli interface.

- A new tool named bscrypto allows you to manipulate the tape drive.
  It is mostly used for Disaster Recovery (DR) purposes. The storage
  daemon and the btools (bls, bextract, bscan, btape) will
  use a so-called storage daemon plugin to perform the setting and
  clearing of the encryption keys. To bootstrap the encryption support,
  and to populate things like the crypto cache with the encryption keys
  of volumes that you want to scan, you need to use the bscrypto tool.

  The bscrypto tool has the following capabilities:
     - Generate a new passphrase
        - to be used as a so-called Key Encryption Key (KEK) for
          wrapping a passphrase using RFC 3394 key wrapping with aes-wrap
          - or -
        - for usage as a clear-text encryption key loaded into the tape drive.
     - Base64-encode a key if requested
     - Generate a wrapped passphrase, which performs the following steps:
        - generate a semi-random clear-text passphrase
        - wrap the passphrase using the Key Encryption Key using RFC 3394
        - base64-encode the wrapped key (as the wrapped key is binary, we
          always need to base64-encode it in order to be able to pass the
          data as part of the director-to-storage-daemon protocol)
     - Show the content of a wrapped or unwrapped keyfile.
       This can be used to reveal the content of the passphrase when
       a passphrase is stored in the database and you have the urge to
       change the Key Encryption Key. Normally we would urge people not to
       change their Key Encryption Key, as this means that you have to redo
       all your stored encryption keys: they are stored in the database
       wrapped using the Key Encryption Key available in the config during
       the label phase of the volume.
     - Clear the crypto cache on the machine running the bareos-sd, which keeps
       a cache of used encryption keys that can be used when the bareos-sd is
       restarted without the need to connect to the bareos-dir to retrieve the
       encryption keys.
     - Set the encryption key of the drive
     - Clear the encryption key of the drive
     - Show the encryption status of the drive
     - Show the encryption status of the next block (e.g. volume)
     - Populate the crypto cache with data

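The base64 step in the wrapped-passphrase generation above exists because the
wrapped key is binary, while the director-to-storage-daemon protocol is text
based. A minimal illustration (the random bytes merely stand in for a real
RFC 3394 wrapped key, which is not implemented here):

```python
import base64
import secrets

# Stand-in for an RFC 3394 wrapped key: wrapping adds 8 bytes to a
# 32-byte passphrase, giving 40 bytes of binary data.
wrapped_key = secrets.token_bytes(40)

# base64 turns the binary key into plain ASCII that can safely travel
# inside a text-based protocol...
encoded = base64.b64encode(wrapped_key).decode("ascii")

# ...and the receiving side recovers the exact original bytes.
assert base64.b64decode(encoded) == wrapped_key
```
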
- A new storage daemon plugin named scsicrypto-sd is added, which
  hooks into the "unload", "label read", "label write" and "label verified"
  events for loading and clearing the key. It checks whether the drive
  needs to be cleared, either using internal state (if it loaded
  a key before) or, when enabled via a special option, by first issuing
  an encryption status query. When there is a connection to the director
  and the volume information is not available, it will ask the director
  for the data on the currently loaded volume. When no connection is
  available, a cache is used which should contain the most recently mounted
  volumes. When an encryption key is available, it is loaded into the
  drive's memory.

- The director is extended with additional code for handling
  hardware data encryption. On labeling a volume, the extra
  keyword "encrypt" will make the director generate a new
  semi-random passphrase for the volume, and this passphrase
  is stored in the database as part of the media information.

  A passphrase is always stored in the database base64-encoded.
  When a so-called Key Encryption Key is set in the config
  of the director, the passphrase is first wrapped using RFC 3394
  key wrapping and then base64-encoded. By using key wrapping,
  the keys in the database are safe against people sniffing
  the info, as the data is still encrypted using the Key
  Encryption Key (which in essence is just an extra passphrase
  of the same length as the volume passphrases used).

  When the storage daemon needs to mount the volume, it
  will ask the director for the volume information, and
  that protocol is extended with the exchange of the
  base64-encoded wrapped encryption key (passphrase). The storage
  daemon has an extra config option in which it records
  the Key Encryption Key of the particular director, and
  as such can unwrap the key sent into the original
  passphrase.

  As can be seen from the above, we don't allow the
  user to enter a passphrase, but generate a semi-random
  passphrase using the OpenSSL random functions (if available)
  and convert that into a readable ASCII stream of letters,
  numbers and most other characters, excluding quotes
  and spaces. This gives a much stronger passphrase than
  requesting one from a user, and as we store things in
  the database the user never has to enter these passphrases.
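
The passphrase generation described above might look like the following
sketch (illustrative only; Bareos' passphrase.c uses OpenSSL's RNG and its
own character set, which may differ in detail):

```python
import secrets
import string

# Printable characters minus quotes, backticks, backslashes and spaces --
# an assumed character set for illustration.
ALLOWED = [c for c in string.ascii_letters + string.digits + string.punctuation
           if c not in "\"'`\\"]

def generate_passphrase(length=32):
    """Build a readable passphrase from cryptographically secure randomness."""
    return "".join(secrets.choice(ALLOWED) for _ in range(length))
```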

  The volume label is written unencrypted to the volume, so
  we can always recognize a Bareos volume. When the key is
  loaded onto the drive, we set the decryption mode to mixed,
  so we can read both unencrypted and encrypted data from the
  volume. When there is no key loaded, or the wrong key is
  loaded, the drive will give an IO error when trying to
  read the volume.

  For disaster recovery you can store the Key Encryption Key
  and the content of the wrapped encryption keys somewhere safe,
  and the bscrypto tool together with the scsicrypto-sd plugin
  can be used to get access to your volumes if you ever lose
  your complete environment.

  If you don't want to use the scsicrypto-sd plugin when
  doing DR and you are only reading one volume, you can also
  set the crypto key using the bscrypto tool. Because we use
  the mixed decryption mode, you can set the right encryption
  key before reading the volume label, as in mixed mode you
  can read both encrypted and unencrypted data from a volume.
  When you need to read more than one volume, you are better
  off using the scsicrypto-sd plugin with tools like
  bscan/bextract, as the plugin will then automatically load
  the correct encryption key when it loads the volume, just as
  the storage daemon does when performing backups and restores.

  The volume label is unencrypted, so a volume can also be
  recognized by a non-encrypted installation, but that
  installation won't be able to read the actual data from it.
  Using an encrypted volume label wouldn't add much security
  (there is no security related info in the volume label
  anyhow), and it would make it harder to distinguish a labeled
  volume with encrypted data from an unlabeled new volume (both
  would return an IO error on reading the label).

The initial setup of SCSI crypto looks something like this:

- Run configure with your normal configure options and
  add the --enable-scsi-crypto option.
- Build the Bareos programs and package/install them
  in the normal way.
- Generate a Key Encryption Key, e.g.
     bscrypto -g -

  === Security Setup ===

   Some security levels need to be increased for the storage
   daemon to be able to use the low-level SCSI interface for
   setting and getting the encryption status on a tape device.

   The following additional security is needed on the following
   operating systems:

     - Linux (SG_IO ioctl interface)

       The user running the storage daemon needs the following
       additional capability:
          CAP_SYS_RAWIO (see capabilities(7))

       If bareos-sd does not have the appropriate capabilities, all
       other tape operations may still work correctly, but you will
       get "Unable to perform SG_IO ioctl" errors.

       On older kernels you may need CAP_SYS_ADMIN; try
       CAP_SYS_RAWIO first, and if that doesn't work, try
       CAP_SYS_ADMIN.

       When you are running the storage daemon as another user
       than root (which has the CAP_SYS_RAWIO capability), you
       need to add it to the current set of capabilities.

       When you are using systemd, you could add this additional
       capability to the CapabilityBoundingSet parameter.

       For systemd, add the following to bareos-sd.service:

       Capabilities=cap_sys_rawio+ep

       You can also set up the extra capability on bscrypto and
       bareos-sd by running the following commands:

       # setcap cap_sys_rawio=ep bscrypto
       # setcap cap_sys_rawio=ep bareos-sd

       Check the setting with:

       # getcap -v bscrypto
       # getcap -v bareos-sd

       getcap and setcap are part of libcap-progs, which may not be
       installed on your system.

     - Solaris (USCSI ioctl interface)

       The user running the storage daemon needs the following
       additional privilege:
          PRIV_SYS_DEVICES (see privileges(5))

       When you are running the storage daemon as another user
       than root (which has the PRIV_SYS_DEVICES privilege), you
       need to add it to the current set of privileges.

       This can be set up either as a project for the user or as
       extra privileges in the SMF definition starting the storage
       daemon. The SMF setup is the cleanest.

       For SMF, make sure you have something like this in the
       instance block:

       <method_context working_directory=":default">
          <method_credential user="bareos" group="bareos" privileges="basic,sys_devices"/>
       </method_context>

  === Changes in bareos-sd.conf ===

- Put the Key Encryption Key into the bareos-sd.conf
  under the director entry in that config for the specific
  director you are creating the config for, e.g.
  Key Encryption Key = "<passphrase>"
- Enable the loading of storage daemon plugins by
  setting the plugin dir in the bareos-sd.conf, e.g.
  Plugin Directory = <path_to_sd_plugins>
- Enable the SCSI encryption option in the device configuration
  section of the drive in the bareos-sd.conf, e.g.
  Drive Crypto Enabled = Yes
- When you want the plugin to probe the drive for its encryption
  status when it needs to clear a pending key, enable the Query
  Crypto Status option in the device configuration section of the
  drive in the bareos-sd.conf, e.g.
  Query Crypto Status = Yes
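
  Putting the bareos-sd.conf changes above together, the relevant fragments
  might look like this (resource names, the device path and the media type
  are placeholders, not a complete configuration):

  Director {
    Name = <director_name>
    Password = "<password>"
    Key Encryption Key = "<passphrase>"
  }

  Storage {
    Name = <sd_name>
    Plugin Directory = <path_to_sd_plugins>
  }

  Device {
    Name = <drive_name>
    Device Type = tape
    Archive Device = /dev/nst0
    Media Type = LTO4
    Drive Crypto Enabled = Yes
    Query Crypto Status = Yes
  }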

  === Changes in bareos-dir.conf ===

- Put the Key Encryption Key into the bareos-dir.conf under
  the director config item named Key Encryption Key, e.g.
  Key Encryption Key = "<passphrase>"

- Restart the sd and dir.
- Label a volume with the encrypt option, e.g.
  label slots=1-5 barcodes encrypt

For Disaster Recovery (DR) you need the following information:

- The actual bareos-sd.conf with the config options enabled as
  described above, including things like a definition of a director
  with the Key Encryption Key used for creating the encryption keys
  of the volumes.
- The actual keys used for the encryption of the volumes.

  This data needs to be available as a so-called crypto cache
  file, which is used by the plugin when no connection to the
  director can be made to do a lookup (most likely during DR).

  Most of the time the needed information, e.g. bootstrap info,
  is available on recently written volumes, and the encryption
  cache will contain the most recent data, so a recent copy of
  the bareos-sd.<portnr>.cryptoc file in the working directory
  is usually enough. You can also save the info from the database
  in a safe place and use bscrypto to populate this info
  (VolumeName<tab>EncryptKey) into the crypto cache file used
  by bextract and bscan. You can use bscrypto with the following
  flags to create a new, or update an existing, crypto cache
  file, e.g.:

  # bscrypto -p /var/lib/bareos/bareos-sd.<portnr>.cryptoc

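The VolumeName<tab>EncryptKey format mentioned above is simple enough to
generate and sanity-check with a few lines of code. A hypothetical sketch
(not a Bareos tool; exact bscrypto input handling is assumed):

```python
def format_crypto_cache_input(volume_keys):
    """Render {volume: key} pairs as VolumeName<TAB>EncryptKey lines,
    the format described above for populating the crypto cache."""
    return "".join(f"{vol}\t{key}\n" for vol, key in volume_keys.items())

def parse_crypto_cache_input(text):
    """Inverse of format_crypto_cache_input, for sanity checking."""
    entries = {}
    for line in text.splitlines():
        vol, _, key = line.partition("\t")
        entries[vol] = key
    return entries
```
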
- A valid BSR file with the location of the last save of the
  database makes recovery much easier. Adding a post script
  to the database save job could collect the needed info and
  make sure it is stored somewhere safe.
- Recover the database in the normal way, e.g. for PostgreSQL:

  # bextract -D <director_name> -c bareos-sd.conf -V <volname> \
             /dev/nst0 /tmp -b bootstrap.bsr
  # /usr/lib64/bareos/create_bareos_database
  # /usr/lib64/bareos/grant_bareos_privileges
  # psql bareos < /tmp/var/lib/bareos/bareos.sql

  Or something similar (change the paths to wherever you
  installed the software or where the package put it).

NOTE: As described at the beginning of this README, there are different
      types of key management:

- A backup application that supports Application Managed Encryption (AME)
- A tape library that supports Library Managed Encryption (LME)
- Using a Key Management Appliance (KMA).

If the library is set up for LME or KMA, it probably won't allow our AME setup,
and the scsicrypto-sd plugin will fail to set/clear the encryption key. To be
able to use AME you need to "Modify Encryption Method" and set it to something
like "Application Managed". If you decide to use LME or KMA, you don't have to
bother with the whole AME setup, which for big libraries may be easier, although
the overhead of using AME should be minimal even for very big libraries.


README.storagebackend

Adding additional storage backends:

- Creating a new backend:
   - Create a new derived class of DEVICE, e.g.

      class whatever_device: public DEVICE {
      private:
         POOLMEM *m_virtual_filename;
         boffset_t m_offset;

      public:
         whatever_device();
         ~whatever_device();

         /*
          * Interface from DEVICE
          */
         int d_close(int);
         int d_open(const char *pathname, int flags, int mode);
         int d_ioctl(int fd, ioctl_req_t request, char *mt = NULL);
         boffset_t d_lseek(DCR *dcr, boffset_t offset, int whence);
         ssize_t d_read(int fd, void *buffer, size_t count);
         ssize_t d_write(int fd, const void *buffer, size_t count);
         bool d_truncate(DCR *dcr);
      };

      in file src/stored/backends/whatever_device.h
   - Create a new class implementing the pure virtual methods
     in file src/stored/backends/whatever_device.c.

     There are plenty of examples.
   - Add build rules to src/stored/backends/Makefile.in
   - Add the new backend to AVAILABLE_DEVICE_API_SRCS in src/stored/Makefile.in
     for non-dynamic loading.

- Glue code for loading and using the new backend:
   - Add a new enum value to the device types enum in src/stored/dev.h
   - Add the new enum value to the is_file() method of the DEVICE class.
   - Add a new enum mapping to the dev_types array in src/stored/stored_conf.c
   - For static allocation of the new backend, add code to m_init_dev() in
     src/stored/dev.c (in the switch (device->dev_type) which dispatches based
     on device type)
   - Add a mapping for dynamic loading of the backend to src/stored/sd_backends.h