1.. include:: global.rst.inc
2.. highlight:: none
3.. _faq:
4
5Frequently asked questions
6==========================
7
8Usage & Limitations
9###################
10
11What is the difference between a repo on an external hard drive vs. repo on a server?
12-------------------------------------------------------------------------------------
13
14If Borg is running in client/server mode, the client uses SSH as a transport to
15talk to the remote agent, which is another Borg process (Borg is installed on
16the server, too). The Borg server is doing storage-related low-level repo
17operations (get, put, commit, check, compact), while the Borg client does the
18high-level stuff: deduplication, encryption, compression, dealing with
19archives, backups, restores, etc., which reduces the amount of data that goes
20over the network.
21
22When Borg is writing to a repo on a locally mounted remote file system, e.g.
23SSHFS, the Borg client only can do file system operations and has no agent
24running on the remote side, so *every* operation needs to go over the network,
25which is slower.
26
27Can I backup from multiple servers into a single repository?
28------------------------------------------------------------
29
30Yes, this is *possible* from the technical standpoint, but it is
31*not recommended* from the security perspective. |project_name| is
32built upon a defined :ref:`attack_model` that cannot provide its
33guarantees for multiple clients using the same repository. See
34:ref:`borg_security_critique` for a detailed explanation.
35
36Also, in order for the deduplication used by |project_name| to work, it
37needs to keep a local cache containing checksums of all file
38chunks already stored in the repository. This cache is stored in
39``~/.cache/borg/``.  If |project_name| detects that a repository has been
40modified since the local cache was updated it will need to rebuild
41the cache. This rebuild can be quite time consuming.
42
43So, yes it's possible. But it will be most efficient if a single
44repository is only modified from one place. Also keep in mind that
45|project_name| will keep an exclusive lock on the repository while creating
46or deleting archives, which may make *simultaneous* backups fail.
47
48Can I copy or synchronize my repo to another location?
49------------------------------------------------------
50
51If you want to have redundant backup repositories (preferably at separate
52locations), the recommended way to do that is like this:
53
54- ``borg init repo1``
55- ``borg init repo2``
56- client machine ---borg create---> repo1
57- client machine ---borg create---> repo2
58
59This will create distinct repositories (separate repo ID, separate
60keys) and nothing bad happening in repo1 will influence repo2.
61
62Some people decide against above recommendation and create identical
63copies of a repo (using some copy / sync / clone tool).
64
65While this might be better than having no redundancy at all, you have
66to be very careful about how you do that and what you may / must not
67do with the result (if you decide against our recommendation).
68
69What you would get with this is:
70
71- client machine ---borg create---> repo
72- repo ---copy/sync---> copy-of-repo
73
74There is no special borg command to do the copying, you could just
75use any reliable tool that creates an identical copy (cp, rsync, rclone
76might be options).
77
78But think about whether that is really what you want. If something goes
79wrong in repo, you will have the same issue in copy-of-repo.
80
81Make sure you do the copy/sync while no backup is running, see
82:ref:`borg_with-lock` about how to do that.
83
84Also, you must not run borg against multiple instances of the same repo
85(like repo and copy-of-repo) as that would create severe issues:
86
87- Data loss: they have the same repository ID, so the borg client will
88  think they are identical and e.g. use the same local cache for them
89  (which is an issue if they happen to be not the same).
90  See :issue:`4272` for an example.
91- Encryption security issues if you would update repo and copy-of-repo
92  independently, due to AES counter reuse.
93
94There is also a similar encryption security issue for the disaster case:
95If you lose repo and the borg client-side config/cache and you restore
96the repo from an older copy-of-repo, you also run into AES counter reuse.
97
98"this is either an attack or unsafe" warning
99--------------------------------------------
100
101About the warning:
102
103  Cache, or information obtained from the security directory is newer than
104  repository - this is either an attack or unsafe (multiple repos with same ID)
105
106"unsafe": If not following the advice from the previous section, you can easily
107run into this by yourself by restoring an older copy of your repository.
108
109"attack": maybe an attacker has replaced your repo by an older copy, trying to
110trick you into AES counter reuse, trying to break your repo encryption.
111
112If you'ld decide to ignore this and accept unsafe operation for this repository,
113you could delete the manifest-timestamp and the local cache:
114
115::
116
117  borg config repo id   # shows the REPO_ID
118  rm ~/.config/borg/REPO_ID/manifest-timestamp
119  borg delete --cache-only REPO
120
121This is an unsafe and unsupported way to use borg, you have been warned.
122
123Which file types, attributes, etc. are *not* preserved?
124-------------------------------------------------------
125
126    * UNIX domain sockets (because it does not make sense - they are
127      meaningless without the running process that created them and the process
128      needs to recreate them in any case). So, don't panic if your backup
129      misses a UDS!
130    * The precise on-disk (or rather: not-on-disk) representation of the holes
131      in a sparse file.
132      Archive creation has no special support for sparse files, holes are
133      backed up as (deduplicated and compressed) runs of zero bytes.
134      Archive extraction has optional support to extract all-zero chunks as
135      holes in a sparse file.
136    * Some filesystem specific attributes, like btrfs NOCOW, see :ref:`platforms`.
137    * For hardlinked symlinks, the hardlinking can not be archived (and thus,
138      the hardlinking will not be done at extraction time). The symlinks will
139      be archived and extracted as non-hardlinked symlinks, see :issue:`2379`.
140
141Are there other known limitations?
142----------------------------------
143
144- A single archive can only reference a limited volume of file/dir metadata,
145  usually corresponding to tens or hundreds of millions of files/dirs.
146  When trying to go beyond that limit, you will get a fatal IntegrityError
147  exception telling that the (archive) object is too big.
148  An easy workaround is to create multiple archives with fewer items each.
149  See also the :ref:`archive_limitation` and :issue:`1452`.
150
151  :ref:`borg_info` shows how large (relative to the maximum size) existing
152  archives are.
153- borg extract only supports restoring into an empty destination. After that,
154  the destination will exactly have the contents of the extracted archive.
155  If you extract into a non-empty destination, borg will (for example) not
156  remove files which are in the destination, but not in the archive.
157  See :issue:`4598` for a workaround and more details.
158
159.. _checkpoints_parts:
160
161If a backup stops mid-way, does the already-backed-up data stay there?
162----------------------------------------------------------------------
163
164Yes, |project_name| supports resuming backups.
165
166During a backup a special checkpoint archive named ``<archive-name>.checkpoint``
167is saved every checkpoint interval (the default value for this is 30
168minutes) containing all the data backed-up until that point.
169
170This checkpoint archive is a valid archive,
171but it is only a partial backup (not all files that you wanted to backup are
172contained in it). Having it in the repo until a successful, full backup is
173completed is useful because it references all the transmitted chunks up
174to the checkpoint. This means that in case of an interruption, you only need to
175retransfer the data since the last checkpoint.
176
177If a backup was interrupted, you normally do not need to do anything special,
178just invoke ``borg create`` as you always do. If the repository is still locked,
179you may need to run ``borg break-lock`` before the next backup. You may use the
180same archive name as in previous attempt or a different one (e.g. if you always
181include the current datetime), it does not matter.
182
183|project_name| always does full single-pass backups, so it will start again
184from the beginning - but it will be much faster, because some of the data was
185already stored into the repo (and is still referenced by the checkpoint
186archive), so it does not need to get transmitted and stored again.
187
188Once your backup has finished successfully, you can delete all
189``<archive-name>.checkpoint`` archives. If you run ``borg prune``, it will
190also care for deleting unneeded checkpoints.
191
192Note: the checkpointing mechanism creates hidden, partial files in an archive,
193so that checkpoints even work while a big file is being processed.
194They are named ``<filename>.borg_part_<N>`` and all operations usually ignore
195these files, but you can make them considered by giving the option
196``--consider-part-files``. You usually only need that option if you are
197really desperate (e.g. if you have no completed backup of that file and you'ld
198rather get a partial file extracted than nothing). You do **not** want to give
199that option under any normal circumstances.
200
201Note that checkpoints inside files are created only since version 1.1, make
202sure you have an up-to-date version of borgbackup if you want to continue
203instead of retransferring a huge file. In some cases, there is only an outdated
204version shipped with your distribution (e.g. Debian). See :ref:`installation`.
205
206How can I backup huge file(s) over a unstable connection?
207---------------------------------------------------------
208
209This is not a problem anymore.
210
211For more details, see :ref:`checkpoints_parts`.
212
213How can I switch append-only mode on and off?
214-----------------------------------------------------------------------------------------------------------------------------------
215
216You could do that (via borg config REPO append_only 0/1), but using different
217ssh keys and different entries in ``authorized_keys`` is much easier and also
218maybe has less potential of things going wrong somehow.
219
220
221My machine goes to sleep causing `Broken pipe`
222----------------------------------------------
223
224When backing up your data over the network, your machine should not go to sleep.
225On macOS you can use `caffeinate` to avoid that.
226
227How can I switch append-only mode on and off?
228-----------------------------------------------------------------------------------------------------------------------------------
229
230You could do that (via borg config REPO append_only 0/1), but using different
231ssh keys and different entries in ``authorized_keys`` is much easier and also
232maybe has less potential of things going wrong somehow.
233
234How can I compare contents of an archive to my local filesystem?
235-----------------------------------------------------------------
236
237You can instruct ``export-tar`` to send a tar stream to the stdout, and
238then use ``tar`` to perform the comparison:
239
240::
241
242    borg export-tar /path/to/repo::archive-name - | tar --compare -f - -C /path/to/compare/to
243
244
245.. _faq_corrupt_repo:
246
247My machine goes to sleep causing `Broken pipe`
248----------------------------------------------
249
250When backing up your data over the network, your machine should not go to sleep.
251On macOS you can use `caffeinate` to avoid that.
252
253How can I restore huge file(s) over an unstable connection?
254-----------------------------------------------------------
255
256If you cannot manage to extract the whole big file in one go, you can extract
257all the part files and manually concatenate them together.
258
259For more details, see :ref:`checkpoints_parts`.
260
261Can |project_name| add redundancy to the backup data to deal with hardware malfunction?
262---------------------------------------------------------------------------------------
263
264No, it can't. While that at first sounds like a good idea to defend against
265some defect HDD sectors or SSD flash blocks, dealing with this in a
266reliable way needs a lot of low-level storage layout information and
267control which we do not have (and also can't get, even if we wanted).
268
269So, if you need that, consider RAID or a filesystem that offers redundant
270storage or just make backups to different locations / different hardware.
271
272See also :issue:`225`.
273
274Can |project_name| verify data integrity of a backup archive?
275-------------------------------------------------------------
276
277Yes, if you want to detect accidental data damage (like bit rot), use the
278``check`` operation. It will notice corruption using CRCs and hashes.
279If you want to be able to detect malicious tampering also, use an encrypted
280repo. It will then be able to check using CRCs and HMACs.
281
282Can I use Borg on SMR hard drives?
283----------------------------------
284
285SMR (shingled magnetic recording) hard drives are very different from
286regular hard drives. Applications have to behave in certain ways or
287performance will be heavily degraded.
288
289Borg 1.1 ships with default settings suitable for SMR drives,
290and has been successfully tested on *Seagate Archive v2* drives
291using the ext4 file system.
292
293Some Linux kernel versions between 3.19 and 4.5 had various bugs
294handling device-managed SMR drives, leading to IO errors, unresponsive
295drives and unreliable operation in general.
296
297For more details, refer to :issue:`2252`.
298
299.. _faq-integrityerror:
300
301I get an IntegrityError or similar - what now?
302----------------------------------------------
303
304A single error does not necessarily indicate bad hardware or a Borg
305bug. All hardware exhibits a bit error rate (BER). Hard drives are typically
306specified as exhibiting fewer than one error every 12 to 120 TB
307(one bit error in 10e14 to 10e15 bits). The specification is often called
308*unrecoverable read error rate* (URE rate).
309
310Apart from these very rare errors there are two main causes of errors:
311
312(i) Defective hardware: described below.
313(ii) Bugs in software (Borg, operating system, libraries):
314     Ensure software is up to date.
315     Check whether the issue is caused by any fixed bugs described in :ref:`important_notes`.
316
317
318.. rubric:: Finding defective hardware
319
320.. note::
321
322   Hardware diagnostics are operating system dependent and do not
323   apply universally. The commands shown apply for popular Unix-like
324   systems. Refer to your operating system's manual.
325
326Checking hard drives
327  Find the drive containing the repository and use *findmnt*, *mount* or *lsblk*
328  to learn the device path (typically */dev/...*) of the drive.
329  Then, smartmontools can retrieve self-diagnostics of the drive in question::
330
331      # smartctl -a /dev/sdSomething
332
333  The *Offline_Uncorrectable*, *Current_Pending_Sector* and *Reported_Uncorrect*
334  attributes indicate data corruption. A high *UDMA_CRC_Error_Count* usually
335  indicates a bad cable.
336
337  I/O errors logged by the system (refer to the system journal or
338  dmesg) can point to issues as well. I/O errors only affecting the
339  file system easily go unnoticed, since they are not reported to
340  applications (e.g. Borg), while these errors can still corrupt data.
341
342  Drives can corrupt some sectors in one event, while remaining
343  reliable otherwise. Conversely, drives can fail completely with no
344  advance warning. If in doubt, copy all data from the drive in
345  question to another drive -- just in case it fails completely.
346
347  If any of these are suspicious, a self-test is recommended::
348
349      # smartctl -t long /dev/sdSomething
350
351  Running ``fsck`` if not done already might yield further insights.
352
353Checking memory
354  Intermittent issues, such as ``borg check`` finding errors
355  inconsistently between runs, are frequently caused by bad memory.
356
357  Run memtest86+ (or an equivalent memory tester) to verify that
358  the memory subsystem is operating correctly.
359
360Checking processors
361  Processors rarely cause errors. If they do, they are usually overclocked
362  or otherwise operated outside their specifications. We do not recommend to
363  operate hardware outside its specifications for productive use.
364
365  Tools to verify correct processor operation include Prime95 (mprime), linpack,
366  and the `Intel Processor Diagnostic Tool
367  <https://downloadcenter.intel.com/download/19792/Intel-Processor-Diagnostic-Tool>`_
368  (applies only to Intel processors).
369
370.. rubric:: Repairing a damaged repository
371
372With any defective hardware found and replaced, the damage done to the repository
373needs to be ascertained and fixed.
374
375:ref:`borg_check` provides diagnostics and ``--repair`` options for repositories with
376issues. We recommend to first run without ``--repair`` to assess the situation.
377If the found issues and proposed repairs seem right, re-run "check" with ``--repair`` enabled.
378
379How probable is it to get a hash collision problem?
380---------------------------------------------------
381
382If you noticed, there are some issues (:issue:`170` (**warning: hell**) and :issue:`4884`)
383about the probability of a chunk having the same hash as another chunk, making the file
384corrupted because it grabbed the wrong chunk. This is called the `Birthday Problem
385<https://en.wikipedia.org/wiki/Birthday_problem>`_.
386
387There is a lot of probability in here so, I can give you my interpretation of
388such math but it's honestly better that you read it yourself and grab your own
389resolution from that.
390
391Assuming that all your chunks have a size of :math:`2^{21}` bytes (approximately 2.1 MB)
392and we have a "perfect" hash algorithm, we can think that the probability of collision
393would be of :math:`p^2/2^{n+1}` then, using SHA-256 (:math:`n=256`) and for example
394we have 1000 million chunks (:math:`p=10^9`) (1000 million chunks would be about 2100TB).
395The probability would be around to 0.0000000000000000000000000000000000000000000000000000000000043.
396
397A mass-murderer space rock happens about once every 30 million years on average.
398This leads to a probability of such an event occurring in the next second to about :math:`10^{-15}`.
399That's **45** orders of magnitude more probable than the SHA-256 collision. Briefly stated,
400if you find SHA-256 collisions scary then your priorities are wrong. This example was grabbed from
401`this SO answer <https://stackoverflow.com/a/4014407/13359375>`_, it's great honestly.
402
403Still, the real question is if Borg tries to not make this happen?
404
405Well... it used to not check anything but there was a feature added which saves the size
406of the chunks too, so the size of the chunks is compared to the size that you got with the
407hash and if the check says there is a mismatch it will raise an exception instead of corrupting
408the file. This doesn't save us from everything but reduces the chances of corruption.
409There are other ways of trying to escape this but it would affect performance so much that
410it wouldn't be worth it and it would contradict Borg's design, so if you don't want this to
411happen, simply don't use Borg.
412
413Why is the time elapsed in the archive stats different from wall clock time?
414----------------------------------------------------------------------------
415
416Borg needs to write the time elapsed into the archive metadata before finalizing
417the archive, compacting the segments, and committing the repo & cache. This means
418when Borg is run with e.g. the ``time`` command, the duration shown in the archive
419stats may be shorter than the full time the command runs for.
420
421How do I configure different prune policies for different directories?
422----------------------------------------------------------------------
423
424Say you want to prune ``/var/log`` faster than the rest of
425``/``. How do we implement that? The answer is to backup to different
426archive *names* and then implement different prune policies for
427different prefixes. For example, you could have a script that does::
428
429    borg create --exclude /var/log $REPOSITORY:main-$(date +%Y-%m-%d) /
430    borg create $REPOSITORY:logs-$(date +%Y-%m-%d) /var/log
431
432Then you would have two different prune calls with different policies::
433
434    borg prune --verbose --list -d 30 --prefix main- "$REPOSITORY"
435    borg prune --verbose --list -d 7  --prefix logs- "$REPOSITORY"
436
437This will keep 7 days of logs and 30 days of everything else. Borg 1.1
438also supports the ``--glob-archives`` parameter.
439
440How do I remove files from an existing backup?
441----------------------------------------------
442
443Say you now want to remove old logfiles because you changed your
444backup policy as described above. The only way to do this is to use
445the :ref:`borg_recreate` command to rewrite all archives with a
446different ``--exclude`` pattern. See the examples in the
447:ref:`borg_recreate` manpage for more information.
448
449Can I safely change the compression level or algorithm?
450--------------------------------------------------------
451
452The compression level and algorithm don't affect deduplication. Chunk ID hashes
453are calculated *before* compression. New compression settings
454will only be applied to new chunks, not existing chunks. So it's safe
455to change them.
456
457
458Security
459########
460
461.. _borg_security_critique:
462
463Isn't BorgBackup's AES-CTR crypto broken?
464-----------------------------------------
465
466If a nonce (counter) value is reused, AES-CTR mode crypto is broken.
467
468To exploit the AES counter management issue, an attacker would need to have
469access to the borg repository.
470
471By tampering with the repo, the attacker could bring the repo into a state so
472that it reports a lower "highest used counter value" than the one that actually
473was used. The client would usually notice that, because it rather trusts the
474clientside stored "highest used counter value" than trusting the server.
475
476But there are situations, where this is simply not possible:
477
478- If clients A and B used the repo, the client A can only know its own highest
479  CTR value, but not the one produced by B. That is only known to (B and) the
480  server (the repo) and thus the client A needs to trust the server about the
481  value produced by B in that situation. You can't do much about this except
482  not having multiple clients per repo.
483
484- Even if there is only one client, if client-side information is completely
485  lost (e.g. due to disk defect), the client also needs to trust the value from
486  server side. You can avoid this by not continuing to write to the repository
487  after you have lost clientside borg information.
488
489.. _home_config_borg:
490
491How important is the $HOME/.config/borg directory?
492--------------------------------------------------
493
494The Borg config directory has content that you should take care of:
495
496``security`` subdirectory
497  Each directory here represents one Borg repository by its ID and contains the last known status.
498  If a repository's status is different from this information at the beginning of BorgBackup
499  operation, Borg outputs warning messages and asks for confirmation, so make sure you do not lose
500  or manipulate these files. However, apart from those warnings, a loss of these files can be
501  recovered.
502
503``keys`` subdirectory
504  In this directory all your repository keyfiles are stored. You MUST make sure to have an
505  independent backup of these keyfiles, otherwise you cannot access your backups anymore if you lose
506  them. You also MUST keep these files secret; everyone who gains access to your repository and has
507  the corresponding keyfile (and the key passphrase) can extract it.
508
509Make sure that only you have access to the Borg config directory.
510
511.. _cache_security:
512
513Do I need to take security precautions regarding the cache?
514-----------------------------------------------------------
515
516The cache contains a lot of metadata information about the files in
517your repositories and it is not encrypted.
518
519However, the assumption is that the cache is being stored on the very
520same system which also contains the original files which are being
521backed up. So someone with access to the cache files would also have
522access the original files anyway.
523
524The Internals section contains more details about :ref:`cache`. If you ever need to move the cache
525to a different location, this can be achieved by using the appropriate :ref:`env_vars`.
526
527How can I specify the encryption passphrase programmatically?
528-------------------------------------------------------------
529
530There are several ways to specify a passphrase without human intervention:
531
532Setting ``BORG_PASSPHRASE``
533  The passphrase can be specified using the ``BORG_PASSPHRASE`` environment variable.
534  This is often the simplest option, but can be insecure if the script that sets it
535  is world-readable.
536
537  .. _password_env:
538  .. note:: Be careful how you set the environment; using the ``env``
539          command, a ``system()`` call or using inline shell scripts
540          (e.g. ``BORG_PASSPHRASE=hunter2 borg ...``)
541          might expose the credentials in the process list directly
542          and they will be readable to all users on a system. Using
543          ``export`` in a shell script file should be safe, however, as
544          the environment of a process is `accessible only to that
545          user
546          <https://security.stackexchange.com/questions/14000/environment-variable-accessibility-in-linux/14009#14009>`_.
547
548Using ``BORG_PASSCOMMAND`` with a properly permissioned file
549  Another option is to create a file with a password in it in your home
550  directory and use permissions to keep anyone else from reading it. For
551  example, first create a key::
552
553    head -c 32 /dev/urandom | base64 -w 0 > ~/.borg-passphrase
554    chmod 400 ~/.borg-passphrase
555
556  Then in an automated script one can put::
557
558    export BORG_PASSCOMMAND="cat $HOME/.borg-passphrase"
559
560  and Borg will automatically use that passphrase.
561
562Using keyfile-based encryption with a blank passphrase
563  It is possible to encrypt your repository in ``keyfile`` mode instead of the default
564  ``repokey`` mode and use a blank passphrase for the key file (simply press Enter twice
565  when ``borg init`` asks for the password). See :ref:`encrypted_repos`
566  for more details.
567
568Using ``BORG_PASSCOMMAND`` with macOS Keychain
569  macOS has a native manager for secrets (such as passphrases) which is safer
570  than just using a file as it is encrypted at rest and unlocked manually
571  (fortunately, the login keyring automatically unlocks when you login). With
572  the built-in ``security`` command, you can access it from the command line,
573  making it useful for ``BORG_PASSCOMMAND``.
574
575  First generate a passphrase and use ``security`` to save it to your login
576  (default) keychain::
577
578    security add-generic-password -D secret -U -a $USER -s borg-passphrase -w $(head -c 32 /dev/urandom | base64 -w 0)
579
580  In your backup script retrieve it in the ``BORG_PASSCOMMAND``::
581
582    export BORG_PASSCOMMAND="security find-generic-password -a $USER -s borg-passphrase -w"
583
584Using ``BORG_PASSCOMMAND`` with GNOME Keyring
585  GNOME also has a keyring daemon that can be used to store a Borg passphrase.
586  First ensure ``libsecret-tools``, ``gnome-keyring`` and ``libpam-gnome-keyring``
587  are installed. If ``libpam-gnome-keyring`` wasn't already installed, ensure it
588  runs on login::
589
590    sudo sh -c "echo session optional pam_gnome_keyring.so auto_start >> /etc/pam.d/login"
591    sudo sh -c "echo password optional pam_gnome_keyring.so >> /etc/pam.d/passwd"
592    # you may need to relogin afterwards to activate the login keyring
593
594  Then add a secret to the login keyring::
595
596    head -c 32 /dev/urandom | base64 -w 0 | secret-tool store borg-repository repo-name --label="Borg Passphrase"
597
598  If a dialog box pops up prompting you to pick a password for a new keychain, use your
599  login password. If there is a checkbox for automatically unlocking on login, check it
600  to allow backups without any user intervention whatsoever.
601
602  Once the secret is saved, retrieve it in a backup script using ``BORG_PASSCOMMAND``::
603
604    export BORG_PASSCOMMAND="secret-tool lookup borg-repository repo-name"
605
606  .. note:: For this to automatically unlock the keychain it must be run
607    in the ``dbus`` session of an unlocked terminal; for example, running a backup
608    script as a ``cron`` job might not work unless you also ``export DISPLAY=:0``
609    so ``secret-tool`` can pick up your open session. `It gets even more complicated`__
610    when you are running the tool as a different user (e.g. running a backup as root
611    with the password stored in the user keyring).
612
613__ https://github.com/borgbackup/borg/pull/2837#discussion_r127641330
614
615Using ``BORG_PASSCOMMAND`` with KWallet
616  KDE also has a keychain feature in the form of KWallet. The command-line tool
617  ``kwalletcli`` can be used to store and retrieve secrets. Ensure ``kwalletcli``
618  is installed, generate a passphrase, and store it in your "wallet"::
619
620    head -c 32 /dev/urandom | base64 -w 0 | kwalletcli -Pe borg-passphrase -f Passwords
621
622  Once the secret is saved, retrieve it in a backup script using ``BORG_PASSCOMMAND``::
623
624    export BORG_PASSCOMMAND="kwalletcli -e borg-passphrase -f Passwords"
625
626When backing up to remote encrypted repos, is encryption done locally?
627----------------------------------------------------------------------
628
629Yes, file and directory metadata and data is locally encrypted, before
630leaving the local machine. We do not mean the transport layer encryption
631by that, but the data/metadata itself. Transport layer encryption (e.g.
632when ssh is used as a transport) applies additionally.
633
634When backing up to remote servers, do I have to trust the remote server?
635------------------------------------------------------------------------
636
637Yes and No.
638
639No, as far as data confidentiality is concerned - if you use encryption,
640all your files/dirs data and metadata are stored in their encrypted form
641into the repository.
642
643Yes, as an attacker with access to the remote server could delete (or
644otherwise make unavailable) all your backups.
645
646How can I protect against a hacked backup client?
647-------------------------------------------------
648
649Assume you backup your backup client machine C to the backup server S and
650C gets hacked. In a simple push setup, the attacker could then use borg on
651C to delete all backups residing on S.
652
653These are your options to protect against that:
654
655- Do not allow to permanently delete data from the repo, see :ref:`append_only_mode`.
656- Use a pull-mode setup using ``ssh -R``, see :ref:`pull_backup` for more information.
657- Mount C's filesystem on another machine and then create a backup of it.
658- Do not give C filesystem-level access to S.
659
660See :ref:`hosting_repositories` for a detailed protection guide.
661
662How can I protect against a hacked backup server?
663-------------------------------------------------
664
665Just in case you got the impression that pull-mode backups are way more safe
666than push-mode, you also need to consider the case that your backup server S
667gets hacked. In case S has access to a lot of clients C, that might bring you
668into even bigger trouble than a hacked backup client in the previous FAQ entry.
669
670These are your options to protect against that:
671
672- Use the standard push-mode setup (see also previous FAQ entry).
673- Mount (the repo part of) S's filesystem on C.
674- Do not give S file-system level access to C.
675- Have your backup server at a well protected place (maybe not reachable from
676  the internet), configure it safely, apply security updates, monitor it, ...
677
678How can I protect against theft, sabotage, lightning, fire, ...?
679----------------------------------------------------------------
680
681In general: if your only backup medium is nearby the backupped machine and
682always connected, you can easily get into trouble: they likely share the same
683fate if something goes really wrong.
684
685Thus:
686
687- have multiple backup media
688- have media disconnected from network, power, computer
689- have media at another place
690- have a relatively recent backup on your media
691
692How do I report a security issue with Borg?
693-------------------------------------------
694
695Send a private email to the :ref:`security contact <security-contact>`
696if you think you have discovered a security issue.
697Please disclose security issues responsibly.
698
699Common issues
700#############
701
702Why does Borg extract hang after some time?
703-------------------------------------------
704
705When I do a ``borg extract``, after a while all activity stops, no cpu usage,
706no downloads.
707
708This may happen when the SSH connection is stuck on server side. You can
709configure SSH on client side to prevent this by sending keep-alive requests,
710for example in ~/.ssh/config:
711
712::
713
714    Host borg.example.com
715        # Client kills connection after 3*30 seconds without server response:
716        ServerAliveInterval 30
717        ServerAliveCountMax 3
718
719You can also do the opposite and configure SSH on server side in
720/etc/ssh/sshd_config, to make the server send keep-alive requests to the client:
721
722::
723
724    # Server kills connection after 3*30 seconds without client response:
725    ClientAliveInterval 30
726    ClientAliveCountMax 3
727
728How can I deal with my very unstable SSH connection?
729----------------------------------------------------
730
731If you have issues with lost connections during long-running borg commands, you
732could try to work around:
733
734- Make partial extracts like ``borg extract REPO PATTERN`` to do multiple
735  smaller extraction runs that complete before your connection has issues.
736- Try using ``borg mount REPO MOUNTPOINT`` and ``rsync -avH`` from
737  ``MOUNTPOINT`` to your desired extraction directory. If the connection breaks
738  down, just repeat that over and over again until rsync does not find anything
739  to do any more. Due to the way borg mount works, this might be less efficient
740  than borg extract for bigger volumes of data.
741
742Why do I get "connection closed by remote" after a while?
743---------------------------------------------------------
744
745When doing a backup to a remote server (using a ssh: repo URL), it sometimes
746stops after a while (some minutes, hours, ... - not immediately) with
747"connection closed by remote" error message. Why?
748
749That's a good question and we are trying to find a good answer in :issue:`636`.
750
751Why am I seeing idle borg serve processes on the repo server?
752-------------------------------------------------------------
753
754Maybe the ssh connection between client and server broke down and that was not
755yet noticed on the server. Try these settings:
756
757::
758
759    # /etc/ssh/sshd_config on borg repo server - kill connection to client
760    # after ClientAliveCountMax * ClientAliveInterval seconds with no response
761    ClientAliveInterval 20
762    ClientAliveCountMax 3
763
764If you have multiple borg create ... ; borg create ... commands in a already
765serialized way in a single script, you need to give them ``--lock-wait N`` (with N
766being a bit more than the time the server needs to terminate broken down
767connections and release the lock).
768
769.. _disable_archive_chunks:
770
771The borg cache eats way too much disk space, what can I do?
772-----------------------------------------------------------
773
774This may especially happen if borg needs to rebuild the local "chunks" index -
775either because it was removed, or because it was not coherent with the
776repository state any more (e.g. because another borg instance changed the
777repository).
778
779To optimize this rebuild process, borg caches per-archive information in the
780``chunks.archive.d/`` directory. It won't help the first time it happens, but it
781will make the subsequent rebuilds faster (because it needs to transfer less data
782from the repository). While being faster, the cache needs quite some disk space,
783which might be unwanted.
784
785There is a temporary (but maybe long lived) hack to avoid using lots of disk
786space for chunks.archive.d (see :issue:`235` for details):
787
788::
789
790    # this assumes you are working with the same user as the backup.
791    cd ~/.cache/borg/$(borg config /path/to/repo id)
792    rm -rf chunks.archive.d ; touch chunks.archive.d
793
794This deletes all the cached archive chunk indexes and replaces the directory
795that kept them with a file, so borg won't be able to store anything "in" there
796in future.
797
798This has some pros and cons, though:
799
800- much less disk space needs for ~/.cache/borg.
801- chunk cache resyncs will be slower as it will have to transfer chunk usage
802  metadata for all archives from the repository (which might be slow if your
803  repo connection is slow) and it will also have to build the hashtables from
804  that data.
805  chunk cache resyncs happen e.g. if your repo was written to by another
806  machine (if you share same backup repo between multiple machines) or if
807  your local chunks cache was lost somehow.
808
809The long term plan to improve this is called "borgception", see :issue:`474`.
810
811Can I backup my root partition (/) with Borg?
812---------------------------------------------
813
814Backing up your entire root partition works just fine, but remember to
815exclude directories that make no sense to backup, such as /dev, /proc,
816/sys, /tmp and /run, and to use ``--one-file-system`` if you only want to
817backup the root partition (and not any mounted devices e.g.).
818
819If it crashes with a UnicodeError, what can I do?
820-------------------------------------------------
821
822Check if your encoding is set correctly. For most POSIX-like systems, try::
823
824  export LANG=en_US.UTF-8  # or similar, important is correct charset
825
826I can't extract non-ascii filenames by giving them on the commandline!?
827-----------------------------------------------------------------------
828
829This might be due to different ways to represent some characters in unicode
830or due to other non-ascii encoding issues.
831
832If you run into that, try this:
833
834- avoid the non-ascii characters on the commandline by e.g. extracting
835  the parent directory (or even everything)
836- mount the repo using FUSE and use some file manager
837
838.. _expected_performance:
839
840What's the expected backup performance?
841---------------------------------------
842
843A first backup will usually be somehow "slow" because there is a lot of data
844to process. Performance here depends on a lot of factors, so it is hard to
845give specific numbers.
846
847Subsequent backups are usually very fast if most files are unchanged and only
848a few are new or modified. The high performance on unchanged files primarily depends
849only on a few factors (like fs recursion + metadata reading performance and the
850files cache working as expected) and much less on other factors.
851
852E.g., for this setup:
853
854- server grade machine (4C/8T 2013 Xeon, 64GB RAM, 2x good 7200RPM disks)
855- local zfs filesystem (mirrored) containing the backup source data
856- repository is remote (does not matter much for unchanged files)
857- backup job runs while machine is otherwise idle
858
859The observed performance is that |project_name| can process about
860**1 million unchanged files (and a few small changed ones) in 4 minutes!**
861
862If you are seeing much less than that in similar circumstances, read the next
863few FAQ entries below.
864
865.. _slow_backup:
866
867Why is backup slow for me?
868--------------------------
869
870So, if you feel your |project_name| backup is too slow somehow, you should find out why.
871
872The usual way to approach this is to add ``--list --filter=AME --stats`` to your
873``borg create`` call to produce more log output, including a file list (with file status
874characters) and also some statistics at the end of the backup.
875
876Then you do the backup and look at the log output:
877
878- stats: Do you really have little changes or are there more changes than you thought?
879  In the stats you can see the overall volume of changed data, which needed to be
880  added to the repo. If that is a lot, that can be the reason why it is slow.
881- ``A`` status ("added") in the file list:
882  If you see that often, you have a lot of new files (files that |project_name| did not find
883  in the files cache). If you think there is something wrong with that (the file was there
884  already in the previous backup), please read the FAQ entries below.
885- ``M`` status ("modified") in the file list:
886  If you see that often, |project_name| thinks that a lot of your files might be modified
887  (|project_name| found them in the files cache, but the metadata read from the filesystem did
888  not match the metadata stored in the files cache).
889  In such a case, |project_name| will need to process the files' contents completely, which is
890  much slower than processing unmodified files (|project_name| does not read their contents!).
891  The metadata values used in this comparison are determined by the ``--files-cache`` option
892  and could be e.g. size, ctime and inode number (see the ``borg create`` docs for more
893  details and potential issues).
894  You can use the ``stat`` command on files to manually look at fs metadata to debug if
895  there is any unexpected change triggering the ``M`` status.
896
897See also the next few FAQ entries for more details.
898
899.. _a_status_oddity:
900
901I am seeing 'A' (added) status for an unchanged file!?
902------------------------------------------------------
903
904The files cache is used to determine whether |project_name| already
905"knows" / has backed up a file and if so, to skip the file from
906chunking. It intentionally *excludes* files that have a timestamp
907which is the same as the newest timestamp in the created archive.
908
909So, if you see an 'A' status for unchanged file(s), they are likely the files
910with the most recent timestamp in that archive.
911
912This is expected: it is to avoid data loss with files that are backed up from
913a snapshot and that are immediately changed after the snapshot (but within
914timestamp granularity time, so the timestamp would not change). Without the code that
915removes these files from the files cache, the change that happened right after
916the snapshot would not be contained in the next backup as |project_name| would
917think the file is unchanged.
918
919This does not affect deduplication, the file will be chunked, but as the chunks
920will often be the same and already stored in the repo (except in the above
921mentioned rare condition), it will just re-use them as usual and not store new
922data chunks.
923
924If you want to avoid unnecessary chunking, just create or touch a small or
925empty file in your backup source file set (so that one has the latest timestamp,
926not your 50GB VM disk image) and, if you do snapshots, do the snapshot after
927that.
928
929Since only the files cache is used in the display of files status,
930those files are reported as being added when, really, chunks are
931already used.
932
933By default, ctime (change time) is used for the timestamps to have a rather
934safe change detection (see also the --files-cache option).
935
936Furthermore, pathnames recorded in files cache are always absolute, even if you specify
937source directories with relative pathname. If relative pathnames are stable, but absolute are
938not (for example if you mount a filesystem without stable mount points for each backup or
939if you are running the backup from a filesystem snapshot whose name is not stable), borg
940will assume that files are different and will report them as 'added', even though no new
941chunks will be actually recorded for them. To avoid this, you could bind mount your source
942directory in a directory with the stable path.
943
944.. _always_chunking:
945
946It always chunks all my files, even unchanged ones!
947---------------------------------------------------
948
949|project_name| maintains a files cache where it remembers the timestamp, size
950and inode of files. When |project_name| does a new backup and starts processing
951a file, it first looks whether the file has changed (compared to the values
952stored in the files cache). If the values are the same, the file is assumed
953unchanged and thus its contents won't get chunked (again).
954
955|project_name| can't keep an infinite history of files of course, thus entries
956in the files cache have a "maximum time to live" which is set via the
957environment variable BORG_FILES_CACHE_TTL (and defaults to 20).
958Every time you do a backup (on the same machine, using the same user), the
959cache entries' ttl values of files that were not "seen" are incremented by 1
960and if they reach BORG_FILES_CACHE_TTL, the entry is removed from the cache.
961
962So, for example, if you do daily backups of 26 different data sets A, B,
963C, ..., Z on one machine (using the default TTL), the files from A will be
964already forgotten when you repeat the same backups on the next day and it
965will be slow because it would chunk all the files each time. If you set
966BORG_FILES_CACHE_TTL to at least 26 (or maybe even a small multiple of that),
967it would be much faster.
968
969Another possible reason is that files don't always have the same path, for
970example if you mount a filesystem without stable mount points for each backup or if you are running the backup from a filesystem snapshot whose name is not stable.
971If the directory where you mount a filesystem is different every time,
972|project_name| assumes they are different files. This is true even if you backup these files with relative pathnames - borg uses full
973pathnames in files cache regardless.
974
975
976Is there a way to limit bandwidth with |project_name|?
977------------------------------------------------------
978
979To limit upload (i.e. :ref:`borg_create`) bandwidth, use the
980``--remote-ratelimit`` option.
981
982There is no built-in way to limit *download*
983(i.e. :ref:`borg_extract`) bandwidth, but limiting download bandwidth
984can be accomplished with pipeviewer_:
985
986Create a wrapper script:  /usr/local/bin/pv-wrapper
987
988::
989
990    #!/bin/sh
991        ## -q, --quiet              do not output any transfer information at all
992        ## -L, --rate-limit RATE    limit transfer to RATE bytes per second
993    RATE=307200
994    pv -q -L $RATE  | "$@"
995
996Add BORG_RSH environment variable to use pipeviewer wrapper script with ssh.
997
998::
999
1000    export BORG_RSH='/usr/local/bin/pv-wrapper ssh'
1001
1002Now |project_name| will be bandwidth limited. Nice thing about pv is that you can change rate-limit on the fly:
1003
1004::
1005
1006    pv -R $(pidof pv) -L 102400
1007
1008.. _pipeviewer: http://www.ivarch.com/programs/pv.shtml
1009
1010
1011How can I avoid unwanted base directories getting stored into archives?
1012-----------------------------------------------------------------------
1013
1014Possible use cases:
1015
1016- Another file system is mounted and you want to backup it with original paths.
1017- You have created a BTRFS snapshot in a ``/.snapshots`` directory for backup.
1018
1019To achieve this, run ``borg create`` within the mountpoint/snapshot directory:
1020
1021::
1022
1023    # Example: Some file system mounted in /mnt/rootfs.
1024    cd /mnt/rootfs
1025    borg create /path/to/repo::rootfs_backup .
1026
1027
1028I am having troubles with some network/FUSE/special filesystem, why?
1029--------------------------------------------------------------------
1030
1031|project_name| is doing nothing special in the filesystem, it only uses very
1032common and compatible operations (even the locking is just "mkdir").
1033
1034So, if you are encountering issues like slowness, corruption or malfunction
1035when using a specific filesystem, please try if you can reproduce the issues
1036with a local (non-network) and proven filesystem (like ext4 on Linux).
1037
1038If you can't reproduce the issue then, you maybe have found an issue within
1039the filesystem code you used (not with |project_name|). For this case, it is
1040recommended that you talk to the developers / support of the network fs and
1041maybe open an issue in their issue tracker. Do not file an issue in the
1042|project_name| issue tracker.
1043
1044If you can reproduce the issue with the proven filesystem, please file an
1045issue in the |project_name| issue tracker about that.
1046
1047
1048Why does running 'borg check --repair' warn about data loss?
1049------------------------------------------------------------
1050
1051Repair usually works for recovering data in a corrupted archive. However,
1052it's impossible to predict all modes of corruption. In some very rare
1053instances, such as malfunctioning storage hardware, additional repo
1054corruption may occur. If you can't afford to lose the repo, it's strongly
1055recommended that you perform repair on a copy of the repo.
1056
1057In other words, the warning is there to emphasize that |project_name|:
1058  - Will perform automated routines that modify your backup repository
1059  - Might not actually fix the problem you are experiencing
1060  - Might, in very rare cases, further corrupt your repository
1061
1062In the case of malfunctioning hardware, such as a drive or USB hub
1063corrupting data when read or written, it's best to diagnose and fix the
1064cause of the initial corruption before attempting to repair the repo. If
1065the corruption is caused by a one time event such as a power outage,
1066running `borg check --repair` will fix most problems.
1067
1068
1069Why isn't there more progress / ETA information displayed?
1070----------------------------------------------------------
1071
1072Some borg runs take quite a bit, so it would be nice to see a progress display,
1073maybe even including a ETA (expected time of "arrival" [here rather "completion"]).
1074
1075For some functionality, this can be done: if the total amount of work is more or
1076less known, we can display progress. So check if there is a ``--progress`` option.
1077
1078But sometimes, the total amount is unknown (e.g. for ``borg create`` we just do
1079a single pass over the filesystem, so we do not know the total file count or data
1080volume before reaching the end). Adding another pass just to determine that would
1081take additional time and could be incorrect, if the filesystem is changing.
1082
1083Even if the fs does not change and we knew count and size of all files, we still
1084could not compute the ``borg create`` ETA as we do not know the amount of changed
1085chunks, how the bandwidth of source and destination or system performance might
1086fluctuate.
1087
1088You see, trying to display ETA would be futile. The borg developers prefer to
1089rather not implement progress / ETA display than doing futile attempts.
1090
1091See also: https://xkcd.com/612/
1092
1093
1094Why am I getting 'Operation not permitted' errors when backing up on sshfs?
1095---------------------------------------------------------------------------
1096
1097By default, ``sshfs`` is not entirely POSIX-compliant when renaming files due to
1098a technicality in the SFTP protocol. Fortunately, it also provides a workaround_
1099to make it behave correctly::
1100
1101    sshfs -o workaround=rename user@host:dir /mnt/dir
1102
1103.. _workaround: https://unix.stackexchange.com/a/123236
1104
1105
1106Can I disable checking for free disk space?
1107-------------------------------------------
1108
1109In some cases, the free disk space of the target volume is reported incorrectly.
1110This can happen for CIFS- or FUSE shares. If you are sure that your target volume
1111will always have enough disk space, you can use the following workaround to disable
1112checking for free disk space::
1113
1114    borg config -- $REPO_LOCATION additional_free_space -2T
1115
1116How do I rename a repository?
1117-----------------------------
1118
1119There is nothing special that needs to be done, you can simply rename the
1120directory that corresponds to the repository. However, the next time borg
1121interacts with the repository (i.e, via ``borg list``), depending on the value
1122of ``BORG_RELOCATED_REPO_ACCESS_IS_OK``, borg may warn you that the repository
1123has been moved. You will be given a prompt to confirm you are OK with this.
1124
1125If ``BORG_RELOCATED_REPO_ACCESS_IS_OK`` is unset, borg will interactively ask for
1126each repository whether it's OK.
1127
1128It may be useful to set ``BORG_RELOCATED_REPO_ACCESS_IS_OK=yes`` to avoid the
1129prompts when renaming multiple repositories or in a non-interactive context
1130such as a script. See :doc:`deployment` for an example.
1131
1132
1133Miscellaneous
1134#############
1135
1136Requirements for the borg single-file binary, esp. (g)libc?
1137-----------------------------------------------------------
1138
1139We try to build the binary on old, but still supported systems - to keep the
1140minimum requirement for the (g)libc low. The (g)libc can't be bundled into
1141the binary as it needs to fit your kernel and OS, but Python and all other
1142required libraries will be bundled into the binary.
1143
1144If your system fulfills the minimum (g)libc requirement (see the README that
1145is released with the binary), there should be no problem. If you are slightly
1146below the required version, maybe just try. Due to the dynamic loading (or not
1147loading) of some shared libraries, it might still work depending on what
1148libraries are actually loaded and used.
1149
1150In the borg git repository, there is scripts/glibc_check.py that can determine
1151(based on the symbols' versions they want to link to) whether a set of given
1152(Linux) binaries works with a given glibc version.
1153
1154
1155Why was Borg forked from Attic?
1156-------------------------------
1157
1158Borg was created in May 2015 in response to the difficulty of getting new
1159code or larger changes incorporated into Attic and establishing a bigger
1160developer community / more open development.
1161
1162More details can be found in `ticket 217
1163<https://github.com/jborg/attic/issues/217>`_ that led to the fork.
1164
1165Borg intends to be:
1166
1167* simple:
1168
1169  * as simple as possible, but no simpler
1170  * do the right thing by default, but offer options
1171* open:
1172
1173  * welcome feature requests
1174  * accept pull requests of good quality and coding style
1175  * give feedback on PRs that can't be accepted "as is"
1176  * discuss openly, don't work in the dark
1177* changing:
1178
1179  * Borg is not compatible with Attic
1180  * do not break compatibility accidentally, without a good reason
1181    or without warning. allow compatibility breaking for other cases.
1182  * if major version number changes, it may have incompatible changes
1183
1184Migrating from Attic
1185####################
1186
1187What are the differences between Attic and Borg?
1188------------------------------------------------
1189
1190Borg is a fork of `Attic`_ and maintained by "`The Borg collective`_".
1191
1192.. _Attic: https://github.com/jborg/attic
1193.. _The Borg collective: https://borgbackup.readthedocs.org/en/latest/authors.html
1194
1195Here's a (incomplete) list of some major changes:
1196
1197* lots of attic issues fixed (see `issue #5 <https://github.com/borgbackup/borg/issues/5>`_),
1198  including critical data corruption bugs and security issues.
1199* more open, faster paced development (see `issue #1 <https://github.com/borgbackup/borg/issues/1>`_)
1200* less chunk management overhead (less memory and disk usage for chunks index)
1201* faster remote cache resync (useful when backing up multiple machines into same repo)
1202* compression: no, lz4, zstd, zlib or lzma compression, adjustable compression levels
1203* repokey replaces problematic passphrase mode (you can't change the passphrase nor the pbkdf2 iteration count in "passphrase" mode)
1204* simple sparse file support, great for virtual machine disk files
1205* can read special files (e.g. block devices) or from stdin, write to stdout
1206* mkdir-based locking is more compatible than attic's posix locking
1207* uses fadvise to not spoil / blow up the fs cache
1208* better error messages / exception handling
1209* better logging, screen output, progress indication
1210* tested on misc. Linux systems, 32 and 64bit, FreeBSD, OpenBSD, NetBSD, macOS
1211
1212Please read the :ref:`changelog` (or ``docs/changes.rst`` in the source distribution) for more
1213information.
1214
1215Borg is not compatible with original Attic (but there is a one-way conversion).
1216
1217How do I migrate from Attic to Borg?
1218------------------------------------
1219
1220Use :ref:`borg_upgrade`. This is a one-way process that cannot be reversed.
1221
1222There are some caveats:
1223
1224- The upgrade can only be performed on local repositories.
1225  It cannot be performed on remote repositories.
1226
1227- If the repository is in "keyfile" encryption mode, the keyfile must
1228  exist locally or it must be manually moved after performing the upgrade:
1229
1230  1. Get the repository ID with ``borg config /path/to/repo id``.
1231  2. Locate the attic key file at ``~/.attic/keys/``. The correct key for the
1232     repository starts with the line ``ATTIC_KEY <repository id>``.
1233  3. Copy the attic key file to ``~/.config/borg/keys/``
1234  4. Change the first line from ``ATTIC_KEY ...`` to ``BORG_KEY ...``.
1235  5. Verify that the repository is now accessible (e.g. ``borg list <repository>``).
1236- Attic and Borg use different :ref:`"chunker params" <chunker-params>`.
1237  This means that data added by Borg won't deduplicate with the existing data
1238  stored by Attic. The effect is lessened if the files cache is used with Borg.
1239- Repositories in "passphrase" mode *must* be migrated to "repokey" mode using
1240  :ref:`borg_key_migrate-to-repokey`. Borg does not support the "passphrase" mode
1241  any other way.
1242
1243Why is my backup bigger than with attic?
1244----------------------------------------
1245
1246Attic was rather unflexible when it comes to compression, it always
1247compressed using zlib level 6 (no way to switch compression off or
1248adjust the level or algorithm).
1249
1250The default in Borg is lz4, which is fast enough to not use significant CPU time
1251in most cases, but can only achieve modest compression. It still compresses
1252easily compressed data fairly well.
1253
1254Borg also offers zstd, zlib and lzma compression, choose wisely.
1255
1256Which choice is the best option depends on a number of factors, like
1257bandwidth to the repository, how well the data compresses, available CPU
1258power and so on.
1259