1.\"     $NetBSD: raidctl.8,v 1.61 2010/01/27 09:26:16 wiz Exp $
2.\"
3.\" Copyright (c) 1998, 2002 The NetBSD Foundation, Inc.
4.\" All rights reserved.
5.\"
6.\" This code is derived from software contributed to The NetBSD Foundation
7.\" by Greg Oster
8.\"
9.\" Redistribution and use in source and binary forms, with or without
10.\" modification, are permitted provided that the following conditions
11.\" are met:
12.\" 1. Redistributions of source code must retain the above copyright
13.\"    notice, this list of conditions and the following disclaimer.
14.\" 2. Redistributions in binary form must reproduce the above copyright
15.\"    notice, this list of conditions and the following disclaimer in the
16.\"    documentation and/or other materials provided with the distribution.
17.\"
18.\" THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
19.\" ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
20.\" TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
21.\" PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
22.\" BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
23.\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
24.\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
25.\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
26.\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
27.\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
28.\" POSSIBILITY OF SUCH DAMAGE.
29.\"
30.\"
31.\" Copyright (c) 1995 Carnegie-Mellon University.
32.\" All rights reserved.
33.\"
34.\" Author: Mark Holland
35.\"
36.\" Permission to use, copy, modify and distribute this software and
37.\" its documentation is hereby granted, provided that both the copyright
38.\" notice and this permission notice appear in all copies of the
39.\" software, derivative works or modified versions, and any portions
40.\" thereof, and that both notices appear in supporting documentation.
41.\"
42.\" CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
43.\" CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND
44.\" FOR ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
45.\"
46.\" Carnegie Mellon requests users of this software to return to
47.\"
48.\"  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
49.\"  School of Computer Science
50.\"  Carnegie Mellon University
51.\"  Pittsburgh PA 15213-3890
52.\"
53.\" any improvements or extensions that they make and grant Carnegie the
54.\" rights to redistribute these changes.
55.\"
56.Dd January 27, 2010
57.Dt RAIDCTL 8
58.Os
59.Sh NAME
60.Nm raidctl
61.Nd configuration utility for the RAIDframe disk driver
62.Sh SYNOPSIS
63.Nm
64.Op Fl v
65.Fl a Ar component Ar dev
66.Nm
67.Op Fl v
68.Fl A Op yes | no | root
69.Ar dev
70.Nm
71.Op Fl v
72.Fl B Ar dev
73.Nm
74.Op Fl v
75.Fl c Ar config_file Ar dev
76.Nm
77.Op Fl v
78.Fl C Ar config_file Ar dev
79.Nm
80.Op Fl v
81.Fl f Ar component Ar dev
82.Nm
83.Op Fl v
84.Fl F Ar component Ar dev
85.Nm
86.Op Fl v
87.Fl g Ar component Ar dev
88.Nm
89.Op Fl v
90.Fl G Ar dev
91.Nm
92.Op Fl v
93.Fl i Ar dev
94.Nm
95.Op Fl v
96.Fl I Ar serial_number Ar dev
97.Nm
98.Op Fl v
99.Fl m Ar dev
100.Nm
101.Op Fl v
102.Fl M
103.Oo yes | no | set
104.Ar params
105.Oc
106.Ar dev
107.Nm
108.Op Fl v
109.Fl p Ar dev
110.Nm
111.Op Fl v
112.Fl P Ar dev
113.Nm
114.Op Fl v
115.Fl r Ar component Ar dev
116.Nm
117.Op Fl v
118.Fl R Ar component Ar dev
119.Nm
120.Op Fl v
121.Fl s Ar dev
122.Nm
123.Op Fl v
124.Fl S Ar dev
125.Nm
126.Op Fl v
127.Fl u Ar dev
128.Sh DESCRIPTION
129.Nm
130is the user-land control program for
131.Xr raid 4 ,
132the RAIDframe disk device.
133.Nm
134is primarily used to dynamically configure and unconfigure RAIDframe disk
135devices.
136For more information about the RAIDframe disk device, see
137.Xr raid 4 .
138.Pp
139This document assumes the reader has at least rudimentary knowledge of
140RAID and RAID concepts.
141.Pp
142The command-line options for
143.Nm
144are as follows:
145.Bl -tag -width indent
146.It Fl a Ar component Ar dev
147Add
148.Ar component
149as a hot spare for the device
150.Ar dev .
151Component labels (which identify the location of a given
152component within a particular RAID set) are automatically added to the
153hot spare after it has been used and are not required for
154.Ar component
155before it is used.
156.It Fl A Ic yes Ar dev
157Make the RAID set auto-configurable.
158The RAID set will be automatically configured at boot
.Em before
160the root file system is mounted.
161Note that all components of the set must be of type
162.Dv RAID
163in the disklabel.
164.It Fl A Ic no Ar dev
165Turn off auto-configuration for the RAID set.
166.It Fl A Ic root Ar dev
167Make the RAID set auto-configurable, and also mark the set as being
168eligible to be the root partition.
169A RAID set configured this way will
.Em override
171the use of the boot disk as the root device.
172All components of the set must be of type
173.Dv RAID
174in the disklabel.
175Note that only certain architectures
176.Pq currently alpha, i386, pmax, sparc, sparc64, and vax
177support booting a kernel directly from a RAID set.
178.It Fl B Ar dev
179Initiate a copyback of reconstructed data from a spare disk to
180its original disk.
181This is performed after a component has failed,
182and the failed drive has been reconstructed onto a spare drive.
183.It Fl c Ar config_file Ar dev
184Configure the RAIDframe device
185.Ar dev
186according to the configuration given in
187.Ar config_file .
188A description of the contents of
189.Ar config_file
190is given later.
191.It Fl C Ar config_file Ar dev
192As for
193.Fl c ,
194but forces the configuration to take place.
195Fatal errors due to uninitialized components are ignored.
196This is required the first time a RAID set is configured.
197.It Fl f Ar component Ar dev
198This marks the specified
199.Ar component
200as having failed, but does not initiate a reconstruction of that component.
201.It Fl F Ar component Ar dev
202Fails the specified
203.Ar component
of the device, and immediately begins a reconstruction of the failed
disk onto an available hot spare.
This is one of the mechanisms used to start the reconstruction process
if a component has a hardware failure.
208.It Fl g Ar component Ar dev
209Get the component label for the specified component.
210.It Fl G Ar dev
211Generate the configuration of the RAIDframe device in a format suitable for
212use with the
213.Fl c
214or
215.Fl C
216options.
217.It Fl i Ar dev
218Initialize the RAID device.
219In particular, (re-)write the parity on the selected device.
220This
221.Em MUST
222be done for
223.Em all
224RAID sets before the RAID device is labeled and before
225file systems are created on the RAID device.
226.It Fl I Ar serial_number Ar dev
227Initialize the component labels on each component of the device.
228.Ar serial_number
229is used as one of the keys in determining whether a
particular set of components belongs to the same RAID set.
231While not strictly enforced, different serial numbers should be used for
232different RAID sets.
233This step
234.Em MUST
235be performed when a new RAID set is created.
236.It Fl m Ar dev
237Display status information about the parity map on the RAID set, if any.
238If used with
239.Fl v
240then the current contents of the parity map will be output (in
241hexadecimal format) as well.
242.It Fl M Ic yes Ar dev
243.\"XXX should there be a section with more info on the parity map feature?
244Enable the use of a parity map on the RAID set; this is the default,
245and greatly reduces the time taken to check parity after unclean
246shutdowns at the cost of some very slight overhead during normal
247operation.
248Changes to this setting will take effect the next time the set is
249configured.
250Note that RAID-0 sets, having no parity, will not use a parity map in
251any case.
252.It Fl M Ic no Ar dev
253Disable the use of a parity map on the RAID set; doing this is not
254recommended.
255This will take effect the next time the set is configured.
256.It Fl M Ic set Ar cooldown Ar tickms Ar regions Ar dev
257Alter the parameters of the parity map; parameters to leave unchanged
258can be given as 0, and trailing zeroes may be omitted.
259.\"XXX should this explanation be deferred to another section as well?
260The RAID set is divided into
261.Ar regions
262regions; each region is marked dirty for at most
263.Ar cooldown
264intervals of
265.Ar tickms
266milliseconds each after a write to it, and at least
267.Ar cooldown
268\- 1 such intervals.
269Changes to
270.Ar regions
take effect the next time the set is configured, while changes to the
other parameters are applied immediately.
The default parameters are expected to be reasonable for most workloads;
see the example following this list of options.
274.It Fl p Ar dev
275Check the status of the parity on the RAID set.
276Displays a status message,
277and returns successfully if the parity is up-to-date.
278.It Fl P Ar dev
279Check the status of the parity on the RAID set, and initialize
280(re-write) the parity if the parity is not known to be up-to-date.
281This is normally used after a system crash (and before a
282.Xr fsck 8 )
283to ensure the integrity of the parity.
284.It Fl r Ar component Ar dev
285Remove the spare disk specified by
286.Ar component
287from the set of available spare components.
288.It Fl R Ar component Ar dev
289Fails the specified
290.Ar component ,
291if necessary, and immediately begins a reconstruction back to
292.Ar component .
293This is useful for reconstructing back onto a component after
294it has been replaced following a failure.
295.It Fl s Ar dev
296Display the status of the RAIDframe device for each of the components
297and spares.
298.It Fl S Ar dev
299Check the status of parity re-writing, component reconstruction, and
300component copyback.
301The output indicates the amount of progress
302achieved in each of these areas.
303.It Fl u Ar dev
304Unconfigure the RAIDframe device.
305This does not remove any component labels or change any configuration
306settings (e.g. auto-configuration settings) for the RAID set.
307.It Fl v
308Be more verbose.
309For operations such as reconstructions, parity
310re-writing, and copybacks, provide a progress indicator.
311.El
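.Pp
For example, to change only the number of parity map regions to 2048,
leaving the other parameters at their current values, something like the
following could be used (the value 2048 is purely illustrative):
.Bd -literal -offset indent
raidctl -M set 0 0 2048 raid0
.Ed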
312.Pp
313The device used by
314.Nm
315is specified by
316.Ar dev .
317.Ar dev
318may be either the full name of the device, e.g.,
319.Pa /dev/rraid0d ,
320for the i386 architecture, or
321.Pa /dev/rraid0c
for many others, or simply
323.Pa raid0
324(for
325.Pa /dev/rraid0[cd] ) .
326It is recommended that the partitions used to represent the
RAID device not be used for file systems.
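.Pp
For example, on the i386 architecture the following two commands are
equivalent:
.Bd -literal -offset indent
raidctl -s raid0
raidctl -s /dev/rraid0d
.Ed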
328.Ss Configuration file
329The format of the configuration file is complex, and
330only an abbreviated treatment is given here.
331In the configuration files, a
332.Sq #
333indicates the beginning of a comment.
334.Pp
335There are 4 required sections of a configuration file, and 2
336optional sections.
337Each section begins with a
338.Sq START ,
339followed by the section name,
340and the configuration parameters associated with that section.
341The first section is the
342.Sq array
343section, and it specifies
344the number of rows, columns, and spare disks in the RAID set.
345For example:
346.Bd -literal -offset indent
347START array
3481 3 0
349.Ed
350.Pp
351indicates an array with 1 row, 3 columns, and 0 spare disks.
352Note that although multi-dimensional arrays may be specified, they are
353.Em NOT
354supported in the driver.
355.Pp
356The second section, the
357.Sq disks
358section, specifies the actual components of the device.
359For example:
360.Bd -literal -offset indent
361START disks
362/dev/sd0e
363/dev/sd1e
364/dev/sd2e
365.Ed
366.Pp
367specifies the three component disks to be used in the RAID device.
368If any of the specified drives cannot be found when the RAID device is
369configured, then they will be marked as
370.Sq failed ,
371and the system will operate in degraded mode.
372Note that it is
373.Em imperative
374that the order of the components in the configuration file does not
375change between configurations of a RAID device.
376Changing the order of the components will result in data loss
377if the set is configured with the
378.Fl C
379option.
380In normal circumstances, the RAID set will not configure if only
381.Fl c
382is specified, and the components are out-of-order.
383.Pp
384The next section, which is the
385.Sq spare
386section, is optional, and, if present, specifies the devices to be used as
387.Sq hot spares
388\(em devices which are on-line,
389but are not actively used by the RAID driver unless
one of the main components fails.
391A simple
392.Sq spare
393section might be:
394.Bd -literal -offset indent
395START spare
396/dev/sd3e
397.Ed
398.Pp
399for a configuration with a single spare component.
400If no spare drives are to be used in the configuration, then the
401.Sq spare
402section may be omitted.
403.Pp
404The next section is the
405.Sq layout
406section.
407This section describes the general layout parameters for the RAID device,
408and provides such information as
409sectors per stripe unit,
410stripe units per parity unit,
411stripe units per reconstruction unit,
412and the parity configuration to use.
413This section might look like:
414.Bd -literal -offset indent
415START layout
416# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
41732 1 1 5
418.Ed
419.Pp
420The sectors per stripe unit specifies, in blocks, the interleave
421factor; i.e., the number of contiguous sectors to be written to each
422component for a single stripe.
423Appropriate selection of this value (32 in this example)
424is the subject of much research in RAID architectures.
425The stripe units per parity unit and
426stripe units per reconstruction unit are normally each set to 1.
427While certain values above 1 are permitted, a discussion of valid
428values and the consequences of using anything other than 1 are outside
429the scope of this document.
430The last value in this section (5 in this example)
431indicates the parity configuration desired.
432Valid entries include:
433.Bl -tag -width inde
434.It 0
435RAID level 0.
436No parity, only simple striping.
437.It 1
438RAID level 1.
439Mirroring.
440The parity is the mirror.
441.It 4
442RAID level 4.
443Striping across components, with parity stored on the last component.
444.It 5
445RAID level 5.
446Striping across components, parity distributed across all components.
447.El
448.Pp
449There are other valid entries here, including those for Even-Odd
450parity, RAID level 5 with rotated sparing, Chained declustering,
451and Interleaved declustering, but as of this writing the code for
452those parity operations has not been tested with
453.Nx .
454.Pp
455The next required section is the
456.Sq queue
457section.
458This is most often specified as:
459.Bd -literal -offset indent
460START queue
461fifo 100
462.Ed
463.Pp
464where the queuing method is specified as fifo (first-in, first-out),
465and the size of the per-component queue is limited to 100 requests.
466Other queuing methods may also be specified, but a discussion of them
467is beyond the scope of this document.
468.Pp
469The final section, the
470.Sq debug
471section, is optional.
472For more details on this the reader is referred to
473the RAIDframe documentation discussed in the
474.Sx HISTORY
475section.
476.Pp
477See
478.Sx EXAMPLES
479for a more complete configuration file example.
480.Sh FILES
481.Bl -tag -width /dev/XXrXraidX -compact
482.It Pa /dev/{,r}raid*
483.Cm raid
484device special files.
485.El
486.Sh EXAMPLES
It is highly recommended that, before using the RAID driver for real
file systems, the system administrator(s) become quite familiar
489with the use of
490.Nm ,
491and that they understand how the component reconstruction process works.
492The examples in this section will focus on configuring a
493number of different RAID sets of varying degrees of redundancy.
494By working through these examples, administrators should be able to
495develop a good feel for how to configure a RAID set, and how to
496initiate reconstruction of failed components.
497.Pp
498In the following examples
499.Sq raid0
500will be used to denote the RAID device.
501Depending on the architecture,
502.Pa /dev/rraid0c
503or
504.Pa /dev/rraid0d
505may be used in place of
506.Pa raid0 .
507.Ss Initialization and Configuration
508The initial step in configuring a RAID set is to identify the components
509that will be used in the RAID set.
510All components should be the same size.
511Each component should have a disklabel type of
512.Dv FS_RAID ,
513and a typical disklabel entry for a RAID component might look like:
514.Bd -literal -offset indent
515f:  1800000  200495     RAID              # (Cyl.  405*- 4041*)
516.Ed
517.Pp
518While
519.Dv FS_BSDFFS
520will also work as the component type, the type
521.Dv FS_RAID
522is preferred for RAIDframe use, as it is required for features such as
523auto-configuration.
524As part of the initial configuration of each RAID set,
525each component will be given a
526.Sq component label .
527A
528.Sq component label
529contains important information about the component, including a
530user-specified serial number, the row and column of that component in
531the RAID set, the redundancy level of the RAID set, a
532.Sq modification counter ,
533and whether the parity information (if any) on that
534component is known to be correct.
535Component labels are an integral part of the RAID set,
536since they are used to ensure that components
537are configured in the correct order, and used to keep track of other
538vital information about the RAID set.
539Component labels are also required for the auto-detection
540and auto-configuration of RAID sets at boot time.
541For a component label to be considered valid, that
542particular component label must be in agreement with the other
543component labels in the set.
544For example, the serial number,
545.Sq modification counter ,
546number of rows and number of columns must all be in agreement.
547If any of these are different, then the component is
548not considered to be part of the set.
549See
550.Xr raid 4
551for more information about component labels.
552.Pp
553Once the components have been identified, and the disks have
554appropriate labels,
555.Nm
556is then used to configure the
557.Xr raid 4
558device.
559To configure the device, a configuration file which looks something like:
560.Bd -literal -offset indent
561START array
562# numRow numCol numSpare
5631 3 1
564
565START disks
566/dev/sd1e
567/dev/sd2e
568/dev/sd3e
569
570START spare
571/dev/sd4e
572
573START layout
574# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
57532 1 1 5
576
577START queue
578fifo 100
579.Ed
580.Pp
is created.
582The above configuration file specifies a RAID 5
583set consisting of the components
584.Pa /dev/sd1e ,
585.Pa /dev/sd2e ,
586and
587.Pa /dev/sd3e ,
588with
589.Pa /dev/sd4e
590available as a
591.Sq hot spare
592in case one of the three main drives should fail.
593A RAID 0 set would be specified in a similar way:
594.Bd -literal -offset indent
595START array
596# numRow numCol numSpare
5971 4 0
598
599START disks
600/dev/sd10e
601/dev/sd11e
602/dev/sd12e
603/dev/sd13e
604
605START layout
606# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_0
60764 1 1 0
608
609START queue
610fifo 100
611.Ed
612.Pp
613In this case, devices
614.Pa /dev/sd10e ,
615.Pa /dev/sd11e ,
616.Pa /dev/sd12e ,
617and
618.Pa /dev/sd13e
619are the components that make up this RAID set.
620Note that there are no hot spares for a RAID 0 set,
621since there is no way to recover data if any of the components fail.
622.Pp
623For a RAID 1 (mirror) set, the following configuration might be used:
624.Bd -literal -offset indent
625START array
626# numRow numCol numSpare
6271 2 0
628
629START disks
630/dev/sd20e
631/dev/sd21e
632
633START layout
634# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_1
635128 1 1 1
636
637START queue
638fifo 100
639.Ed
640.Pp
641In this case,
642.Pa /dev/sd20e
643and
644.Pa /dev/sd21e
645are the two components of the mirror set.
646While no hot spares have been specified in this
647configuration, they easily could be, just as they were specified in
648the RAID 5 case above.
649Note as well that RAID 1 sets are currently limited to only 2 components.
650At present, n-way mirroring is not possible.
651.Pp
652The first time a RAID set is configured, the
653.Fl C
654option must be used:
655.Bd -literal -offset indent
656raidctl -C raid0.conf raid0
657.Ed
658.Pp
659where
660.Pa raid0.conf
661is the name of the RAID configuration file.
662The
663.Fl C
option forces the configuration to succeed, even if any of the component
665labels are incorrect.
666The
667.Fl C
option should not be used lightly in situations other than initial
configurations; if the system refuses to configure a RAID set, there is
probably a very good reason for it.
672After the initial configuration is done (and
673appropriate component labels are added with the
674.Fl I
675option) then raid0 can be configured normally with:
676.Bd -literal -offset indent
677raidctl -c raid0.conf raid0
678.Ed
679.Pp
680When the RAID set is configured for the first time, it is
681necessary to initialize the component labels, and to initialize the
682parity on the RAID set.
683Initializing the component labels is done with:
684.Bd -literal -offset indent
685raidctl -I 112341 raid0
686.Ed
687.Pp
688where
689.Sq 112341
690is a user-specified serial number for the RAID set.
691This initialization step is
692.Em required
693for all RAID sets.
694As well, using different serial numbers between RAID sets is
695.Em strongly encouraged ,
696as using the same serial number for all RAID sets will only serve to
697decrease the usefulness of the component label checking.
698.Pp
699Initializing the RAID set is done via the
700.Fl i
701option.
702This initialization
703.Em MUST
704be done for
705.Em all
706RAID sets, since among other things it verifies that the parity (if
707any) on the RAID set is correct.
708Since this initialization may be quite time-consuming, the
709.Fl v
option may also be used in conjunction with
711.Fl i :
712.Bd -literal -offset indent
713raidctl -iv raid0
714.Ed
715.Pp
716This will give more verbose output on the
717status of the initialization:
718.Bd -literal -offset indent
719Initiating re-write of parity
720Parity Re-write status:
721 10% |****                                   | ETA:    06:03 /
722.Ed
723.Pp
724The output provides a
725.Sq Percent Complete
726in both a numeric and graphical format, as well as an estimated time
727to completion of the operation.
728.Pp
729Since it is the parity that provides the
730.Sq redundancy
part of RAID, it is critical that the parity be kept correct as much as possible.
732If the parity is not correct, then there is no
733guarantee that data will not be lost if a component fails.
734.Pp
735Once the parity is known to be correct, it is then safe to perform
736.Xr disklabel 8 ,
737.Xr newfs 8 ,
738or
739.Xr fsck 8
740on the device or its file systems, and then to mount the file systems
741for use.
742.Pp
743Under certain circumstances (e.g., the additional component has not
744arrived, or data is being migrated off of a disk destined to become a
745component) it may be desirable to configure a RAID 1 set with only
746a single component.
747This can be achieved by using the word
748.Dq absent
749to indicate that a particular component is not present.
750In the following:
751.Bd -literal -offset indent
752START array
753# numRow numCol numSpare
7541 2 0
755
756START disks
757absent
758/dev/sd0e
759
760START layout
761# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_1
762128 1 1 1
763
764START queue
765fifo 100
766.Ed
767.Pp
768.Pa /dev/sd0e
769is the real component, and will be the second disk of a RAID 1 set.
770The first component is simply marked as being absent.
771Configuration (using
772.Fl C
773and
774.Fl I Ar 12345
775as above) proceeds normally, but initialization of the RAID set will
776have to wait until all physical components are present.
777After configuration, this set can be used normally, but will be operating
778in degraded mode.
779Once a second physical component is obtained, it can be hot-added,
780the existing data mirrored, and normal operation resumed.
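.Pp
For example, assuming the new disk appears as
.Pa /dev/sd1e
and the absent slot shows up in the status output as
.Sq component0 ,
the following would add the new disk as a hot spare and then reconstruct
the mirror onto it:
.Bd -literal -offset indent
raidctl -a /dev/sd1e raid0
raidctl -F component0 raid0
.Ed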
781.Pp
782The size of the resulting RAID set will depend on the number of data
783components in the set.
784Space is automatically reserved for the component labels, and
785the actual amount of space used
786for data on a component will be rounded down to the largest possible
787multiple of the sectors per stripe unit (sectPerSU) value.
788Thus, the amount of space provided by the RAID set will be less
789than the sum of the size of the components.
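.Pp
As a purely illustrative calculation: suppose each component of the RAID 5
set above provides 1799936 sectors of data space (after the component
label is reserved) and sectPerSU is 32.
Since 1799936 is already a multiple of 32, no further rounding occurs, and
with one component's worth of space consumed by parity the set provides
approximately:
.Bd -literal -offset indent
(3 - 1) * 1799936 = 3599872 sectors of usable space
.Ed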
790.Ss Maintenance of the RAID set
791After the parity has been initialized for the first time, the command:
792.Bd -literal -offset indent
793raidctl -p raid0
794.Ed
795.Pp
796can be used to check the current status of the parity.
To check the parity and rebuild it if necessary (for example,
after an unclean shutdown) the command:
799.Bd -literal -offset indent
800raidctl -P raid0
801.Ed
802.Pp
803is used.
804Note that re-writing the parity can be done while
805other operations on the RAID set are taking place (e.g., while doing a
806.Xr fsck 8
807on a file system on the RAID set).
808However: for maximum effectiveness of the RAID set, the parity should be
809known to be correct before any data on the set is modified.
810.Pp
811To see how the RAID set is doing, the following command can be used to
812show the RAID set's status:
813.Bd -literal -offset indent
814raidctl -s raid0
815.Ed
816.Pp
817The output will look something like:
818.Bd -literal -offset indent
819Components:
820           /dev/sd1e: optimal
821           /dev/sd2e: optimal
822           /dev/sd3e: optimal
823Spares:
824           /dev/sd4e: spare
825Component label for /dev/sd1e:
826   Row: 0 Column: 0 Num Rows: 1 Num Columns: 3
827   Version: 2 Serial Number: 13432 Mod Counter: 65
828   Clean: No Status: 0
829   sectPerSU: 32 SUsPerPU: 1 SUsPerRU: 1
830   RAID Level: 5  blocksize: 512 numBlocks: 1799936
831   Autoconfig: No
832   Last configured as: raid0
833Component label for /dev/sd2e:
834   Row: 0 Column: 1 Num Rows: 1 Num Columns: 3
835   Version: 2 Serial Number: 13432 Mod Counter: 65
836   Clean: No Status: 0
837   sectPerSU: 32 SUsPerPU: 1 SUsPerRU: 1
838   RAID Level: 5  blocksize: 512 numBlocks: 1799936
839   Autoconfig: No
840   Last configured as: raid0
841Component label for /dev/sd3e:
842   Row: 0 Column: 2 Num Rows: 1 Num Columns: 3
843   Version: 2 Serial Number: 13432 Mod Counter: 65
844   Clean: No Status: 0
845   sectPerSU: 32 SUsPerPU: 1 SUsPerRU: 1
846   RAID Level: 5  blocksize: 512 numBlocks: 1799936
847   Autoconfig: No
848   Last configured as: raid0
849Parity status: clean
850Reconstruction is 100% complete.
851Parity Re-write is 100% complete.
852Copyback is 100% complete.
853.Ed
854.Pp
855This indicates that all is well with the RAID set.
856Of importance here are the component lines which read
857.Sq optimal ,
858and the
859.Sq Parity status
860line.
861.Sq Parity status: clean
862indicates that the parity is up-to-date for this RAID set,
863whether or not the RAID set is in redundant or degraded mode.
864.Sq Parity status: DIRTY
865indicates that it is not known if the parity information is
866consistent with the data, and that the parity information needs
867to be checked.
868Note that if there are file systems open on the RAID set,
869the individual components will not be
870.Sq clean
871but the set as a whole can still be clean.
872.Pp
873To check the component label of
874.Pa /dev/sd1e ,
875the following is used:
876.Bd -literal -offset indent
877raidctl -g /dev/sd1e raid0
878.Ed
879.Pp
880The output of this command will look something like:
881.Bd -literal -offset indent
882Component label for /dev/sd1e:
883   Row: 0 Column: 0 Num Rows: 1 Num Columns: 3
884   Version: 2 Serial Number: 13432 Mod Counter: 65
885   Clean: No Status: 0
886   sectPerSU: 32 SUsPerPU: 1 SUsPerRU: 1
887   RAID Level: 5  blocksize: 512 numBlocks: 1799936
888   Autoconfig: No
889   Last configured as: raid0
890.Ed
891.Ss Dealing with Component Failures
892If for some reason
893(perhaps to test reconstruction) it is necessary to pretend a drive
894has failed, the following will perform that function:
895.Bd -literal -offset indent
896raidctl -f /dev/sd2e raid0
897.Ed
898.Pp
899The system will then be performing all operations in degraded mode,
900where missing data is re-computed from existing data and the parity.
901In this case, obtaining the status of raid0 will return (in part):
902.Bd -literal -offset indent
903Components:
904           /dev/sd1e: optimal
905           /dev/sd2e: failed
906           /dev/sd3e: optimal
907Spares:
908           /dev/sd4e: spare
909.Ed
910.Pp
911Note that with the use of
912.Fl f
913a reconstruction has not been started.
914To both fail the disk and start a reconstruction, the
915.Fl F
916option must be used:
917.Bd -literal -offset indent
918raidctl -F /dev/sd2e raid0
919.Ed
920.Pp
921The
922.Fl f
923option may be used first, and then the
924.Fl F
925option used later, on the same disk, if desired.
926Immediately after the reconstruction is started, the status will report:
927.Bd -literal -offset indent
928Components:
929           /dev/sd1e: optimal
930           /dev/sd2e: reconstructing
931           /dev/sd3e: optimal
932Spares:
933           /dev/sd4e: used_spare
934[...]
935Parity status: clean
936Reconstruction is 10% complete.
937Parity Re-write is 100% complete.
938Copyback is 100% complete.
939.Ed
940.Pp
941This indicates that a reconstruction is in progress.
942To find out how the reconstruction is progressing the
943.Fl S
944option may be used.
945This will indicate the progress in terms of the
946percentage of the reconstruction that is completed.
947When the reconstruction is finished the
948.Fl s
949option will show:
950.Bd -literal -offset indent
951Components:
952           /dev/sd1e: optimal
953           /dev/sd2e: spared
954           /dev/sd3e: optimal
955Spares:
956           /dev/sd4e: used_spare
957[...]
958Parity status: clean
959Reconstruction is 100% complete.
960Parity Re-write is 100% complete.
961Copyback is 100% complete.
962.Ed
963.Pp
964At this point there are at least two options.
965First, if
966.Pa /dev/sd2e
967is known to be good (i.e., the failure was either caused by
968.Fl f
969or
970.Fl F ,
971or the failed disk was replaced), then a copyback of the data can
972be initiated with the
973.Fl B
974option.
975In this example, this would copy the entire contents of
976.Pa /dev/sd4e
977to
978.Pa /dev/sd2e .
979Once the copyback procedure is complete, the
980status of the device would be (in part):
981.Bd -literal -offset indent
982Components:
983           /dev/sd1e: optimal
984           /dev/sd2e: optimal
985           /dev/sd3e: optimal
986Spares:
987           /dev/sd4e: spare
988.Ed
989.Pp
990and the system is back to normal operation.
991.Pp
992The second option after the reconstruction is to simply use
993.Pa /dev/sd4e
994in place of
995.Pa /dev/sd2e
996in the configuration file.
997For example, the configuration file (in part) might now look like:
998.Bd -literal -offset indent
999START array
10001 3 0
1001
1002START disks
1003/dev/sd1e
1004/dev/sd4e
1005/dev/sd3e
1006.Ed
1007.Pp
1008This can be done as
1009.Pa /dev/sd4e
1010is completely interchangeable with
1011.Pa /dev/sd2e
1012at this point.
1013Note that extreme care must be taken when
1014changing the order of the drives in a configuration.
1015This is one of the few instances where the devices and/or
1016their orderings can be changed without loss of data!
1017In general, the ordering of components in a configuration file should
1018.Em never
1019be changed.
1020.Pp
1021If a component fails and there are no hot spares
1022available on-line, the status of the RAID set might (in part) look like:
1023.Bd -literal -offset indent
1024Components:
1025           /dev/sd1e: optimal
1026           /dev/sd2e: failed
1027           /dev/sd3e: optimal
1028No spares.
1029.Ed
1030.Pp
1031In this case there are a number of options.
1032The first option is to add a hot spare using:
1033.Bd -literal -offset indent
1034raidctl -a /dev/sd4e raid0
1035.Ed
1036.Pp
1037After the hot add, the status would then be:
1038.Bd -literal -offset indent
1039Components:
1040           /dev/sd1e: optimal
1041           /dev/sd2e: failed
1042           /dev/sd3e: optimal
1043Spares:
1044           /dev/sd4e: spare
1045.Ed
1046.Pp
1047Reconstruction could then take place using
1048.Fl F
as described above.
1050.Pp
1051A second option is to rebuild directly onto
1052.Pa /dev/sd2e .
1053Once the disk containing
1054.Pa /dev/sd2e
1055has been replaced, one can simply use:
1056.Bd -literal -offset indent
1057raidctl -R /dev/sd2e raid0
1058.Ed
1059.Pp
1060to rebuild the
1061.Pa /dev/sd2e
1062component.
1063As the rebuilding is in progress, the status will be:
1064.Bd -literal -offset indent
1065Components:
1066           /dev/sd1e: optimal
1067           /dev/sd2e: reconstructing
1068           /dev/sd3e: optimal
1069No spares.
1070.Ed
1071.Pp
1072and when completed, will be:
1073.Bd -literal -offset indent
1074Components:
1075           /dev/sd1e: optimal
1076           /dev/sd2e: optimal
1077           /dev/sd3e: optimal
1078No spares.
1079.Ed
1080.Pp
1081In circumstances where a particular component is completely
1082unavailable after a reboot, a special component name will be used to
1083indicate the missing component.
1084For example:
1085.Bd -literal -offset indent
1086Components:
1087           /dev/sd2e: optimal
1088          component1: failed
1089No spares.
1090.Ed
1091.Pp
1092indicates that the second component of this RAID set was not detected
1093at all by the auto-configuration code.
1094The name
1095.Sq component1
1096can be used anywhere a normal component name would be used.
1097For example, to add a hot spare to the above set, and rebuild to that hot
1098spare, the following could be done:
1099.Bd -literal -offset indent
1100raidctl -a /dev/sd3e raid0
1101raidctl -F component1 raid0
1102.Ed
1103.Pp
1104at which point the data missing from
1105.Sq component1
1106would be reconstructed onto
1107.Pa /dev/sd3e .
1108.Pp
1109When more than one component is marked as
1110.Sq failed
1111due to a non-component hardware failure (e.g., loss of power to two
1112components, adapter problems, termination problems, or cabling issues) it
1113is quite possible to recover the data on the RAID set.
1114The first thing to be aware of is that the first disk to fail will
1115almost certainly be out-of-sync with the remainder of the array.
1116If any IO was performed between the time the first component is considered
1117.Sq failed
1118and when the second component is considered
1119.Sq failed ,
1120then the first component to fail will
1121.Em not
1122contain correct data, and should be ignored.
1123When the second component is marked as failed, however, the RAID device will
1124(currently) panic the system.
1125At this point the data on the RAID set
1126(not including the first failed component) is still self consistent,
1127and will be in no worse state of repair than had the power gone out in
1128the middle of a write to a file system on a non-RAID device.
1129The problem, however, is that the component labels may now have 3 different
1130.Sq modification counters
1131(one value on the first component that failed, one value on the second
1132component that failed, and a third value on the remaining components).
1133In such a situation, the RAID set will not autoconfigure,
1134and can only be forcibly re-configured
1135with the
1136.Fl C
1137option.
1138To recover the RAID set, one must first remedy whatever physical
1139problem caused the multiple-component failure.
1140After that is done, the RAID set can be restored by forcibly
1141configuring the raid set
1142.Em without
1143the component that failed first.
1144For example, if
1145.Pa /dev/sd1e
1146and
1147.Pa /dev/sd2e
1148fail (in that order) in a RAID set of the following configuration:
1149.Bd -literal -offset indent
1150START array
11511 4 0
1152
1153START disks
1154/dev/sd1e
1155/dev/sd2e
1156/dev/sd3e
1157/dev/sd4e
1158
1159START layout
1160# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
116164 1 1 5
1162
1163START queue
1164fifo 100
1166.Ed
1167.Pp
1168then the following configuration (say "recover_raid0.conf")
1169.Bd -literal -offset indent
1170START array
11711 4 0
1172
1173START disks
1174absent
1175/dev/sd2e
1176/dev/sd3e
1177/dev/sd4e
1178
1179START layout
1180# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
118164 1 1 5
1182
1183START queue
1184fifo 100
1185.Ed
1186.Pp
1187can be used with
1188.Bd -literal -offset indent
1189raidctl -C recover_raid0.conf raid0
1190.Ed
1191.Pp
1192to force the configuration of raid0.
1193A
1194.Bd -literal -offset indent
1195raidctl -I 12345 raid0
1196.Ed
1197.Pp
1198will be required in order to synchronize the component labels.
1199At this point the file systems on the RAID set can then be checked and
1200corrected.
1201To complete the re-construction of the RAID set,
1202.Pa /dev/sd1e
1203is simply hot-added back into the array, and reconstructed
1204as described earlier.
1205.Ss RAID on RAID
1206RAID sets can be layered to create more complex and much larger RAID sets.
1207A RAID 0 set, for example, could be constructed from four RAID 5 sets.
1208The following configuration file shows such a setup:
1209.Bd -literal -offset indent
1210START array
1211# numRow numCol numSpare
12121 4 0
1213
1214START disks
1215/dev/raid1e
1216/dev/raid2e
1217/dev/raid3e
1218/dev/raid4e
1219
1220START layout
1221# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_0
1222128 1 1 0
1223
1224START queue
1225fifo 100
1226.Ed
1227.Pp
1228A similar configuration file might be used for a RAID 0 set
1229constructed from components on RAID 1 sets.
1230In such a configuration, the mirroring provides a high degree
1231of redundancy, while the striping provides additional speed benefits.
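.Pp
As an illustrative sketch (component names hypothetical), each of the
underlying RAID 1 sets would be described by its own configuration file,
such as:
.Bd -literal -offset indent
START array
# numRow numCol numSpare
1 2 0

START disks
/dev/sd0e
/dev/sd1e

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_1
128 1 1 1

START queue
fifo 100
.Ed
.Pp
with raid1 through raid4 each configured in this way before the RAID 0
set is configured on top of
.Pa /dev/raid1e
through
.Pa /dev/raid4e .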
1232.Ss Auto-configuration and Root on RAID
1233RAID sets can also be auto-configured at boot.
1234To make a set auto-configurable,
1235simply prepare the RAID set as above, and then do a:
1236.Bd -literal -offset indent
1237raidctl -A yes raid0
1238.Ed
1239.Pp
1240to turn on auto-configuration for that set.
1241To turn off auto-configuration, use:
1242.Bd -literal -offset indent
1243raidctl -A no raid0
1244.Ed
1245.Pp
1246RAID sets which are auto-configurable will be configured before the
1247root file system is mounted.
1248These RAID sets are thus available for
1249use as a root file system, or for any other file system.
1250A primary advantage of using the auto-configuration is that RAID components
1251become more independent of the disks they reside on.
1252For example, SCSI ID's can change, but auto-configured sets will always be
1253configured correctly, even if the SCSI ID's of the component disks
1254have become scrambled.
1255.Pp
1256Having a system's root file system
1257.Pq Pa /
1258on a RAID set is also allowed, with the
1259.Sq a
1260partition of such a RAID set being used for
1261.Pa / .
1262To use raid0a as the root file system, simply use:
1263.Bd -literal -offset indent
1264raidctl -A root raid0
1265.Ed
1266.Pp
To return raid0 to being an ordinary auto-configuring set, simply use the
.Fl A Ic yes
arguments.
1270.Pp
1271Note that kernels can only be directly read from RAID 1 components on
1272architectures that support that
1273.Pq currently alpha, i386, pmax, sparc, sparc64, and vax .
1274On those architectures, the
1275.Dv FS_RAID
1276file system is recognized by the bootblocks, and will properly load the
1277kernel directly from a RAID 1 component.
1278For other architectures, or to support the root file system
1279on other RAID sets, some other mechanism must be used to get a kernel booting.
1280For example, a small partition containing only the secondary boot-blocks
1281and an alternate kernel (or two) could be used.
Once a kernel is booting, however, and an auto-configuring RAID set is
1283found that is eligible to be root, then that RAID set will be
1284auto-configured and used as the root device.
1285If two or more RAID sets claim to be root devices, then the
1286user will be prompted to select the root device.
1287At this time, RAID 0, 1, 4, and 5 sets are all supported as root devices.
1288.Pp
1289A typical RAID 1 setup with root on RAID might be as follows:
1290.Bl -enum
1291.It
1292wd0a - a small partition, which contains a complete, bootable, basic
1293.Nx
1294installation.
1295.It
1296wd1a - also contains a complete, bootable, basic
1297.Nx
1298installation.
1299.It
1300wd0e and wd1e - a RAID 1 set, raid0, used for the root file system.
1301.It
1302wd0f and wd1f - a RAID 1 set, raid1, which will be used only for
1303swap space.
1304.It
1305wd0g and wd1g - a RAID 1 set, raid2, used for
1306.Pa /usr ,
1307.Pa /home ,
1308or other data, if desired.
1309.It
1310wd0h and wd1h - a RAID 1 set, raid3, if desired.
1311.El
1312.Pp
1313RAID sets raid0, raid1, and raid2 are all marked as auto-configurable.
1314raid0 is marked as being a root file system.
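Such marking might be accomplished, for example, with:
.Bd -literal -offset indent
raidctl -A root raid0
raidctl -A yes raid1
raidctl -A yes raid2
.Ed
.Pp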
1315When new kernels are installed, the kernel is not only copied to
1316.Pa / ,
1317but also to wd0a and wd1a.
1318The kernel on wd0a is required, since that
1319is the kernel the system boots from.
1320The kernel on wd1a is also
1321required, since that will be the kernel used should wd0 fail.
1322The important point here is to have redundant copies of the kernel
available, in the event that one of the drives fails.
1324.Pp
1325There is no requirement that the root file system be on the same disk
1326as the kernel.
For example, it is perfectly fine to obtain the kernel from wd0a while
using sd0e and sd1e for raid0 and the root file system.
1329It
1330.Em is
1331critical, however, that there be multiple kernels available, in the
1332event of media failure.
1333.Pp
1334Multi-layered RAID devices (such as a RAID 0 set made
1335up of RAID 1 sets) are
1336.Em not
1337supported as root devices or auto-configurable devices at this point.
1338(Multi-layered RAID devices
1339.Em are
1340supported in general, however, as mentioned earlier.)
1341Note that in order to enable component auto-detection and
1342auto-configuration of RAID devices, the line:
1343.Bd -literal -offset indent
1344options    RAID_AUTOCONFIG
1345.Ed
1346.Pp
1347must be in the kernel configuration file.
1348See
1349.Xr raid 4
1350for more details.
1351.Ss Swapping on RAID
1352A RAID device can be used as a swap device.
1353In order to ensure that a RAID device used as a swap device
is correctly unconfigured when the system is shut down or rebooted,
1355it is recommended that the line
1356.Bd -literal -offset indent
1357swapoff=YES
1358.Ed
1359.Pp
1360be added to
1361.Pa /etc/rc.conf .
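.Pp
For example, assuming the raid1 set described above is dedicated to swap
and its
.Sq b
partition holds the swap space, the corresponding
.Xr fstab 5
entry might look like:
.Bd -literal -offset indent
/dev/raid1b none swap sw 0 0
.Ed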
1362.Ss Unconfiguration
1363The final operation performed by
1364.Nm
1365is to unconfigure a
1366.Xr raid 4
1367device.
1368This is accomplished via a simple:
1369.Bd -literal -offset indent
1370raidctl -u raid0
1371.Ed
1372.Pp
1373at which point the device is ready to be reconfigured.
1374.Ss Performance Tuning
1375Selection of the various parameter values which result in the best
1376performance can be quite tricky, and often requires a bit of
1377trial-and-error to get those values most appropriate for a given system.
1378A whole range of factors come into play, including:
1379.Bl -enum
1380.It
1381Types of components (e.g., SCSI vs. IDE) and their bandwidth
1382.It
1383Types of controller cards and their bandwidth
1384.It
1385Distribution of components among controllers
1386.It
1387IO bandwidth
1388.It
File system access patterns
1390.It
1391CPU speed
1392.El
1393.Pp
1394As with most performance tuning, benchmarking under real-life loads
1395may be the only way to measure expected performance.
1396Understanding some of the underlying technology is also useful in tuning.
1397The goal of this section is to provide pointers to those parameters which may
1398make significant differences in performance.
1399.Pp
1400For a RAID 1 set, a SectPerSU value of 64 or 128 is typically sufficient.
1401Since data in a RAID 1 set is arranged in a linear
1402fashion on each component, selecting an appropriate stripe size is
1403somewhat less critical than it is for a RAID 5 set.
1404However: a stripe size that is too small will cause large IO's to be
1405broken up into a number of smaller ones, hurting performance.
1406At the same time, a large stripe size may cause problems with
1407concurrent accesses to stripes, which may also affect performance.
1408Thus values in the range of 32 to 128 are often the most effective.
1409.Pp
1410Tuning RAID 5 sets is trickier.
1411In the best case, IO is presented to the RAID set one stripe at a time.
1412Since the entire stripe is available at the beginning of the IO,
1413the parity of that stripe can be calculated before the stripe is written,
1414and then the stripe data and parity can be written in parallel.
1415When the amount of data being written is less than a full stripe worth, the
1416.Sq small write
1417problem occurs.
1418Since a
1419.Sq small write
1420means only a portion of the stripe on the components is going to
1421change, the data (and parity) on the components must be updated
1422slightly differently.
1423First, the
1424.Sq old parity
1425and
1426.Sq old data
1427must be read from the components.
1428Then the new parity is constructed,
1429using the new data to be written, and the old data and old parity.
1430Finally, the new data and new parity are written.
1431All this extra data shuffling results in a serious loss of performance,
1432and is typically 2 to 4 times slower than a full stripe write (or read).
1433To combat this problem in the real world, it may be useful
1434to ensure that stripe sizes are small enough that a
1435.Sq large IO
1436from the system will use exactly one large stripe write.
1437As is seen later, there are some file system dependencies
1438which may come into play here as well.
1439.Pp
1440Since the size of a
1441.Sq large IO
1442is often (currently) only 32K or 64K, on a 5-drive RAID 5 set it may
1443be desirable to select a SectPerSU value of 16 blocks (8K) or 32
1444blocks (16K).
Since there are 4 data stripe units per stripe, the maximum
1446data per stripe is 64 blocks (32K) or 128 blocks (64K).
1447Again, empirical measurement will provide the best indicators of which
1448values will yield better performance.
1449.Pp
1450The parameters used for the file system are also critical to good performance.
1451For
1452.Xr newfs 8 ,
1453for example, increasing the block size to 32K or 64K may improve
1454performance dramatically.
1455As well, changing the cylinders-per-group
1456parameter from 16 to 32 or higher is often not only necessary for
1457larger file systems, but may also have positive performance implications.
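.Pp
For example, to create a file system with 32 kilobyte blocks on the RAID
set used earlier, something like the following might be used (the exact
option names and values should be checked against
.Xr newfs 8
on the system at hand):
.Bd -literal -offset indent
newfs -b 32768 -f 4096 /dev/rraid0e
.Ed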
1458.Ss Summary
Despite the length of this man-page, configuring a RAID set is a
relatively straightforward process.
All that is needed are the following steps:
1462.Bl -enum
1463.It
1464Use
1465.Xr disklabel 8
1466to create the components (of type RAID).
1467.It
1468Construct a RAID configuration file: e.g.,
1469.Pa raid0.conf
1470.It
1471Configure the RAID set with:
1472.Bd -literal -offset indent
1473raidctl -C raid0.conf raid0
1474.Ed
1475.Pp
1476.It
1477Initialize the component labels with:
1478.Bd -literal -offset indent
1479raidctl -I 123456 raid0
1480.Ed
1481.Pp
1482.It
1483Initialize other important parts of the set with:
1484.Bd -literal -offset indent
1485raidctl -i raid0
1486.Ed
1487.Pp
1488.It
1489Get the default label for the RAID set:
1490.Bd -literal -offset indent
1491disklabel raid0 \*[Gt] /tmp/label
1492.Ed
1493.Pp
1494.It
1495Edit the label:
1496.Bd -literal -offset indent
1497vi /tmp/label
1498.Ed
1499.Pp
1500.It
1501Put the new label on the RAID set:
1502.Bd -literal -offset indent
1503disklabel -R -r raid0 /tmp/label
1504.Ed
1505.Pp
1506.It
1507Create the file system:
1508.Bd -literal -offset indent
1509newfs /dev/rraid0e
1510.Ed
1511.Pp
1512.It
1513Mount the file system:
1514.Bd -literal -offset indent
1515mount /dev/raid0e /mnt
1516.Ed
1517.Pp
1518.It
1519Use:
1520.Bd -literal -offset indent
1521raidctl -c raid0.conf raid0
1522.Ed
1523.Pp
to re-configure the RAID set the next time it is needed, or put
1525.Pa raid0.conf
1526into
1527.Pa /etc
1528where it will automatically be started by the
1529.Pa /etc/rc.d
1530scripts.
1531.El
1532.Sh SEE ALSO
1533.Xr ccd 4 ,
1534.Xr raid 4 ,
1535.Xr rc 8
1536.Sh HISTORY
1537RAIDframe is a framework for rapid prototyping of RAID structures
1538developed by the folks at the Parallel Data Laboratory at Carnegie
1539Mellon University (CMU).
1540A more complete description of the internals and functionality of
1541RAIDframe is found in the paper "RAIDframe: A Rapid Prototyping Tool
1542for RAID Systems", by William V. Courtright II, Garth Gibson, Mark
1543Holland, LeAnn Neal Reilly, and Jim Zelenka, and published by the
1544Parallel Data Laboratory of Carnegie Mellon University.
1545.Pp
1546The
1547.Nm
1548command first appeared as a program in CMU's RAIDframe v1.1 distribution.
1549This version of
1550.Nm
1551is a complete re-write, and first appeared in
1552.Nx 1.4 .
1553.Sh COPYRIGHT
1554.Bd -literal
1555The RAIDframe Copyright is as follows:
1556
1557Copyright (c) 1994-1996 Carnegie-Mellon University.
1558All rights reserved.
1559
1560Permission to use, copy, modify and distribute this software and
1561its documentation is hereby granted, provided that both the copyright
1562notice and this permission notice appear in all copies of the
1563software, derivative works or modified versions, and any portions
1564thereof, and that both notices appear in supporting documentation.
1565
1566CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
1567CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND
1568FOR ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
1569
1570Carnegie Mellon requests users of this software to return to
1571
1572 Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
1573 School of Computer Science
1574 Carnegie Mellon University
1575 Pittsburgh PA 15213-3890
1576
1577any improvements or extensions that they make and grant Carnegie the
1578rights to redistribute these changes.
1579.Ed
1580.Sh WARNINGS
1581Certain RAID levels (1, 4, 5, 6, and others) can protect against some
1582data loss due to component failure.
However, the loss of two components of a RAID 4 or 5 system,
1584or the loss of a single component of a RAID 0 system will
1585result in the entire file system being lost.
1586RAID is
1587.Em NOT
1588a substitute for good backup practices.
1589.Pp
1590Recomputation of parity
1591.Em MUST
1592be performed whenever there is a chance that it may have been compromised.
1593This includes after system crashes, or before a RAID
1594device has been used for the first time.
1595Failure to keep parity correct will be catastrophic should a
1596component ever fail \(em it is better to use RAID 0 and get the
1597additional space and speed, than it is to use parity, but
1598not keep the parity correct.
1599At least with RAID 0 there is no perception of increased data security.
1600.Sh BUGS
1601Hot-spare removal is currently not available.
1602