.\"	$OpenBSD: softraid.4,v 1.46 2021/12/16 17:07:56 tj Exp $
.\"
.\" Copyright (c) 2007 Todd T. Fries   <todd@OpenBSD.org>
.\" Copyright (c) 2007 Marco Peereboom <marco@OpenBSD.org>
.\"
.\" Permission to use, copy, modify, and distribute this software for any
.\" purpose with or without fee is hereby granted, provided that the above
.\" copyright notice and this permission notice appear in all copies.
.\"
.\" THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
.\" WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
.\" MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
.\" ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
.\" WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
.\" ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
.\" OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
.\"
.Dd $Mdocdate: December 16 2021 $
.Dt SOFTRAID 4
.Os
.Sh NAME
.Nm softraid
.Nd software RAID
.Sh SYNOPSIS
.Cd "softraid0 at root"
.Sh DESCRIPTION
The
.Nm
device emulates a Host Bus Adapter (HBA) that provides RAID and other I/O
related services.
The
.Nm
device provides a scaffold to implement more complex I/O transformation
disciplines.
For example, one can tie chunks together into a mirroring discipline.
There really is no limit on what type of discipline one can write as long
as it fits the SCSI model.
.Pp
.Nm
supports a number of
.Em disciplines .
A discipline is a collection of functions
that provides specific I/O functionality.
This includes I/O path, bring-up, failure recovery, and statistical
information gathering.
Essentially a discipline is a lower
level driver that provides the I/O transformation for the softraid
device.
.Pp
A
.Em volume
is a virtual disk device that is made up of a collection of chunks.
.Pp
A
.Em chunk
is a partition or storage area of fstype
.Dq RAID .
.Xr disklabel 8
is used to alter the fstype.
.Pp
Currently
.Nm
supports the following disciplines:
.Bl -ohang -offset indent
.It RAID 0
A
.Em striping
discipline.
It segments data over a number of chunks to increase performance.
RAID 0 does not provide redundancy; the failure of any chunk results in
data loss.
.It RAID 1
A
.Em mirroring
discipline.
It copies data across more than one chunk to protect against data loss
in case of a chunk failure.
Read performance is increased,
though at the cost of write speed.
Unlike traditional RAID 1,
.Nm
supports the use of more than two chunks in a RAID 1 setup.
.It RAID 5
A striping discipline with
.Em floating parity
across all chunks.
It stripes data across chunks and provides parity to prevent data loss
in case of a single chunk failure.
Read performance is increased;
write performance does incur additional overhead.
.It CRYPTO
An
.Em encrypting
discipline.
It encrypts data on a single chunk to provide for data confidentiality.
CRYPTO does not provide redundancy.
.It CONCAT
A
.Em concatenating
discipline.
It writes data to each chunk in sequence to provide increased capacity.
CONCAT does not provide redundancy.
.It RAID 1C
A
.Em mirroring
and
.Em encrypting
discipline.
It encrypts data to provide for data confidentiality and copies the
encrypted data across more than one chunk to prevent data loss in
case of a chunk failure.
Unlike traditional RAID 1,
.Nm
supports the use of more than two chunks in a RAID 1C setup.
.El
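.Pp
Each discipline is created with
.Xr bioctl 8
by passing the discipline name to the
.Fl c
option.
For instance, assuming spare RAID partitions sd1a and sd2a
(the device names here are examples only):
.Bd -literal -offset indent
# bioctl -c 1 -l /dev/sd1a,/dev/sd2a softraid0    # RAID 1
# bioctl -c C -l /dev/sd1a softraid0              # CRYPTO
.Ed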
.Pp
.Xr installboot 8
may be used to install
.Xr boot 8
in the boot storage area of the
.Nm
volume.
Boot support is currently limited to the CRYPTO and RAID 1 disciplines
on amd64, arm64, i386, and sparc64 platforms.
On sparc64, bootable chunks must be RAID partitions using the letter
.Sq a .
At the
.Xr boot 8
prompt, softraid volumes have names beginning with
.Sq sr
and can be booted from like a normal disk device.
CRYPTO volumes will require a decryption passphrase or keydisk at boot time.
.Pp
The status of
.Nm
volumes is reported via
.Xr sysctl 8
such that it can be monitored by
.Xr sensorsd 8 .
Each volume has one fourth level node named
.Va hw.sensors.softraid0.drive Ns Ar N ,
where
.Ar N
is a small integer indexing the volume.
The format of the volume status is:
.Pp
.D1 Ar value Po Ar device Pc , Ar status
.Pp
The
.Ar device
identifies the
.Nm
volume.
The following combinations of
.Ar value
and
.Ar status
can occur:
.Bl -tag -width Ds -offset indent
.It Sy online , OK
The volume is operating normally.
.It Sy degraded , WARNING
The volume as a whole is operational, but not all of its chunks are.
In many cases, using
.Xr bioctl 8
.Fl R
to rebuild the failed chunk is advisable.
.It Sy rebuilding , WARNING
A rebuild operation was recently started and has not yet completed.
.It Sy failed , CRITICAL
The device is currently unable to process I/O.
.It Sy unknown , UNKNOWN
The status is unknown to the system.
.El
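.Pp
For example, the status of the first volume can be queried directly;
the volume index and device name shown are examples only:
.Bd -literal -offset indent
# sysctl hw.sensors.softraid0.drive0
hw.sensors.softraid0.drive0=online (sd0), OK
.Ed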
.Sh EXAMPLES
An example to create a 3 chunk RAID 1 from scratch is as follows:
.Pp
Initialize the partition tables of all disks:
.Bd -literal -offset indent
# fdisk -iy wd1
# fdisk -iy wd2
# fdisk -iy wd3
.Ed
.Pp
Now create RAID partitions on all disks:
.Bd -literal -offset indent
# printf "a\en\en\en\enRAID\enw\enq\en" | disklabel -E wd1
# printf "a\en\en\en\enRAID\enw\enq\en" | disklabel -E wd2
# printf "a\en\en\en\enRAID\enw\enq\en" | disklabel -E wd3
.Ed
.Pp
Assemble the RAID volume:
.Bd -literal -offset indent
# bioctl -c 1 -l /dev/wd1a,/dev/wd2a,/dev/wd3a softraid0
.Ed
.Pp
The console will show what device was added to the system:
.Bd -literal -offset indent
scsibus0 at softraid0: 1 targets
sd0 at scsibus0 targ 0 lun 0: <OPENBSD, SR RAID 1, 001> SCSI2
sd0: 1MB, 0 cyl, 255 head, 63 sec, 512 bytes/sec, 3714 sec total
.Ed
.Pp
It is good practice to wipe the front of the disk before using it:
.Bd -literal -offset indent
# dd if=/dev/zero of=/dev/rsd0c bs=1m count=1
.Ed
.Pp
Initialize the partition table and create a filesystem on the
new RAID volume:
.Bd -literal -offset indent
# fdisk -iy sd0
# printf "a\en\en\en\en4.2BSD\enw\enq\en" | disklabel -E sd0
# newfs /dev/rsd0a
.Ed
.Pp
The RAID volume is now ready to be used as a normal disk device.
See
.Xr bioctl 8
for more information on configuration of RAID sets.
.Pp
Install
.Xr boot 8
on the RAID volume:
.Bd -literal -offset indent
# installboot sd0
.Ed
.Pp
At the
.Xr boot 8
prompt, load the /bsd kernel from the RAID volume:
.Bd -literal -offset indent
boot> boot sr0a:/bsd
.Ed
.Sh SEE ALSO
.Xr bio 4 ,
.Xr bioctl 8 ,
.Xr boot_sparc64 8 ,
.Xr disklabel 8 ,
.Xr fdisk 8 ,
.Xr installboot 8 ,
.Xr newfs 8
.Sh HISTORY
The
.Nm
driver first appeared in
.Ox 4.2 .
.Sh AUTHORS
.An Marco Peereboom .
.Sh CAVEATS
The driver relies on underlying hardware to properly fail chunks.
.Pp
The RAID 1 discipline does not initialize the mirror upon creation.
This is by design: every sector that is read will have been written
first, so there is no point in wasting time syncing random data.
.Pp
The RAID 5 discipline does not initialize parity upon creation;
instead, parity is only updated upon write.
.Pp
Stacking disciplines (CRYPTO on top of RAID 1, for example) is not
supported at this time.
.Pp
Currently there is no automated mechanism to recover from failed disks.
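.Pp
A degraded volume can, however, be rebuilt manually onto a replacement
chunk using
.Xr bioctl 8
.Fl R ;
the device names below are examples only:
.Bd -literal -offset indent
# bioctl -R /dev/wd3a sd0
.Ed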
.Pp
Certain RAID levels can protect against some data loss
due to component failure.
RAID is
.Em not
a substitute for good backup practices.
268