1.\"	$OpenBSD: softraid.4,v 1.54 2023/04/13 10:23:21 kn Exp $
2.\"
3.\" Copyright (c) 2007 Todd T. Fries   <todd@OpenBSD.org>
4.\" Copyright (c) 2007 Marco Peereboom <marco@OpenBSD.org>
5.\"
6.\" Permission to use, copy, modify, and distribute this software for any
7.\" purpose with or without fee is hereby granted, provided that the above
8.\" copyright notice and this permission notice appear in all copies.
9.\"
10.\" THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
11.\" WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
12.\" MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
13.\" ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
14.\" WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
15.\" ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
16.\" OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
17.\"
18.Dd $Mdocdate: April 13 2023 $
19.Dt SOFTRAID 4
20.Os
21.Sh NAME
22.Nm softraid
23.Nd software RAID
24.Sh SYNOPSIS
25.Cd "softraid0 at root"
26.Sh DESCRIPTION
The
.Nm
device emulates a Host Bus Adapter (HBA) that provides RAID and other I/O
related services.
The
.Nm
device provides a scaffold to implement more complex I/O transformation
disciplines.
For example, one can tie chunks together into a mirroring discipline.
There really is no limit on what type of discipline one can write as long
as it fits the SCSI model.
.Pp
.Nm
supports a number of
.Em disciplines .
A discipline is a collection of functions
that provides specific I/O functionality.
This includes I/O path, bring-up, failure recovery, and statistical
information gathering.
Essentially a discipline is a lower
level driver that provides the I/O transformation for the softraid
device.
.Pp
A
.Em volume
is a virtual disk device that is made up of a collection of chunks.
.Pp
A
.Em chunk
is a partition or storage area of fstype
.Dq RAID .
.Xr disklabel 8
is used to alter the fstype.
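.Pp
For example, an existing partition's fstype can be changed to RAID
interactively; a minimal sketch, assuming the disk is wd1 and with
prompt defaults that are illustrative only:
.Bd -literal -offset indent
# disklabel -E wd1
Label editor (enter '?' for help at any prompt)
wd1> m a
offset: [64]
size: [41929552]
FS type: [4.2BSD] RAID
wd1*> w
wd1> q
.Ed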
.Pp
Currently
.Nm
supports the following disciplines:
.Bl -ohang -offset indent
.It RAID 0
A
.Em striping
discipline.
It segments data over a number of chunks to increase performance.
RAID 0 does not provide redundancy to protect against data loss.
.It RAID 1
A
.Em mirroring
discipline.
It copies data across more than one chunk to protect against data loss.
Read performance is increased,
though at the cost of write speed.
Unlike traditional RAID 1,
.Nm
supports the use of more than two chunks in a RAID 1 setup.
.It RAID 5
A striping discipline with
.Em floating parity
across all chunks.
It stripes data across chunks and provides parity to prevent data loss
from a single chunk failure.
Read performance is increased;
write performance does incur additional overhead.
.It CRYPTO
An
.Em encrypting
discipline.
It encrypts data on a single chunk to provide data confidentiality.
CRYPTO does not provide redundancy.
An example of creating a CRYPTO volume follows this list.
.It CONCAT
A
.Em concatenating
discipline.
It writes data to each chunk in sequence to provide increased capacity.
CONCAT does not provide redundancy.
.It RAID 1C
A
.Em mirroring
and
.Em encrypting
discipline.
It encrypts data to provide data confidentiality and copies the
encrypted data across more than one chunk to prevent data loss in
case of a chunk failure.
Unlike traditional RAID 1,
.Nm
supports the use of more than two chunks in a RAID 1C setup.
.El
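.Pp
For example, a CRYPTO volume can be created on a single chunk with
.Xr bioctl 8 ;
a minimal sketch, assuming the RAID partition is wd1a and with a
device name that is illustrative only:
.Bd -literal -offset indent
# bioctl -c C -l /dev/wd1a softraid0
New passphrase:
Re-type passphrase:
softraid0: CRYPTO volume attached as sd0
.Ed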
.Pp
.Xr installboot 8
may be used to install
.Xr boot 8
in the boot storage area of the
.Nm
volume.
Boot support is currently limited to the CRYPTO and RAID 1 disciplines
on the amd64, arm64, i386, riscv64 and sparc64 platforms.
amd64, arm64, riscv64 and sparc64 also have boot support for the
RAID 1C discipline.
On sparc64, bootable chunks must be RAID partitions using the letter
.Sq a .
At the
.Xr boot 8
prompt, softraid volumes have names beginning with
.Sq sr
and can be booted from like a normal disk device.
CRYPTO and RAID 1C volumes will require a decryption passphrase or
keydisk at boot time.
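.Pp
For example, a CRYPTO volume that uses a keydisk instead of a
passphrase can be created with the
.Fl k
option of
.Xr bioctl 8 ;
a sketch where the chosen device names are illustrative, assuming the
key material resides on the small RAID partition sd1a and the data
chunk is sd0a:
.Bd -literal -offset indent
# bioctl -c C -k /dev/sd1a -l /dev/sd0a softraid0
.Ed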
.Pp
The status of
.Nm
volumes is reported via
.Xr sysctl 8
such that it can be monitored by
.Xr sensorsd 8 .
Each volume has one fourth level node named
.Va hw.sensors.softraid0.drive Ns Ar N ,
where
.Ar N
is a small integer indexing the volume.
The format of the volume status is:
.Pp
.D1 Ar value Po Ar device Pc , Ar status
.Pp
The
.Ar device
identifies the
.Nm
volume.
The following combinations of
.Ar value
and
.Ar status
can occur:
.Bl -tag -width Ds -offset indent
.It Sy online , OK
The volume is operating normally.
.It Sy degraded , WARNING
The volume as a whole is operational, but not all of its chunks are.
In many cases, using
.Xr bioctl 8
.Fl R
to rebuild the failed chunk is advisable;
an example follows this list.
.It Sy rebuilding , WARNING
A rebuild operation was recently started and has not yet completed.
.It Sy failed , CRITICAL
The device is currently unable to process I/O.
.It Sy unknown , UNKNOWN
The status is unknown to the system.
.El
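.Pp
For example, the status of the first volume can be queried directly;
the device name shown is illustrative:
.Bd -literal -offset indent
$ sysctl hw.sensors.softraid0.drive0
hw.sensors.softraid0.drive0=online (sd0), OK
.Ed
.Pp
A degraded volume can be rebuilt onto a replacement chunk,
for example:
.Bd -literal -offset indent
# bioctl -R /dev/wd3a sd0
.Ed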
.Sh EXAMPLES
An example to create a 3-chunk RAID 1 volume from scratch is as follows:
.Pp
Initialize the partition tables of all disks:
.Bd -literal -offset indent
# fdisk -iy wd1
# fdisk -iy wd2
# fdisk -iy wd3
.Ed
.Pp
Now create RAID partitions on all disks:
.Bd -literal -offset indent
# echo 'RAID *' | disklabel -wAT- wd1
# echo 'RAID *' | disklabel -wAT- wd2
# echo 'RAID *' | disklabel -wAT- wd3
.Ed
.Pp
Assemble the RAID volume:
.Bd -literal -offset indent
# bioctl -c 1 -l /dev/wd1a,/dev/wd2a,/dev/wd3a softraid0
.Ed
.Pp
The console will show what device was added to the system:
.Bd -literal -offset indent
scsibus0 at softraid0: 1 targets
sd0 at scsibus0 targ 0 lun 0: <OPENBSD, SR RAID 1, 001> SCSI2
sd0: 1MB, 0 cyl, 255 head, 63 sec, 512 bytes/sec, 3714 sec total
.Ed
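.Pp
The state of the volume and its chunks can be inspected with
.Xr bioctl 8 ;
output along the following lines can be expected, with sizes that
are illustrative only:
.Bd -literal -offset indent
# bioctl sd0
Volume      Status               Size Device
softraid0 0 Online            1901568 sd0     RAID1
          0 Online            1901568 0:0.0   noencl <wd1a>
          1 Online            1901568 1:0.0   noencl <wd2a>
          2 Online            1901568 2:0.0   noencl <wd3a>
.Ed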
.Pp
It is good practice to wipe the front of the disk before using it:
.Bd -literal -offset indent
# dd if=/dev/zero of=/dev/rsd0c bs=1m count=1
.Ed
.Pp
Initialize the partition table and create a filesystem on the
new RAID volume:
.Bd -literal -offset indent
# fdisk -iy sd0
# echo '/ *' | disklabel -wAT- sd0
# newfs /dev/rsd0a
.Ed
.Pp
The RAID volume is now ready to be used as a normal disk device.
See
.Xr bioctl 8
for more information on configuration of RAID sets.
.Pp
Install
.Xr boot 8
on the RAID volume:
.Bd -literal -offset indent
# installboot sd0
.Ed
.Pp
At the
.Xr boot 8
prompt, load the /bsd kernel from the RAID volume:
.Bd -literal -offset indent
boot> boot sr0a:/bsd
.Ed
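.Pp
To take a volume offline without deleting its data, unmount any
filesystems on it and detach it, for example:
.Bd -literal -offset indent
# bioctl -d sd0
.Ed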
.Sh SEE ALSO
.Xr bio 4 ,
.Xr bioctl 8 ,
.Xr boot_sparc64 8 ,
.Xr disklabel 8 ,
.Xr fdisk 8 ,
.Xr installboot 8 ,
.Xr newfs 8
.Sh HISTORY
The
.Nm
driver first appeared in
.Ox 4.2 .
.Sh AUTHORS
.An Marco Peereboom .
.Sh CAVEATS
The driver relies on underlying hardware to properly fail chunks.
.Pp
The RAID 1 discipline does not initialize the mirror upon creation.
This is by design because all sectors that are read are written first.
There is no point in wasting a lot of time syncing random data.
.Pp
The RAID 5 discipline does not initialize parity upon creation; instead,
parity is only updated upon write.
.Pp
Stacking disciplines (CRYPTO on top of RAID 1, for example) is not
supported at this time.
.Pp
Currently there is no automated mechanism to recover from failed disks.
.Pp
Certain RAID levels can protect against some data loss
due to component failure.
RAID is
.Em not
a substitute for good backup practices.
270