1.\" $OpenBSD: crash.8,v 1.37 2022/09/11 06:38:11 jmc Exp $ 2.\" 3.\" Copyright (c) 1980, 1991 The Regents of the University of California. 4.\" All rights reserved. 5.\" 6.\" Redistribution and use in source and binary forms, with or without 7.\" modification, are permitted provided that the following conditions 8.\" are met: 9.\" 1. Redistributions of source code must retain the above copyright 10.\" notice, this list of conditions and the following disclaimer. 11.\" 2. Redistributions in binary form must reproduce the above copyright 12.\" notice, this list of conditions and the following disclaimer in the 13.\" documentation and/or other materials provided with the distribution. 14.\" 3. Neither the name of the University nor the names of its contributors 15.\" may be used to endorse or promote products derived from this software 16.\" without specific prior written permission. 17.\" 18.\" THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND 19.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 20.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE 21.\" ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE 22.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 23.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 24.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 25.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT 26.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY 27.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF 28.\" SUCH DAMAGE. 29.\" 30.\" from: @(#)crash.8 6.5 (Berkeley) 4/20/91 31.\" 32.Dd $Mdocdate: September 11 2022 $ 33.Dt CRASH 8 34.Os 35.Sh NAME 36.Nm crash 37.Nd system failure and diagnosis 38.Sh DESCRIPTION 39This section explains what happens when the system crashes 40and (very briefly) how to analyze crash dumps. 41.Pp 42When the system crashes voluntarily it prints a message of the form 43.Bd -literal -offset indent 44panic: why i gave up the ghost 45.Ed 46.Pp 47on the console and enters the kernel debugger, 48.Xr ddb 4 . 49.Pp 50If you wish to report this panic, you should include the output of 51the 52.Ic ps 53and 54.Ic trace 55commands. 56Unless the 57.Sq ddb.log 58sysctl has been disabled, anything output to screen will be 59appended to the system message buffer, from where it may be 60possible to retrieve it through the 61.Xr dmesg 8 62command after a warm reboot. 63If the debugger command 64.Ic boot dump 65is entered, or if the debugger was not compiled into the kernel, or 66the debugger was disabled with 67.Xr sysctl 8 , 68then the system dumps the contents of physical memory 69onto a mass storage peripheral device. 70The particular device used is determined by the 71.Sq dumps on 72directive in the 73.Xr config 8 74file used to build the kernel. 75.Pp 76After the dump has been written, the system then 77invokes the automatic reboot procedure as 78described in 79.Xr reboot 8 . 80If auto-reboot is disabled (in a machine dependent way), the system 81will simply halt at this point. 82.Pp 83Upon rebooting, and 84unless some unexpected inconsistency is encountered in the state 85of the file systems due to hardware or software failure, the system 86will copy the previously written dump into 87.Pa /var/crash 88using 89.Xr savecore 8 , 90before resuming multi-user operations. 
.Ss Causes of system failure
The system has a large number of internal consistency checks; if one
of these fails, then it will panic with a very short message indicating
which one failed.
In many instances, this will be the name of the routine which detected
the error, or a two-word description of the inconsistency.
A full understanding of most panic messages requires perusal of the
source code for the system.
.Pp
The most common cause of system failures is hardware failure
.Pq e.g., bad memory
which
can manifest itself in different ways.
Here are the messages which are most likely, with some hints as to their causes.
Left unstated in all cases is the possibility that a hardware or software
error produced the message in some unexpected way.
.Bl -tag -width indent
.It no init
This panic message indicates filesystem problems, and reboots are likely
to be futile.
Late in the bootstrap procedure, the system was unable to
locate and execute the initialization process,
.Xr init 8 .
The root filesystem is incorrect or has been corrupted, or the mode
or type of
.Pa /sbin/init
forbids execution.
.It trap type %d, code=%x, pc=%x
An unexpected trap has occurred within the system; the trap types are
machine dependent and can be found listed in
.Pa /sys/arch/ARCH/include/trap.h .
.Pp
The code is the referenced address, and the pc is the program counter
at the time of the fault.
Hardware flakiness will sometimes generate this panic, but if the cause
is a kernel bug,
the kernel debugger
.Xr ddb 4
can be used to locate the instruction and subroutine inside the kernel
corresponding
to the PC value.
If that is insufficient to suggest the nature of the problem,
more detailed examination of the system status at the time of the trap
usually can produce an explanation.
.It init died
The system initialization process has exited.
This is bad news, as no new users will then be able to log in.
Rebooting is the only fix, so the system just does it right away.
.It out of mbufs: map full
The network has exhausted its private page map for network buffers.
This usually indicates that buffers are being lost, and rather than
allow the system to slowly degrade, it reboots immediately.
The map may be made larger if necessary.
.El
.Pp
That completes the list of panic types you are likely to see.
.Ss Analyzing a dump
When the system crashes it writes (or at least attempts to write)
an image of memory, including the kernel image, onto the dump device.
On reboot, the kernel image and memory image are separated and preserved in
the directory
.Pa /var/crash .
.Pp
To analyze the kernel and memory images preserved as
.Pa bsd.0
and
.Pa bsd.0.core ,
you should run
.Xr gdb 1 ,
loading in the images with the following commands:
.Bd -literal -offset indent
# gdb
GNU gdb 6.3
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "i386-unknown-openbsd4.6".
(gdb) file /var/crash/bsd.0
Reading symbols from /var/crash/bsd.0...(no debugging symbols found)...done.
(gdb) target kvm /var/crash/bsd.0.core
.Ed
.Pp
[Note that the
.Dq kvm
target is currently only supported by
.Xr gdb 1
on some architectures.]
.Pp
After this, you can use the
.Ic where
command to show a trace of the procedure calls that led to the crash.
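.Pp
From there, other standard
.Xr gdb 1
commands can be used to walk the stack; the sketch below uses an
arbitrary frame number, and source listings and local variables are
only available with a kernel that carries debug information, as
described below:
.Bd -literal -offset indent
(gdb) where
(gdb) frame 3
(gdb) list
(gdb) info locals
.Ed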
.Pp
For custom-built kernels, you should use
.Pa bsd.gdb
instead of
.Pa bsd ,
thus allowing
.Xr gdb 1
to show symbolic names for addresses and line numbers from the source.
.Pp
Analyzing saved system images is sometimes called post-mortem debugging.
There is a class of analysis tools designed to work on
both live systems and saved images; most of them are linked with the
.Xr kvm 3
library and share option flags to specify the kernel and memory image.
These tools typically take the following flags:
.Bl -tag -width indent
.It Fl M Ar core
Normally this
.Ar core
is an image produced by
.Xr savecore 8
but it can be
.Pa /dev/mem
too, if you are looking at the live system.
.It Fl N Ar system
Takes a kernel
.Ar system
image as an argument.
This is where the symbolic information is obtained,
which means the image cannot be stripped.
In some cases, using a
.Pa bsd.gdb
version of the kernel can assist even more.
.El
.Pp
The following commands understand these options:
.Xr fstat 1 ,
.Xr netstat 1 ,
.Xr nfsstat 1 ,
.Xr ps 1 ,
.Xr w 1 ,
.Xr dmesg 8 ,
.Xr iostat 8 ,
.Xr kgmon 8 ,
.Xr pstat 8 ,
.Xr trpt 8 ,
.Xr vmstat 8
and many others.
There are exceptions, however.
For instance,
.Xr ipcs 1
has renamed the
.Fl M
argument to be
.Fl C
instead.
.Pp
Examples of use:
.Bd -literal -offset indent
# ps -N /var/crash/bsd.0 -M /var/crash/bsd.0.core -O paddr
.Ed
.Pp
The
.Fl O Ar paddr
option prints each process'
.Vt struct proc
address.
This is very useful information if you are analyzing process contexts in
.Xr gdb 1 .
.Bd -literal -offset indent
# vmstat -N /var/crash/bsd.0 -M /var/crash/bsd.0.core -m
.Ed
.Pp
This analyzes memory allocations at the time of the crash.
Perhaps some resource was starving the system?
.Ss Analyzing a live kernel
Like the tools mentioned above,
.Xr gdb 1
can be used to analyze a live system as well.
This can be accomplished by not specifying a crash dump when selecting the
.Dq kvm
target:
.Bd -literal -offset indent
(gdb) target kvm
.Ed
.Pp
It is possible to inspect processes that entered the kernel by
specifying a process'
.Vt struct proc
address to the
.Ic kvm proc
command:
.Bd -literal -offset indent
(gdb) kvm proc 0xd69dada0
#0  0xd0355d91 in sleep_finish (sls=0x0, do_sleep=0)
    at ../../../../kern/kern_synch.c:217
217             mi_switch();
.Ed
.Pp
After this, the
.Ic where
command will show a trace of procedure calls, right back to where the
selected process entered the kernel.
.Sh CRASH LOCATION DETERMINATION
The following example should make it easier for a novice kernel
developer to find out where the kernel crashed.
.Pp
First, in
.Xr ddb 4
find the function that caused the crash.
It is either the function at the top of the traceback or the function
under the call to
.Fn panic
or
.Fn uvm_fault .
.Pp
The point of the crash usually looks something like
"function+0x4711".
.Pp
Find the function in the sources; let's say that the function is in
"foo.c".
.Pp
Go to the kernel build directory, e.g.,
.Pa /sys/arch/ARCH/compile/GENERIC ,
and do the following:
.Bd -literal -offset indent
# objdump -S foo.o | less
.Ed
.Pp
Find the function in the output.
The function will look something like this:
.Bd -literal -offset indent
0:   17 47 11 42    foo %x, bar, %y
4:   foo bar allan %kaka
8:   XXXX boink %bloyt
etc.
.Ed
.Pp
The first number is the offset.
Find the offset that you got in the ddb trace
(in this case it's 4711).
.Pp
When reporting data collected in this way, include ~20 lines before and ~10
lines after the offset from the objdump output in the crash report, as well
as the output of
.Xr ddb 4 Ns 's
"show registers" command.
It's important that the output from objdump includes at least two or
three lines of C code.
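.Pp
As a sketch, assuming the offset reported by
.Xr ddb 4
was 0x4711 and that your
.Xr grep 1
supports the
.Fl A
and
.Fl B
context options, the surrounding lines can be cut out of the objdump
output directly:
.Bd -literal -offset indent
# objdump -S foo.o | grep -B 20 -A 10 '4711:'
.Ed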
.Sh REPORTING
If you are sure you have found a reproducible software bug in the kernel,
and need help in further diagnosis, or already have a fix, use
.Xr sendbug 1
to send the developers a detailed description including the entire session
from
.Xr gdb 1 .
.Sh SEE ALSO
.Xr gdb 1 ,
.Xr sendbug 1 ,
.Xr ddb 4 ,
.Xr reboot 8 ,
.Xr savecore 8