SYSTEM BUILDING PROCESS
In this section we consider the steps necessary to build a bootable system image. We assume the system source is located in the ``/sys'' directory and that, initially, the system is being configured from source code.
Under normal circumstances there are 5 steps in building a system:
1) Create a configuration file for the system.
2) Make a directory in which the system is to be constructed.
3) Run config on the configuration file to generate the files required to compile and load the system image.
4) Construct the source code interdependency rules for the configured system with ``make depend''.
5) Compile and load the system with make.
Steps 1 and 2 are usually done only once. When a system configuration changes it usually suffices to just run config on the modified configuration file, rebuild the source code dependencies, and remake the system. Sometimes, however, configuration dependencies may not be noticed, in which case it is necessary to clean out the relocatable object files saved in the system's directory; this will be discussed later.
Creating a configuration file
Configuration files normally reside in the directory ``/sys/conf''. A configuration file is most easily constructed by copying an existing configuration file and modifying it. The 4.3BSD distribution contains a number of configuration files for machines at Berkeley; one may be suitable or, in the worst case, a copy of the generic configuration file may be edited.
The configuration file must have the same name as the directory in which the configured system is to be built. Further, config assumes this directory is located in the parent directory of the directory in which it is run. For example, the generic system has a configuration file ``/sys/conf/GENERIC'' and an accompanying directory named ``/sys/GENERIC''. Although it is not required that the system sources and configuration files reside in ``/sys,'' the configuration and compilation procedure depends on the relative locations of directories within that hierarchy, as most of the system code and the files created by config use pathnames of the form ``../''. If the system files are not located in ``/sys,'' it is desirable to make a symbolic link there for use in installation of other parts of the system that share files with the kernel.
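For example, a configuration for a hypothetical machine called MYVAX (the name is used here purely for illustration) might be started with something like:

        cd /sys/conf
        cp GENERIC MYVAX        # start from the generic configuration
        chmod +w MYVAX
        vi MYVAX                # trim and adjust for the local hardware
        mkdir ../MYVAX          # directory in which the system will be built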
When building the configuration file, be sure to include the items described in section 2. In particular, the machine type, cpu type, timezone, system identifier, maximum users, and root device must be specified. The specification of the hardware present may take a bit of work, particularly if your hardware is configured at non-standard places (e.g. device registers located at funny places or devices not supported by the system). Section 4 of this document gives a detailed description of the configuration file syntax, section 5 explains some sample configuration files, and section 6 discusses how to add new devices to the system. If the devices to be configured are not already described in one of the existing configuration files, you should check the manual pages in section 4 of the UNIX Programmer's Manual. For each supported device, the manual page synopsis entry gives a sample configuration line.
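As a rough sketch, the global portion of the hypothetical MYVAX configuration file might contain lines of the following form (the syntax is covered in section 4, and the values shown are illustrative only):

        machine         vax
        cpu             "VAX780"
        ident           MYVAX
        timezone        8 dst
        maxusers        32
        config          vmunix  root on hp0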
Once the configuration file is complete, run it through config and look for any errors. Never try to use a system which config has complained about; the results are unpredictable. For the most part, config's error diagnostics are self-explanatory. It may be the case that the line numbers given with the error messages are off by one.
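Continuing the hypothetical MYVAX example, the file would be checked by running config from the configuration directory:

        cd /sys/conf
        config MYVAX            # writes its output into ../MYVAX

If any diagnostics appear, correct the configuration file and run config again before proceeding.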
A successful run of config on your configuration file will generate a number of files in the configuration directory, among them the makefile used to build the system and several configuration-dependent source and header files.
Unless you have reason to doubt config, or are curious how the system's autoconfiguration scheme works, you should never have to look at any of these files.
Constructing source code dependencies
When config is done generating the files needed to compile and link your system, it will terminate with a message of the form ``Don't forget to run make depend''. This is a reminder that you should change over to the configuration directory for the system just configured and type ``make depend'' to build the rules used by make to recognize interdependencies in the system source code. This will ensure that any changes to a piece of the system source code will result in the proper modules being recompiled the next time make is run.
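For the hypothetical MYVAX configuration, the sequence would be:

        cd /sys/MYVAX
        make depend             # build the dependency rules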
This step is particularly important if your site makes changes to the system include files. The rules generated specify which source code files are dependent on which include files. Without these rules, make will not recognize when it must rebuild modules due to the modification of a system header file. The dependency rules are generated by a pass of the C preprocessor and reflect the global system options. This step must be repeated when the configuration file is changed and config is used to regenerate the system makefile.
Building the system
The makefile constructed by config should allow a new system to be rebuilt by simply typing ``make image-name''. For example, if you have named your bootable system image ``vmunix'', then ``make vmunix'' will generate a bootable image named ``vmunix''. Alternate system image names are used when the root file system location and/or swapping configuration is done in more than one way. The makefile which config creates has entry points for each system image defined in the configuration file. Thus, if you have configured ``vmunix'' to be a system with the root file system on an ``hp'' device and ``hkvmunix'' to be a system with the root file system on an ``hk'' device, then ``make vmunix hkvmunix'' will generate binary images for each. As the system will generally use the disk from which it is loaded as the root filesystem, separate system images are only required to support different swap configurations.
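For example, continuing with the hypothetical MYVAX configuration:

        cd /sys/MYVAX
        make vmunix             # build the image named vmunix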
Note that the name of a bootable image is different from the system identifier. All bootable images are configured for the same system; only the information about the root file system and paging devices differ. (This is described in more detail in section 4.)
The last step in the system building process is to rearrange certain commonly used symbols in the symbol table of the system image; the makefile generated by config does this automatically for you. This is advantageous for programs such as netstat(1) and vmstat(1), which run much faster when the symbols they need are located at the front of the symbol table. Remember also that many programs expect the currently executing system to be named ``/vmunix''. If you install a new system and name it something other than ``/vmunix'', many programs are likely to give strange results.
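One conservative installation sequence, shown here only as a sketch (the name ``/vmunix.old'' is illustrative), saves the running system before putting the new image in place:

        cp vmunix /vmunix.new   # copy the new image to the root
        mv /vmunix /vmunix.old  # preserve the old system, just in case
        mv /vmunix.new /vmunix  # install the new system
        reboot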
Sharing object modules
If you have many systems which are all built on a single machine there are at least two approaches to saving time in building system images. The best way is to have a single system image which is run on all machines. This is attractive since it minimizes disk space used and time required to rebuild systems after making changes. However, it is often the case that one or more systems will require a separately configured system image. This may be due to limited memory (building a system with many unused device drivers can be expensive), or to configuration requirements (one machine may be a development machine where disk quotas are not needed, while another is a production machine where they are), etc. In these cases it is possible for common systems to share relocatable object modules which are not configuration dependent; most of the modules in the directory ``/sys/sys'' are of this sort.
To share object modules, a generic system should be built. Then, for each system, configure as before, but before recompiling and linking, type ``make links'' in the system compilation directory. This will cause the system source to be searched for modules which are safe to share between systems, and symbolic links to the appropriate object modules in the directory ``../GENERIC'' to be generated in the current directory. A shell script, ``makelinks'', is generated with this request and may be checked for correctness. The file ``/sys/conf/defines'' contains a list of symbols which we believe are safe to ignore when checking the source code for modules which may be shared. Note that this list includes the definitions used to conditionally compile in the virtual memory tracing facilities and the trace point support used only rarely (even at Berkeley). It may be necessary to modify this file to reflect local needs. Note further that interdependencies which are not directly visible in the source code are not caught. This means that if you place per-system dependencies in an include file, they will not be recognized and the shared code may be selected in an unexpected fashion.
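The sequence for a system sharing object modules with the generic system might look like the following, again using the hypothetical MYVAX configuration and assuming ``/sys/GENERIC'' has already been built:

        cd /sys/MYVAX
        make links              # create links to sharable objects in ../GENERIC
        make vmunix             # compile only the modules that are not shared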
Building profiled systems
It is simple to configure a system which will automatically collect profiling information as it operates. The profiling data may be collected with kgmon(8) and processed with gprof(1) to obtain information regarding the system's operation. Profiled systems maintain histograms of the program counter as well as the number of invocations of each routine. The gprof command will also generate a dynamic call graph of the executing system and propagate time spent in each routine along the arcs of the call graph (consult the gprof documentation for elaboration). The program counter sampling can be driven by the system clock, or by an alternate real time clock if one is available. The latter is highly recommended, as use of the system clock will result in statistical anomalies, and time spent in the clock routine will not be accurately attributed.
To configure a profiled system, the -p option should be supplied to config. A profiled system is about 5-10% larger in its text space due to the calls to count the subroutine invocations. When the system executes, the profiling data is stored in a buffer which is 1.2 times the size of the text space. The overhead for running a profiled system varies; under normal load we see anywhere from 5-25% of the system time spent in the profiling code.
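A sketch of the complete cycle for the hypothetical MYVAX configuration follows; the kgmon and gprof commands are as described in their manual pages, but the file names are illustrative:

        config -p MYVAX         # configure a profiled system
        cd ../MYVAX
        make depend
        make vmunix

After the profiled system has been installed and booted, kgmon(8) controls the collection of data:

        kgmon -b                # begin collecting profiling data
        kgmon -h                # halt collection after a representative load
        kgmon -p                # dump the accumulated data into gmon.out
        gprof /vmunix gmon.out > profile   # process the data with gprof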
Note that systems configured for profiling should not be shared as described above unless all the other shared systems are also to be profiled.