
----------------------------------------------------------------------------------------------------------------
Name: Kadam Prashant Nanabhau     Roll No.: 61     T.E. Comp (A)     Batch: C
----------------------------------------------------------------------------------------------------------------

Kernel: A kernel is the central component of an operating system. It acts as an interface between user applications and the hardware. The sole aim of the kernel is to manage the communication between the software (user-level applications) and the hardware (CPU, disk, memory, etc.). The main tasks of the kernel are:

- Process management
- Device management
- Memory management
- Interrupt handling
- I/O communication
- File system management, etc.

Is Linux a Kernel or an OS?


Well, there is a difference between a kernel and an OS. The kernel, as described above, is the heart of the OS and manages its core features, while an OS is the complete package you get when useful applications and utilities are added on top of the kernel. So it can easily be said that an operating system consists of a kernel space and a user space. In that sense Linux is a kernel, as it does not include applications like file-system utilities, windowing systems and graphical desktops, system administrator commands, text editors, compilers, etc. Various companies and projects add these kinds of applications on top of the Linux kernel and provide their own operating systems, such as Ubuntu, SUSE, CentOS and Red Hat.

Types of Kernels: Kernels may be classified mainly into two categories: 1. Monolithic kernels 2. Microkernels

1. Monolithic Kernels: In the earlier form of this kernel architecture, all the basic system services like process and memory management, interrupt handling, etc. were packaged into a single module in kernel space. This type of architecture had some serious drawbacks: 1) the size of the kernel, which was huge; 2) poor maintainability, which means that bug fixing or the addition of new features required recompilation of the whole kernel, which could take hours.

In the modern approach to monolithic architecture, the kernel consists of different modules which can be dynamically loaded and unloaded. This modular approach allows easy extension of the OS's capabilities. With this approach, maintainability of the kernel became much easier, as only the concerned module needs to be loaded and unloaded when there is a change or bug fix in a particular module; there is no need to bring down and recompile the whole kernel for the smallest change. Also, stripping the kernel down for various platforms (say, for embedded devices) became very easy, as we can simply leave out the modules we do not want. Linux follows this monolithic, modular approach.
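In the Linux case, the dynamic loading and unloading mentioned above is done with the standard module utilities. A minimal sketch, assuming a module named ``floppy'' is available under /lib/modules for the running kernel (any module name from that directory will do):

    lsmod              # list the modules currently loaded in the running kernel
    modprobe floppy    # load the module (and any modules it depends on)
    rmmod floppy       # remove it again when it is no longer needed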

2. Microkernels: This architecture mainly addresses the problem of the ever-growing size of kernel code, which we could not control in the monolithic approach. It allows some basic services like device driver management, protocol stacks, file systems, etc. to run in user space. This reduces the kernel code size and also increases the security and stability of the OS, as only the bare minimum of code runs in kernel mode. So if, say, a basic service like the network service crashes due to a buffer overflow, only the networking service's memory would be corrupted, leaving the rest of the system still functional.

In this architecture, all the basic OS services which are made part of user space run as servers, which are used by other programs in the system through inter-process communication (IPC); e.g. there are servers for device drivers, network protocol stacks, file systems, graphics, etc. Microkernel servers are essentially daemon programs like any others, except that the kernel grants some of them privileges to interact with parts of physical memory that are otherwise off limits to most programs. This allows some servers, particularly device drivers, to interact directly with hardware. These servers are started at system start-up. So, what is the bare minimum that the microkernel architecture recommends keeping in kernel space?

- Memory protection management
- Process scheduling
- Inter-process communication (IPC)

Apart from the above, all other basic services can be made part of user space and run in the form of servers.

Configuring and Compiling: Once equipped with the necessary tools, we need to run a few commands to get the work done. First of all, the source file must be decompressed. While I should not dig into such details but refer to the relevant README file, the README itself is inside the compressed package, so we have a chicken-and-egg problem. To uncompress and untar, usually ``tar xvzf linux-2.2.10.tar.gz'' or something similar will work. Most of the time you'll start from the pristine tar file (as described later, in the "Upgrading" section). If you install the source package you received with your own distribution, you'll find the source already installed in /usr/src, either as a linux-2.2.x directory or as a .tar.gz file, depending on the distribution you are using; in the latter case you should uncompress the file as stated above. Once you have the linux-2.2.x directory, the README file in there is a good resource to get started. Compiling from fresh source code is performed in four steps (a consolidated command sketch is given after the step descriptions):

``make config'': this command performs the task of configuring the system. It asks whether each feature should be compiled into the kernel, made available as a module, or just discarded. The number of questions ranges from about 50 to a few hundred (many questions just enable or disable further questions). Every question comes with associated help information, and the whole help file is more than 500 kB worth of data. The figures are huge, but don't be discouraged: there are alternatives to ``make config'' and I'll introduce them in a while.

``make depend'' instructs the system to check which files depend on which other files. This is very important in order to get a correct compiled image when you change your configuration. It is so important that it is automatically invoked by make if you forget about it.

``make bzImage'': this is the actual compilation step. It builds a bootable image file. If you are not running a PC but some other computer architecture, the proper compilation command will most likely be ``make boot'' instead. (``make boot'' for the PC is currently equivalent to ``make zImage'', but that is not usually what you want; more on this later.)

``make modules'': if you chose to compile some features as modules, you'll need to build them this way. When it's done you can run ``make modules_install'' to install them in the default place where kmod and other tools will look for them (i.e., they get installed into /lib/modules). Also, if you compiled modules for the kernel that is already running (if, for example, you forgot a device and have just added it to the configuration), you also need to invoke ``depmod -a''. I won't deal with the details of modularization here, as it's a whole topic of its own and the interested reader can refer to the relevant documentation.
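Putting the four steps together, a minimal end-to-end sketch might look like this (the /usr/src location and the 2.2.10 version number come from the examples above; adjust them to your own tree, and pick whichever configurator you prefer):

    cd /usr/src
    tar xvzf linux-2.2.10.tar.gz    # unpack the pristine source
    cd linux-2.2.10
    make config                     # or: make menuconfig / make xconfig
    make depend                     # recompute file dependencies
    make bzImage                    # build the bootable image (on a PC)
    make modules                    # build the features selected as modules
    make modules_install            # install them under /lib/modules
    depmod -a                       # rebuild the module dependency table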

The most important step, the one where user interaction is required, is configuration. Since the huge number of kernel parameters makes the standard ``make config'' tool impractical to use, two alternatives and a shortcut have been introduced as the kernel grew. ``make menuconfig'' uses a text-terminal pseudo-graphic interface; the tool employs the usual keys (arrows, Enter, Tab, ...) to toggle checkbuttons and similar widgets. Although menuconfig includes its own help screen (as well as the help associated with each question), I don't find it immediately usable unless you have already trained yourself with ``make config''. No extra package is needed by menuconfig, because the relevant configuration tool is included in the kernel source itself; for this reason, the first step of ``make menuconfig'' is compiling the tool itself. Figure 2 shows an xterm running the main table of menuconfig.

Figure 2: menuconfig running in an xterm

``make xconfig'' creates a graphical interface to the kernel questions. It is definitely the friendliest tool, but you'll need both Tcl/Tk installed and an X server currently running. Figure 3 is a screenshot of this configurator. (Don't be scared by the unfamiliar appearance of the desktop; I don't like default settings and run a different window manager than most people.) ``make oldconfig'', the shortcut, works exactly like ``make config'' but limits user interaction to asking the new questions, those that you didn't answer the last time you ran one of the three configurators. Even though both text-based configurators mark new questions as (NEW), the only easy way to deal only with new questions is running ``make oldconfig''. The default reply for new questions is ``No'', independent of which configurator you run. Whatever configuration tool is run, the output is saved as instruction files for both make and the C code. In the top-level source directory you'll find .config, the file used by make; the autoconf.h C header will be dropped into include/linux.
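As a concrete use of the shortcut, a common workflow is to carry an existing .config into a newer source tree and let ``make oldconfig'' ask only about the items that are new there. A minimal sketch (the directory names are only an illustration):

    cp /usr/src/linux-2.2.9/.config /usr/src/linux-2.2.10/.config   # reuse the old configuration
    cd /usr/src/linux-2.2.10
    make oldconfig    # only the questions that are new in this tree are asked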

The input files for the configurators are:

.config: the current configuration is used as input the next time you configure the system. Therefore, whenever you reconfigure a kernel you only need to edit the items you want to change, without re-reading all of the questions to see if they match your needs.

arch/i386/defconfig: the default configuration, used when no .config exists. I suspect the file represents Linus' own configuration. For other architectures, use the proper platform name instead of i386.

arch/i386/config.in: the questions to ask. The file includes other files (usually called Config.in) that reside in other directories and are shared by several hardware architectures.
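For orientation, .config is a plain text file with one CONFIG_ option per line: ``y'' means compiled into the kernel, ``m'' means built as a module, and a commented ``is not set'' line means the feature is disabled. The excerpt below is only an illustrative sketch; the exact options present depend on your kernel version and your answers:

    CONFIG_EXPERIMENTAL=y
    CONFIG_MODULES=y
    CONFIG_NET=y
    # CONFIG_IP_MULTICAST is not set
    CONFIG_BLK_DEV_FD=m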

Based on this information, the best way to deal with kernel configuration, in my opinion, is to use ``make xconfig'' the first time you are at it, and ``vi .config; make oldconfig'' for later refinements. For example, if you want to enable network multicast in a kernel you have already configured and compiled, the easiest way to accomplish the task is to remove the relevant line from .config and answer the question when running ``make oldconfig''. While you could edit .config (instead of just deleting lines), you'd need to run ``make oldconfig'' anyway, in order to synchronize the C headers (and, sometimes, to be asked new questions that depend on the ones you changed).

Booting your kernel: When compilation is over, you'll find a bootable image somewhere in the directory tree. If you are on an Intel-based system, the file will be either arch/i386/boot/zImage or arch/i386/boot/bzImage, according to the command line you passed to make. The bootable image is left down the directory tree, inside arch, because it is strictly architecture-specific. Compilation also produces a ``platform independent'' file, called vmlinux, which lives in the top-level source directory. ``vmlinux'' is named after the traditional Unix name vmunix, short for ``virtual-memory unix'' (the first versions didn't have virtual-memory support and were called just unix). While vmlinux is the real executable file that is run by the system processor (and thus you'll need it for debugging, if you ever enter that field), the booting process needs something different.

This is mainly a consequence of the 640 kB limit that Intel processors still experience at boot time. Most other platforms supported by Linux don't need such extra processing and are able to boot vmlinux without further massaging. The zImage file (zipped image) is a self-extracting compressed file; it is loaded into low memory (the first 640 kB) and then uncompressed to high memory after the system is brought into protected mode. A zImage file bigger than about half a megabyte cannot be booted, because it doesn't fit into low memory. The bzImage is a ``big zImage'' in that it can be bigger than half a megabyte: it gets loaded directly into high memory by using a special BIOS call, so a bzImage has no size limit, as long as it fits into the target computer's memory.

In order to boot your newly compiled kernel, you need to arrange for the BIOS to find it. This can be done in two ways: either by dumping the image to a floppy (``cat bzImage > /dev/fd0'' should work, but check man rdev if you have problems booting), or by handing the image to Lilo, the standard Linux loader (this is the preferred way, as it is much faster). Lilo is configured by the text file /etc/lilo.conf; it can load one of several kernels, if you configure them in lilo.conf at the same time, and the user can interactively choose which one to boot. Interaction can be password-protected if you are concerned about security, but this is outside our scope. To add a new image to /etc/lilo.conf you just need to add a ``stanza'' describing the new image to the file. At boot time you'll be able to interact with Lilo and type in the name of your chosen image.

One important thing to remember about Lilo is that it builds a table of physical disk blocks, and that very table is used to ask the BIOS to load data into memory. Therefore, whenever you overwrite a file used by Lilo, you need to re-run /sbin/lilo (and you need to be root to do that). One important thing to remember about custom kernels is that it's easy to make errors; as a precaution, you should always keep the previous working image in your lilo.conf, or your system won't be bootable any more without resorting to the original installation media. While you won't need to reinstall the system, rescue disks are much slower and less friendly than using the previous kernel you were running.
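Putting that together, installing a freshly built image through Lilo might look like the following sketch (the /zImage.2.2.10 name matches the sample lilo.conf below; use whatever name your own stanza points to):

    cp arch/i386/boot/bzImage /zImage.2.2.10   # copy the new image where lilo.conf expects it
    vi /etc/lilo.conf                          # add or adjust the ``image ='' stanza for it
    /sbin/lilo                                 # rebuild Lilo's block table (run as root)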

A sample /etc/lilo.conf is reproduced on this page. Yours will be different, but you don't need to deal with the details unless you are interested in them; the only needed step is copying the ``image'' stanza to create another boot choice.

    # LILO configuration file
    # global section: boot from the MBR
    # and delay 50 tenths of a second
    boot = /dev/hda
    delay = 50

    # First image, the custom one
    image = /zImage.2.2.10
            root = /dev/hda1
            label = Linux
            read-only

    # Then, the installation one
    image = /boot/vmlinux-2.0.36
            root = /dev/hda1
            label = debian
            read-only
