February 2010

Master of Computer Application (MCA) – Semester 2
MC0070 – Operating Systems with Unix
Assignment Set – 1

1. Describe the following operating system components:
A) Process Management B) Main Memory Management
C) File Management D) I/O System Management

Ans –

A) Process Management

The operating system manages many kinds of activities, ranging
from user programs to system programs such as the printer spooler, name
servers, file servers and so on. Each of these activities is encapsulated in a
process. A process includes the complete execution context (code, data,
PC, registers, OS resources in use, etc.).

It is important to note that a process is not a program. A process is
only one instance of a program in execution; many processes can be
running the same program. The five major activities of an operating
system with regard to process management are:

1. Creation and deletion of user and system processes.
2. Suspension and resumption of processes.
3. A mechanism for process synchronization.
4. A mechanism for process communication.
5. A mechanism for deadlock handling.

B) Main-Memory Management

Primary memory, or main memory, is a large array of words or bytes,
each with its own address. Main memory provides storage
that can be accessed directly by the CPU. That is to say, for a program to be
executed, it must be in main memory.

The major activities of an operating system with regard to memory
management are:
1. Keep track of which parts of memory are currently being used, and by
whom.
2. Decide which processes are loaded into memory when memory space
becomes available.
3. Allocate and de-allocate memory space as needed.

C) File Management

A file is a collection of related information defined by its creator.
Computers store files on disk (secondary storage), which provides
long-term storage. Some examples of storage media are magnetic tape,
magnetic disk and optical disk. Each of these media has its own
properties, such as speed, capacity, data transfer rate and access method.

A file system is normally organized into directories to ease use.
These directories may contain files and other directories.

The five major activities of an operating system with regard to file
management are:

1. The creation and deletion of files.
2. The creation and deletion of directories.
3. The support of primitives for manipulating files and directories.
4. The mapping of files onto secondary storage.
5. The backup of files on stable storage media.

D) I/O System Management

The I/O subsystem hides the peculiarities of specific hardware devices
from the user. Only the device driver knows the peculiarities of the
specific device to which it is assigned.

2. Describe the following:
A. Layered Approach B. Micro Kernels C. Virtual Machines

Ans –

A) Layered Approach

With proper hardware support, operating systems can be broken into
pieces that are smaller and more appropriate than those allowed by the
original MS-DOS or UNIX systems. The operating system can then retain
much greater control over the computer and over the applications that
make use of that computer. Implementers have more freedom in
changing the inner workings of the system and in creating modular
operating systems. Under the top-down approach, the overall
functionality and features are determined and then separated into
components. Information hiding is also important, because it leaves
programmers free to implement the low-level routines as they see fit,
provided that the external interface of the routine stays unchanged and
that the routine itself performs the advertised task.

A system can be made modular in many ways. One method is the
layered approach, in which the operating system is broken up into a
number of layers (levels). The bottom layer (layer 0) is the hardware; the
highest (layer N) is the user interface.

[Figure: Layered Architecture – layers from top to bottom: Users; File Systems; Inter-process Communication; I/O and Device Management; Virtual Memory; Primitive Process Management; Hardware]

An operating-system layer is an implementation of an abstract
object made up of data and the operations that can manipulate those
data. A typical operating-system layer (say, layer M) consists of data
structures and a set of routines that can be invoked by higher-level
layers. Layer M, in turn, can invoke operations on lower-level layers.

The main advantage of the layered approach is simplicity of
construction and debugging. The layers are selected so that each uses
functions (operations) and services of only lower-level layers. This
approach simplifies debugging and system verification. The first layer
can be debugged without any concern for the rest of the system,
because, by definition, it uses only the basic hardware (which is assumed
correct) to implement its functions. Once the first layer is debugged, its
correct functioning can be assumed while the second layer is debugged,
and so on. If an error is found during debugging of a particular layer, the
error must be on that layer, because the layers below it are already
debugged. Thus, the design and implementation of the system is
simplified.

Each layer is implemented with only those operations provided by
lower-level layers. A layer does not need to know how these operations
are implemented; it needs to know only what these operations do.
Hence, each layer hides the existence of certain data structures,
operations, and hardware from higher-level layers. The major difficulty
with the layered approach involves appropriately defining the various
layers. Because a layer can use only lower-level layers, careful planning is
necessary. For example, the device driver for the backing store (disk
space used by virtual-memory algorithms) must be at a lower level than
the memory-management routines, because memory management
requires the ability to use the backing store.

Other requirements may not be so obvious. The backing-store driver
would normally be above the CPU scheduler, because the driver may
need to wait for I/O and the CPU can be rescheduled during this time.
However, on a larger system, the CPU scheduler may have more
information about all the active processes than can fit in memory.
Therefore, this information may need to be swapped in and out of
memory, requiring the backing-store driver routine to be below the CPU
scheduler.

A final problem with layered implementations is that they tend to be
less efficient than other types. For instance, when a user program
executes an I/O operation, it executes a system call that is trapped to
the I/O layer, which calls the memory-management layer, which in turn
calls the CPU-scheduling layer, which is then passed to the hardware. At
each layer, the parameters may be modified; data may need to be
passed, and so on. Each layer adds overhead to the system call; the net
result is a system call that takes longer than does one on a non-layered
system. These limitations have caused a small backlash against layering
in recent years. Fewer layers with more functionality are being designed,
providing most of the advantages of modularized code while avoiding the
difficult problems of layer definition and interaction.

B) Micro-kernels

We have already seen that as UNIX expanded, the kernel became
large and difficult to manage. In the mid-1980s, researchers at Carnegie
Mellon University developed an operating system called Mach that
modularized the kernel using the microkernel approach. This method
structures the operating system by removing all nonessential
components from the kernel and implementing them as system and
user-level programs. The result is a smaller kernel. There is little consensus
regarding which services should remain in the kernel and which should
be implemented in user space. Typically, however, micro-kernels provide
minimal process and memory management, in addition to a
communication facility.

[Figure: Microkernel Architecture – user-level servers (Device Drivers, File Server, Client Process, …, Virtual Memory) sit above the Microkernel, which sits above the Hardware]

Microkernel Architecture
The main function of the microkernel is to provide a
communication facility between the client program and the various
services that are also running in user space. Communication is provided
by message passing. The client program and a service
never interact directly; rather, they communicate indirectly by
exchanging messages with the microkernel.

One benefit of the microkernel approach is ease of extending the
operating system. All new services are added to user space and
consequently do not require modification of the kernel. When the kernel
does have to be modified, the changes tend to be fewer, because the
microkernel is a smaller kernel. The resulting operating system is easier
to port from one hardware design to another. The microkernel also
provides more security and reliability, since most services run as
user rather than kernel processes; if a service fails, the rest of the
operating system remains untouched.

Several contemporary operating systems have used the
microkernel approach. Tru64 UNIX (formerly Digital UNIX) provides a UNIX
interface to the user, but it is implemented with a Mach kernel. The
Mach kernel maps UNIX system calls into messages to the appropriate
user-level services.

C) Virtual Machine

The layered approach of operating systems is taken to its logical
conclusion in the concept of the virtual machine. The fundamental idea
behind a virtual machine is to abstract the hardware of a single
computer (the CPU, memory, disk drives, network interface cards, and
so forth) into several different execution environments, thereby
creating the illusion that each separate execution environment is running
its own private computer. By using CPU scheduling and virtual memory
techniques, an operating system can create the illusion that a process
has its own processor with its own (virtual) memory. Normally a process
has additional features, such as system calls and a file system, which are
not provided by the hardware. The virtual machine approach does not
provide any such additional functionality but rather an interface that is
identical to the underlying bare hardware. Each process is provided with
a (virtual) copy of the underlying computer.

Hardware Virtual machine

The original meaning of virtual machine, sometimes called a
hardware virtual machine, is that of a number of discrete identical
execution environments on a single computer, each of which runs an
operating system (OS). This can allow applications written for one OS to
be executed on a machine which runs a different OS, or provide
execution "sandboxes" which provide a greater level of isolation between
processes than is achieved when running multiple processes on the
same instance of an OS. One use is to provide multiple users the illusion
of having an entire computer, one that is their "private" machine,
isolated from other users, all on a single physical machine. Another
advantage is that booting and restarting a virtual machine can be much
faster than with a physical machine, since it may be possible to skip
tasks such as hardware initialization.

Such software is now often referred to with the terms virtualization
and virtual servers. The host software which provides this capability is
often referred to as a virtual machine monitor or hypervisor.

Software virtualization can be done in three major ways:

· Emulation, full system simulation, or "full virtualization with dynamic
recompilation" — the virtual machine simulates the complete hardware,
allowing an unmodified OS for a completely different CPU to be run.

· Paravirtualization — the virtual machine does not simulate hardware
but instead offers a special API that requires OS modifications. An
example of this is XenSource's XenEnterprise (www.xensource.com).

· Native virtualization and "full virtualization" — the virtual machine
simulates only enough hardware to allow an unmodified OS to be run
in isolation, but the guest OS must be designed for the same type of
CPU. The term native virtualization is also sometimes used to indicate
that hardware assistance through Virtualization Technology is used.

Application virtual machine

Another meaning of virtual machine is a piece of computer
software that isolates the application being used by the user from the
computer. Because versions of the virtual machine are written for
various computer platforms, any application written for the virtual
machine can run on any of those platforms, instead of having to
produce a separate version of the application for each computer and
operating system. The application is run on the computer using an
interpreter or Just-In-Time compilation. One of the best-known examples
of an application virtual machine is Sun Microsystems' Java Virtual
Machine.

3. Describe the concept of Paging and Segmentation with respect to
Windows Operating System.

Ans –

Both paging and segmentation have their strengths, so they are
often combined to benefit from the advantages of both. In a combined
paging/segmentation system, a user program is broken into a number of
segments, at the discretion of the programmer, each of which is in turn
broken up into a number of fixed-size pages. From the programmer's point
of view, a logical address still consists of a segment number and a segment
offset, as in pure segmentation, while from the system's point of
view the segment offset is viewed as a page number and a page offset
for a page within the specified segment.

Address translation in a combined segmentation/paging system

The above figure shows the address translation in the combined
scheme. Each process is associated with a segment table and a number
of page tables, one for each segment. For a running process, a register
holds the starting address of the segment table for the process.
Presented with a virtual address, the processor uses the segment
number portion to index into the segment table to find the page table for
that segment. Then the page number portion of the virtual address is
used to index the page table and look up the corresponding frame
number. It is then combined with the offset portion of the virtual address
to produce the desired real address.
Figure 1(c) suggests the segment table entry and page table entry
formats.

4. Describe the following with respect to UNIX operating System:
A) Unix Architecture
B) Process Control
C) Environmental Variables and Shells
D) Unix Operating System Layers

Ans –

A) UNIX Architecture
System Architecture

At the center of the UNIX onion is a program called the kernel. It is
absolutely crucial to the operation of the UNIX system. The kernel
provides the essential services that make up the heart of UNIX systems;
it allocates memory, keeps track of the physical location of files on the
computer’s hard disks, loads and executes binary programs such as
shells, and schedules the task swapping without which UNIX systems
would be incapable of doing more than one thing at a time.

The kernel accomplishes all these tasks by providing an interface
between the other programs running under its control and the physical
hardware of the computer; this interface, the system call interface,
effectively insulates the other programs on the UNIX system from the
complexities of the computer. For example, when a running program
needs access to a file, it cannot simply open the file; instead it issues a
system call which asks the kernel to open the file. The kernel takes over
and handles the request, then notifies the program whether the request
succeeded or failed. To read data in from the file takes another system
call; the kernel determines whether or not the request is valid, and if it
is, the kernel reads the required block of data and passes it back to the
program. Unlike DOS (and some other operating systems), UNIX system
programs do not have access to the physical hardware of the computer.
All they see are the kernel services, provided by the system call
interface.

Although there is a well-defined technical and commercial
standard for what constitutes "Unix," in common usage Unix refers to a
set of operating systems, from private vendors and in various
open-licensed versions, that act similarly from the view of users and
administrators. Within any Unix version, there are several different
"shells" which affect how commands are interpreted. The default here is
that you are using Solaris (developed by Sun Microsystems primarily for
use on hardware sold by Sun) within the "c-shell." Most of the basic
commands here will work the same in other Unix variants and shells,
including Linux and the Mac OS X command-line environment.

All Unix commands and file references are case sensitive: “This” is
a different filename than “this” because of the capitalization difference.
All Unix commands are lowercase and from two to nine characters long.
Many commands have options that are invoked by a hyphen followed by
one or more letters. Multiple options can often be requested by adding
multiple letters to a single hyphen. For example, ls -al combines the -a
and -l options.

A standard Unix system provides commands username, passwd,
chsh, and chgrp to change usernames, passwords, shell environments,
and default groups.

Wildcards: * is a "wildcard" character that can refer to any
character string, and ? is a wildcard character that can refer to any single
character. E.g., mv *.f95 code would move every Fortran 95 program file
in the current directory into a subdirectory called code.

Filenames: in our version of Unix, they may be up to 255 characters,
and they may include any character except the forward slash /. (Avoid
using backslashes, blank spaces, or nonprinting characters in filenames –
they are allowed but will cause problems for you.)

A pathname beginning with / is an absolute path from the top of
the system tree. A pathname not beginning with / is a relative path down
from the current working directory.
Directory shortcuts include: ˜ as a replacement for your home
directory, ˜username as a shorthand for username’s home directory, ..
(two periods) for the subdirectory one level up from the current
directory, and . (one period) for the current directory.

B) Process Control

When you type a command at the Unix prompt, press Return, and
wait until the prompt comes back indicating the previous command is
done, you have run a foreground process. You can only run one
foreground process at a time from one window, but Unix allows you to
run more than one process at once, some of which are in the
background.

To start a long program running (one that may take several minutes
to complete, for example) put it in the background by adding a & to the
command.

C) Environment Variables and Shells

Environment variables are an advanced topic that should be avoided in
an introductory course. A set of environment variables has been created
for you in your setup procedures that enables Unix to find the software
and libraries you need to get through this course. Only if something goes
wrong may you be asked to adjust them by hand.

How environment variables are set, used, and handled varies with the
shell. If you intend to use Unix from the command line extensively, you will
probably wish to go to one of the advanced shells that has features such
as command-line editing, history access via the arrow keys, and tab-
completion of commands.

D) UNIX Operating System Layers

The UNIX system is actually more than strictly an operating system. UNIX
includes the traditional operating system components. In addition, a
standard UNIX system includes a set of libraries and a set of applications.
Figure 1.2 shows the components and layers of UNIX. Sitting above the
hardware are two components: the file system and process control. Next
is the set of libraries. On top are the applications. The user has access to
the libraries and to the applications. These two components are what
many users think of as UNIX, because together they constitute the UNIX
interface.

The part of UNIX that manages the hardware and the executing
processes is called the kernel. In managing all hardware devices, the
UNIX system views each device as a file (called a device file). This allows
the same simple method of reading and writing files to be used to access
each hardware device. The file manages read and write access to user
data and to devices, such as printers, attached to the system. It
implements security controls to protect the safety and privacy of
information. In executing processes, the UNIX system allocates resources
(including use of the CPU) and mediates accesses to the hardware.

One important advantage that results from the UNIX standard
interface is application portability. Application portability is the ability of
a single application to be executed on various types of computer
hardware without being modified. This can be achieved if the application
uses the UNIX interface to manage its hardware needs. UNIX’s layered
design insulates the application from the different types of hardware.
This allows the software developer to support the single application on
multiple hardware types with minimal effort. The application writer has
lower development costs and a larger potential customer base. Users not
only have more applications available, but can rely on being able to use
the same applications on different computer hardware.

UNIX goes beyond the traditional operating system by providing a
standard set of libraries and applications that developers and users can
use. This standard interface allows application portability and facilitates
user familiarity with the interface.

5. Describe the following:
A) Unix File System Types
B) Mounting and Unmounting File Systems
C) Boot Procedure
Ans –

A) Unix File System Types

Initially, there were only two types of file systems – the ones from AT&T
and Berkeley. Following are some file system types:

s5

Before SVR4, this was the only file system used by System V;
today it is offered by SVR4 under this name for backward compatibility only.
This file system uses a logical block size of 512 or 1024 bytes and a
single superblock. It also can't handle filenames longer than 14
characters.

ufs

This is how the Berkeley fast file system is known to SVR4, and it has
been adopted by most UNIX systems. Because the block size here can go up to
64 KB, performance of this file system is considerably better than s5. It
uses multiple superblocks, with each cylinder group storing a superblock.
Unlike s5, ufs supports 255-character filenames, symbolic links and disk
quotas.

Ext2

This is the standard file system of Linux. It uses a block size of
1024 bytes and, like ufs, uses multiple superblocks and symbolic links.

Iso9660 or hsfs

This is the standard file system used by CD-ROMs and uses DOS-style
8+3 filenames. Since UNIX uses longer filenames, hsfs also
provides Rock Ridge extensions to accommodate them.

msdos or pcfs

Most UNIX systems also support DOS file systems. You can create
this file system on a floppy diskette and transfer files to it for use on a
Windows system. Linux and Solaris can also directly access a DOS file
system on the hard disk.

swap

This file system type is used for swap space – the disk area the kernel
uses for paging and swapping – and is not meant to hold user files.

bfs – The boot file system

This is used by SVR4 to host the boot programs and the UNIX
kernel. Users are not meant to use this file system.

proc or procfs

This can be considered a pseudo-file system maintained in
memory. It stores data of each running process and appears to contain
files, but actually contains none. Users can obtain most process
information, including PIDs, directly from here.

fdisk

Creating Partitions

Both Linux and SCO UNIX allow a user to have multiple operating
systems on Intel machines. It's no wonder then that both offer the
Windows-type fdisk command to create, delete and activate partitions.
fdisk in Linux, however, operates differently from Windows. The fdisk m
command shows you all its internal commands, of which the following
subset should serve our purpose:

Command  Action
a        toggle a bootable flag
d        delete a partition
l        list known partition types
m        print this menu
n        add a new partition
p        print the partition table
q        quit without saving
w        write table to disk and exit

mkfs: Creating File Systems

Now that you have created a partition, you need to create a file
system on this partition to make it usable. mkfs is used to build a Linux
file system on a device, usually a hard disk partition. The exit code
returned by mkfs is 0 on success and 1 on failure.

The file-system-specific builder is searched for in a number of
directories, such as /sbin, /sbin/fs, /sbin/fs.d, /etc/fs and /etc (the precise
list is defined at compile time but at least contains /sbin and /sbin/fs),
and finally in the directories listed in the PATH environment variable.

OPTIONS

-V: Produce verbose output, including all file system-specific commands
that are executed. This is really only useful for testing.

-t fstype: Specifies the type of file system to be built. If not specified,
the default file system type (currently ext2) is used.

fs-options: File system-specific options to be passed to the real file
system builder. Although not guaranteed, the following options are
supported by most file system builders.

-c: Check the device for bad blocks before building the file system.

-l filename: Read the bad blocks list from filename.

-v: Produce verbose output.

B) Mounting and Un-mounting File Systems

The file system is best visualized as a tree, rooted, as it were,
at /. /dev, /usr, and the other directories in the root directory are
branches, which may have their own branches, such as /usr/local, and so
on.

The fstab File

During the boot process, file systems listed in /etc/fstab are
automatically mounted (unless they are listed with the noauto option).
The /etc/fstab file contains a list of lines of the following format:

device /mount-point fstype options dumpfreq passno

device: A device name (which should exist).

mount-point: A directory (which should exist) on which to mount the
file system.

fstype: The file system type to pass to mount.

options: Either rw for read-write file systems, or ro for read-only file
systems, followed by any other options that may be needed. A common
option is noauto for file systems not normally mounted during the boot
sequence.

dumpfreq: This is used by dump to determine which file systems
require dumping.

passno: This determines the order in which file systems should be
checked. File systems that should be skipped should have their passno
set to zero. The root file system (which needs to be checked before
everything else) should have its passno set to one, and other file
systems' passno should be set to values greater than one.

The mount Command

The basic format of mount command:

# mount device mountpoint

Options:

-a: Mount all the file systems listed in /etc/fstab, except those marked as
noauto, excluded by the -t flag, or those that are already mounted.

-f: Force the mount of an unclean file system (dangerous), or forces the
revocation of write access when downgrading a file system’s mount
status from read-write to read-only.

-r: Mount the file system read-only.

-t fstype: Mount the given file system as the given file system type

-u: Update mount options on the file system.

-v: Be verbose.

-w: Mount the file system read-write.

Umount: Un-mounting File Systems

Unmounting is achieved with the umount command, which requires either
the file system name or the mount point as argument.

umount /oracle
umount /dev/hda3
umount /dev/dsk/c0t3d0s5

Unmounting a file system is not possible if you have a file open in it.
Further, just as you can't remove a directory unless you are placed in a
directory above it, you can't unmount a file system unless you are placed
above it. All forms take -f to force unmounting, and -v for verbosity. -a
and -A are used to unmount all mounted file systems; -A, however, does
not attempt to unmount the root file system.

C) The Boot Procedure

Bootstrapping is the process of starting up a computer from a halted
or powered-down condition. When the computer is switched on, it
activates the memory-resident code which resides on the CPU board. The
normal facilities of the operating system are not available at this stage
and the computer must 'pull itself up by its own boot-straps', so to speak.
This procedure is therefore often referred to as bootstrapping, also
known as a cold boot. The bootstrap procedure is very hardware
dependent; it typically consists of the following steps:

1. The memory-resident code runs a self-test.
2. It probes the bus for the boot device.
3. It reads the boot program from the boot device.
4. The boot program reads in the kernel and passes control to it.
5. The kernel identifies and configures the devices.
6. It initializes the system and starts the system processes.
7. It brings up the system in single-user mode (if necessary).
8. It runs the appropriate startup scripts.
9. It brings up the system for multi-user operation.

6. Explain the following with respect to Interprocess communication in
Unix:
A) Communication via pipes B) Named Pipes
C) Message Queues D) Message Structure

Ans –

A) Communications Via Pipes

Once we get our processes to run, we suddenly realize that they
cannot communicate. One of the mechanisms that allow related
processes to communicate is the pipe, or the anonymous pipe.
A pipe is a one-way mechanism that allows two related processes (i.e.
one is an ancestor of the other) to send a byte stream from one of them
to the other one.

If we want two-way communication, we'll need two pipes. The
system assures us of one thing: the order in which data is written to the
pipe is the same order as that in which data is read from the pipe. The
system also assures that data won't get lost in the middle, unless one of
the processes (the sender or the receiver) exits prematurely.

B) Named Pipe

A named pipe (also called a named FIFO, or just FIFO) is a pipe whose
access point is a file kept on the file system.

By opening this file for reading, a process gets access to the reading end
of the pipe.

By opening the file for writing, the process gets access to the writing end
of the pipe.

If a process opens the file for reading, it is blocked until another process
opens the file for writing. The same goes the other way around.

Creating A Named Pipe

A named pipe may be created either via the 'mknod' command (or its
newer replacement, 'mkfifo'), or via the mknod() system call.

To create a named pipe with the file named 'prog_pipe', we can
use the following command:

mknod prog_pipe p

We could also provide a full path to where we want the named pipe
created. If we then type ‘ls -l prog_pipe’, we will see something like this:

prw-rw-r-- 1 user1 0 Nov 7 01:59 prog_pipe

The 'p' in the first column denotes this is a named pipe. Just like any
file in the system, it has access permissions that define which users may
open the named pipe, and whether for reading, writing or both.

C) Message Queues

A message queue is a queue onto which messages can be placed. A
message is composed of a message type (which is a number) and
message data.
A message queue can be either private or public. If it is private, it can
be accessed only by its creating process or child processes of that
creator. If it's public, it can be accessed by any process that knows the
queue's key.

Several processes may write messages onto a message queue, or
read messages from the queue. Messages may be read by type, and thus
do not have to be read in FIFO order, as is the case with pipes.

Creating A Message Queue – msgget()

In order to use a message queue, it has to be created first. The
msgget() system call is used to do just that. This system call accepts two
parameters – a queue key, and flags. The key may be one of:

IPC_PRIVATE – used to create a private message queue.

a positive integer – used to create (or access) a publicly-accessible
message queue.

The second parameter contains flags that control how the system
call is to be processed. It may contain flags like IPC_CREAT or IPC_EXCL
and it also contains access permission bits.

Example of code that creates a private message queue:

#include <stdio.h>     /* standard I/O routines. */
#include <stdlib.h>    /* exit(). */
#include <sys/types.h> /* standard system data types. */
#include <sys/ipc.h>   /* common System V IPC structures. */
#include <sys/msg.h>   /* message-queue specific functions. */

int queue_id = msgget(IPC_PRIVATE, 0600); /* '0600' is an octal number. */
if (queue_id == -1) {
    perror("msgget");
    exit(1);
}

1. The system call returns an integer identifying the created queue. Later
on we can use this identifier in order to access the queue for reading and
writing messages.

2. The queue created belongs to the user whose process created it. Thus,
since the permission bits are '0600', only processes run on behalf of this
user will have access to the queue.

D) The Message Structure – struct msgbuf

Before we get to writing messages to the queue or reading messages from
it, we need to see what a message looks like. The system defines a
structure named ‘msgbuf’ for this purpose. Here is how it is defined:

struct msgbuf {
    long mtype;    /* message type, a positive number (cannot be zero). */
    char mtext[1]; /* message body array. usually larger than one byte. */
};
Let's create a "hello world" message:

/* first, define the message string. */
char* msg_text = "hello world";

/* allocate a message with enough space for the length of the string */
/* plus one extra byte for the terminating null character. */
struct msgbuf* msg =
    (struct msgbuf*)malloc(sizeof(struct msgbuf) + strlen(msg_text));

/* set the message type. for example, set it to 1. */
msg->mtype = 1;

/* finally, place the "hello world" string inside the message. */
strcpy(msg->mtext, msg_text);

Writing Messages Onto A Queue – msgsnd()

Once we have created the message queue and a message structure, we can
place it on the message queue using the msgsnd() system call. This system
call copies our message structure and places it as the last message on the
queue. It takes the following parameters:

int msqid – id of the message queue, as returned from the msgget() call.

struct msgbuf* msg – a pointer to a properly initialized message
structure, such as the one we prepared in the previous section.

int msgsz – the size of the data part (mtext) of the message, in bytes.

int msgflg – flags specifying how to send the message. May be a logical
"or" of the following:

IPC_NOWAIT – if the message cannot be sent immediately without blocking
the process, return -1 and set errno to EAGAIN.

To set no flags, use the value 0.
In order to send our message on the queue, we'll use msgsnd() like this:

int rc = msgsnd(queue_id, msg, strlen(msg_text)+1, 0);
if (rc == -1) {
    perror("msgsnd");
    exit(1);
}

Reading A Message From The Queue – msgrcv()

We may use the msgrcv() system call in order to read a message from a
message queue. This system call accepts the following list of parameters:

int msqid – id of the queue, as returned from msgget().

struct msgbuf* msg – a pointer to a pre-allocated msgbuf structure. It
should generally be large enough to contain a message with some arbitrary
data (see more below).

int msgsz – size of largest message text we wish to receive. Must NOT be
larger than the amount of space we allocated for the message text in
‘msg’.

int msgtyp – the type of message we wish to read. May be one of:

0 – The first message on the queue will be returned.

a positive integer – the first message on the queue whose type (mtype)
equals this integer (unless a certain flag is set in msgflg, see below).

a negative integer – the first message on the queue whose type is less
than or equal to the absolute value of this integer.

int msgflg – a logical ‘or’ combination of any of the following flags:

IPC_NOWAIT – if there is no message on the queue matching what we want to
read, return -1 and set errno to ENOMSG.

MSG_EXCEPT – if the message type parameter is a positive integer, then
return the first message whose type is NOT equal to the given integer.

Let's now try to read our message from the message queue:

/* a message structure large enough to read our "hello world". */
struct msgbuf* recv_msg =
    (struct msgbuf*)malloc(sizeof(struct msgbuf) + strlen("hello world"));

/* use msgrcv() to read the message. we agree to get any type, and thus */
/* use 0 in the message type parameter, and use no flags (0). */
int rc = msgrcv(queue_id, recv_msg, strlen("hello world")+1, 0, 0);
if (rc == -1) {
    perror("msgrcv");
    exit(1);
}
