2008-2009
Semester: VI
Practical Lists
3) Study and implement memory management algorithms (Best Fit and First Fit)
THEORY:-
• The process that requests the CPU first is allocated the CPU first.
• The FCFS policy is easily implemented with a FIFO queue.
• When a process enters the ready queue, its PCB is linked onto the tail of the queue.
• The average waiting time under the FCFS policy is often quite long.
• The FCFS scheduling algorithm is NON-PREEMPTIVE.
• Once the CPU has been allocated to a process, that process keeps the CPU until it
releases it, either by terminating or by requesting I/O.
• The FCFS algorithm is particularly troublesome for time-sharing systems.
• There is a convoy effect, as all the other processes wait for one big process to get off
the CPU.
EXAMPLE:-
PROCESS   BURST TIME
P1        24
P2        03
P3        03
GANTT CHART:-
|  P1  |  P2  |  P3  |
0      24     27     30
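The waiting times in this example (P1 = 0, P2 = 24, P3 = 27; average 17) can be reproduced with a short simulation. The function below is our own illustration, not part of the practical's required code, and it assumes all processes arrive at time 0:

```python
def fcfs(bursts):
    """FCFS: serve processes strictly in arrival order.
    Returns (waiting_times, turnaround_times), assuming all arrive at t=0."""
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)        # time spent waiting before first run
        clock += burst               # the process runs to completion
        turnaround.append(clock)     # completion time = turnaround time here
    return waiting, turnaround

waiting, turnaround = fcfs([24, 3, 3])
print(waiting)              # [0, 24, 27]
print(sum(waiting) / 3)     # average waiting time = 17.0
```

Running the same function with the burst order reversed ([3, 3, 24]) gives an average waiting time of 3, which is the convoy effect in miniature.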
• When the CPU is available, it is assigned to the process that has the smallest next
CPU burst.
• FCFS scheduling is used to break ties.
• The SJF scheduling algorithm is optimal, since it gives the minimum average waiting
time for a given set of processes.
• The difficulty with SJF is knowing the length of the next CPU request.
• The SJF algorithm may be either PREEMPTIVE or NON-PREEMPTIVE.
• When a new process arrives at the ready queue while a previous process is
executing, the new process may have a shorter next CPU burst than what is left of
the currently executing process's burst.
• PREEMPTIVE SJF scheduling is also called shortest-remaining-time-first
scheduling.
• Although the SJF algorithm is optimal, it cannot be implemented exactly at the level of
short-term CPU scheduling, since the length of the next burst is not known.
• SJF scheduling is used frequently in long-term scheduling, where the time limit that
the user specifies when submitting the job can be used as the length of the process.
• Since there is no way to know the length of the next CPU burst, we try to predict its
value.
• We expect that the next CPU burst will be similar in length to the previous ones.
• The next CPU burst is generally predicted as an exponential average of the measured
lengths of previous CPU bursts: T(n+1) = α·t(n) + (1−α)·T(n), where t(n) is the length
of the nth burst and T(n) is its predicted value. We then pick the process with the
shortest predicted CPU burst.
• The parameter α controls the relative weight of recent and past history in our
prediction. If α = 0, then T(n+1) = T(n) and recent history has no effect; if α = 1,
only the most recent CPU burst matters; if α = 1/2, recent history and past history
are equally weighted.
EXAMPLE:-
PROCESS   BURST TIME
P1        06
P2        08
P3        07
P4        03
GANTT CHART:-
|  P4  |  P1  |  P3  |  P2  |
0      3      9      16     24
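The schedule above (waiting times P1 = 3, P2 = 16, P3 = 9, P4 = 0; average 7) can be checked with a minimal non-preemptive SJF simulation. The function is our sketch and assumes all processes arrive at time 0, with ties broken in FCFS order:

```python
def sjf(bursts):
    """Non-preemptive SJF with all arrivals at t=0.
    Returns the waiting time of each process, indexed like the input.
    sorted() is stable, so equal bursts are served in FCFS order."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waiting, clock = [0] * len(bursts), 0
    for i in order:
        waiting[i] = clock       # process i starts now
        clock += bursts[i]       # and runs to completion
    return waiting

w = sjf([6, 8, 7, 3])    # P1..P4 from the example
print(w)                 # [3, 16, 9, 0]
print(sum(w) / 4)        # average waiting time = 7.0
```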
• A priority is associated with each process, and the CPU is allocated to the process
with the highest priority. Equal-priority processes are scheduled in FCFS order.
• Priorities are generally some fixed range of numbers, such as 0 to 7 or 0 to 4095.
Some systems use low numbers to represent low priority; others use low numbers
to represent high priority.
• Priorities can be defined internally: for example, time limits, memory requirements,
the number of open files, and the ratio of average I/O burst to average CPU burst
have been used to compute priorities.
• Priorities can be defined externally, by criteria such as the importance of the
process, the type and amount of funds being paid for computer use, the department
sponsoring the work, political factors, etc.
• Priority scheduling can be either PREEMPTIVE or NON-PREEMPTIVE.
• A PREEMPTIVE priority scheduling algorithm will preempt the CPU if the
priority of the newly arrived process is higher than the priority of the currently
running process.
• A NON-PREEMPTIVE priority scheduling algorithm will simply put the new
process with a higher priority than that of the currently running process at the head
of the ready queue.
• A major problem with priority scheduling algorithms is indefinite blocking, or
starvation.
• A process that is ready to run but waiting for the CPU can be considered blocked.
• A priority scheduling algorithm can leave some low-priority process waiting
indefinitely for the CPU. In a heavily loaded computer system, a steady stream of
higher-priority processes can prevent a low-priority process from ever getting the
CPU.
• A solution to the problem of indefinite blockage of low-priority processes is aging.
• Aging is a technique of gradually increasing the priority of processes that wait in
the system for a long time.
EXAMPLE:- (a lower number means higher priority)
PROCESS   BURST TIME   PRIORITY
P1        10           03
P2        01           01
P3        02           04
P4        01           05
P5        05           02
GANTT CHART:-
|  P2  |  P5  |  P1  |  P3  |  P4  |
0      1      6      16     18     19
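The schedule above (waiting times P1 = 6, P2 = 0, P3 = 16, P4 = 18, P5 = 1; average 8.2) can be verified with a small simulation. This sketch is ours: it assumes non-preemptive scheduling, a lower number meaning higher priority, and all processes arriving at time 0:

```python
def priority_schedule(bursts, priorities):
    """Non-preemptive priority scheduling; lower number = higher priority.
    Returns the waiting time of each process, indexed like the input."""
    order = sorted(range(len(bursts)), key=lambda i: priorities[i])
    waiting, clock = [0] * len(bursts), 0
    for i in order:
        waiting[i] = clock       # process i gets the CPU now
        clock += bursts[i]
    return waiting

w = priority_schedule([10, 1, 2, 1, 5], [3, 1, 4, 5, 2])   # P1..P5
print(w)              # [6, 0, 16, 18, 1]
print(sum(w) / 5)     # average waiting time = 8.2
```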
EXAMPLE (ROUND ROBIN, TIME QUANTUM = 4):-
PROCESS   BURST TIME
P1        24
P2        03
P3        03
GANTT CHART:-
|  P1  |  P2  |  P3  |  P1  |  P1  |  P1  |  P1  |  P1  |
0      4      7      10     14     18     22     26     30
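With a time quantum of 4 (as the Gantt chart implies), a simple round-robin simulation reproduces the completion times. The function below is our illustration and assumes all processes arrive at time 0:

```python
from collections import deque

def round_robin(bursts, quantum):
    """RR scheduling with all arrivals at t=0.
    Returns the completion time of each process."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queue, clock = deque(range(len(bursts))), 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])   # run for one quantum at most
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)                # unfinished: back of the queue
        else:
            finish[i] = clock
    return finish

print(round_robin([24, 3, 3], 4))   # [30, 7, 10]
```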
THEORY:-
In a computer operating system that utilizes paging for virtual memory management,
page replacement algorithms decide which memory pages to page out
(swap out, write to disk) when a page of memory needs to be allocated. Paging happens
when a page fault occurs and a free page cannot be used to satisfy the allocation, either
because there are none, or because the number of free pages is lower than some
threshold.
When the page that was selected for replacement and paged out is referenced again it has
to be paged in (read in from disk), and this involves waiting for I/O completion. This
determines the quality of the page replacement algorithm: the less time waiting for page-
ins, the better the algorithm. A page replacement algorithm looks at the limited
information about accesses to the pages provided by hardware, and tries to guess which
pages should be replaced to minimize the total number of page misses, while balancing
this with the costs (primary storage and processor time) of the algorithm itself.
When a process incurs a page fault, a local page replacement algorithm selects for
replacement some page that belongs to that same process (or a group of processes sharing
a memory partition). A global replacement algorithm is free to select any page in
memory.
Local page replacement assumes some form of memory partitioning that determines how
many pages are to be assigned to a given process or a group of processes. Most popular
forms of partitioning are fixed partitioning and balanced set algorithms based on the
working set model. The advantage of local page replacement is its scalability: each
process can handle its page faults independently without contending for some shared
global data structure.
FIRST-IN, FIRST-OUT
The first-in, first-out (FIFO) page replacement algorithm is a low-overhead algorithm that
requires little book-keeping on the part of the operating system. The idea is obvious from
the name - the operating system keeps track of all the pages in memory in a queue, with
the most recent arrival at the back, and the earliest arrival in front. When a page needs to
be replaced, the page at the front of the queue (the oldest page) is selected. While FIFO is
cheap and intuitive, it performs poorly in practical application. Thus, it is rarely used in
its unmodified form. This algorithm suffers from Belady's anomaly: it is possible to have
more page faults when increasing the number of page frames while using the FIFO
method of frame management.
FIFO page replacement algorithm is used by the VAX/VMS operating system, with some
modifications. Partial second chance is provided by skipping a limited number of entries
with valid translation table references, and additionally, pages are displaced from process
working set to a system wide pool from which they can be recovered if not already re-
used.
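A FIFO page-fault counter is easy to sketch, and on the classic reference string it demonstrates Belady's anomaly directly. The function name and reference string are our choices for illustration:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with a fixed frame count."""
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:          # all frames full:
                memory.discard(queue.popleft())  # evict the oldest page
            memory.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]    # classic Belady string
print(fifo_faults(refs, 3))   # 9
print(fifo_faults(refs, 4))   # 10 -- more frames, yet more faults
```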
LEAST RECENTLY USED
The least recently used (LRU) page replacement algorithm works on the idea that pages
that have been most heavily used in the past few
instructions are most likely to be used heavily in the next few instructions too. While
LRU can provide near-optimal performance in theory (almost as good as Adaptive
Replacement Cache), it is rather expensive to implement in practice. There are a few
implementation methods for this algorithm that try to reduce the cost yet keep as much of
the performance as possible.
The most expensive method is the linked list method, which uses a linked list containing
all the pages in memory. At the back of this list is the least recently used page, and at the
front is the most recently used page. The cost of this implementation lies in the fact that
items in the list will have to be moved about every memory reference, which is a very
time-consuming process.
Another method that requires hardware support is as follows: suppose the hardware has a
64-bit counter that is incremented at every instruction. Whenever a page is accessed, it
gains a value equal to the counter at the time of page access. Whenever a page needs to
be replaced, the operating system selects the page with the lowest counter and swaps it
out. With present hardware, this is not feasible because the required hardware counters do
not exist.
Because of implementation costs, one may consider algorithms (like those that follow)
that are similar to LRU, but which offer cheaper implementations.
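A cheap stand-in for the linked-list implementation described above is a Python list kept in recency order. This is our sketch for counting faults, not a production design (a real kernel cannot afford a list scan per reference, which is exactly the cost the text mentions):

```python
def lru_faults(refs, frames):
    """Count page faults under LRU replacement.
    memory[-1] is the most recently used page, memory[0] the least."""
    memory, faults = [], 0
    for page in refs:
        if page in memory:
            memory.remove(page)      # refresh: move to the MRU end below
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)        # evict the least recently used page
        memory.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))   # 10
print(lru_faults(refs, 4))   # 8 -- LRU does not exhibit Belady's anomaly
```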
On the other hand, LRU's weakness is that its performance tends to degenerate under
many quite common reference patterns. For example, if there are N pages in the LRU
pool, an application executing a loop over array of N + 1 pages will cause a page fault on
each and every access. As loops over large arrays are common, much effort has been put
into modifying LRU to work better in such situations. Many of the proposed LRU
modifications try to detect looping reference patterns and to switch into suitable
replacement algorithm, like Most Recently Used (MRU).
THEORY:-
Memory Manager
In an environment that supports dynamic memory allocation, the memory manager must
keep a record of the usage of each allocatable block of memory. This record could be
kept by using almost any data structure that implements linked lists. An obvious
implementation is to define a free list of block descriptors, with each descriptor containing
a pointer to the next descriptor, a pointer to the block, and the length of the block. The
memory manager keeps a free list pointer and inserts entries into the list in some order
conducive to its allocation strategy. A number of strategies are used to allocate space to
the processes that are competing for memory.
o Best Fit
The allocator places a process in the smallest block of unallocated memory in
which it will fit.
Problems:
It requires an expensive search of the entire free list to find the best hole.
More importantly, it leads to the creation of lots of little holes that are not
big enough to satisfy any requests. This situation is called fragmentation,
and is a problem for all memory-management strategies, although it is
particularly bad for best-fit.
Solution: One way to avoid making little holes is to give the client a bigger block
than it asked for. For example, we might round all requests up to the next larger
multiple of 64 bytes. That doesn't make the fragmentation go away, it just hides
it.
o Worst Fit
The memory manager places the process in the largest block of unallocated memory
available. The idea is that this placement will create the largest hole after the
allocations, thus increasing the possibility that, compared to best fit, another
process can use the hole created as a result of external fragmentation.
o First Fit
Another strategy is first fit, which simply scans the free list until a large enough
hole is found. Despite the name, first-fit is generally better than best-fit because it
leads to less fragmentation.
Problems:
Small holes tend to accumulate near the beginning of the free list,
making the memory allocator search farther and farther each time.
Solution:
Next Fit
o Next Fit
The first fit approach tends to fragment the blocks near the beginning of the list
without considering blocks further down the list. Next fit is a variant of the first-fit
strategy. The problem of small holes accumulating is solved by the next-fit
algorithm, which starts each search where the last one left off, wrapping around
to the beginning when the end of the list is reached (a form of one-way elevator).
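The best-fit and first-fit strategies of this practical can be sketched over a free list represented simply as a list of hole sizes. The hole sizes below are illustrative values of our own:

```python
def first_fit(holes, request):
    """Return the index of the first hole large enough, or None."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Return the index of the smallest hole large enough, or None.
    Note the full scan of the free list -- the cost mentioned above."""
    candidates = [i for i, size in enumerate(holes) if size >= request]
    return min(candidates, key=lambda i: holes[i]) if candidates else None

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))   # 1 (the 500-unit hole)
print(best_fit(holes, 212))    # 3 (the 300-unit hole)
```

A full allocator would then split the chosen hole, keeping the remainder on the free list; that remainder is where the little leftover holes come from.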
THEORY:-
DEKKER'S ALGORITHM
Pseudocode

f0 := false
f1 := false
turn := 0 // or 1

p0:
    f0 := true
    while f1 {
        if turn ≠ 0 {
            f0 := false
            while turn ≠ 0 { }
            f0 := true
        }
    }
    // critical section
    turn := 1
    f0 := false

p1: (symmetric: exchange f0 with f1, and 0 with 1)
    f1 := true
    while f0 {
        if turn ≠ 1 {
            f1 := false
            while turn ≠ 1 { }
            f1 := true
        }
    }
    // critical section
    turn := 0
    f1 := false
Dekker's algorithm guarantees mutual exclusion, freedom from deadlock, and freedom
from starvation. Let us see why the last property holds. Suppose p0 is stuck inside the
"while f1" loop forever. There is freedom from deadlock, so eventually p1 will proceed to
its critical section and set turn = 0 (and the value of turn will remain unchanged as long as
p0 doesn't progress). Eventually p0 will break out of the inner "while turn ≠ 0" loop (if it
was ever stuck on it). After that it will set f0 := true and settle down to waiting for f1 to
become false (since turn = 0, it will never do the actions in the while loop). The next time
p1 tries to enter its critical section, it will be forced to execute the actions in its "while f0"
loop. In particular, it will eventually set f1 = false and get stuck in the "while turn ≠ 1"
loop (since turn remains 0). The next time control passes to p0, it will exit the "while f1"
loop and enter its critical section.
If the algorithm were modified by performing the actions in the "while f1" loop without
checking if turn = 0, then there is a possibility of starvation. Thus all the steps in the
algorithm are necessary.
One advantage of this algorithm is that it doesn't require special Test-and-set (atomic
read/modify/write) instructions and is therefore highly portable between languages and
machine architectures. One disadvantage is that it is limited to two processes and makes
use of Busy waiting instead of process suspension. (The use of busy waiting suggests that
processes should spend a minimum of time inside the critical section.)
Modern operating systems provide mutual exclusion primitives that are more general and
flexible than Dekker's algorithm. However, it should be noted that in the absence of
actual contention between the two processes, the entry and exit from critical section is
extremely efficient when Dekker's algorithm is used.
Many modern CPUs execute their instructions in an out-of-order fashion. This algorithm
won't work on SMP machines equipped with these CPUs without the use of memory
barriers.
Additionally, many optimizing compilers can perform transformations that will cause this
algorithm to fail regardless of the platform. In many languages, it is legal for a compiler
to detect that the flag variables f0 and f1 are never accessed in the loop. It can then
remove the writes to those variables from the loop, using a process called Loop-invariant
code motion. It would also be possible for many compilers to detect that the turn variable
is never modified by the inner loop, and perform a similar transformation, resulting in a
potential infinite loop. If either of these transformations is performed, the algorithm will
fail, regardless of architecture.
To alleviate this problem, the shared variables should be marked as modifiable outside
the scope of the currently executing context. For example, in Java one would declare
these variables as 'volatile'. Note however that the C/C++ "volatile" attribute only guarantees
that the compiler generates code with the proper ordering; it does not include the
necessary memory barriers to guarantee in-order execution of that code.
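As a runnable sketch (not a canonical formulation), two Python threads can run Dekker's entry and exit protocols around a shared counter. In CPython the global interpreter lock makes these plain reads and writes effectively sequentially consistent, so the reordering and barrier caveats above do not bite in this particular sketch; the iteration count and switch-interval tuning are our additions:

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so busy-waits stay short

flag = [False, False]         # flag[i]: process i wants to enter
turn = 0
counter = 0                   # shared state updated only in the critical section
ITERATIONS = 2000

def worker(me):
    global turn, counter
    other = 1 - me
    for _ in range(ITERATIONS):
        flag[me] = True
        while flag[other]:
            if turn != me:
                flag[me] = False      # back off ...
                while turn != me:
                    pass              # ... until it is our turn
                flag[me] = True
        counter += 1                  # critical section
        turn = other                  # exit protocol
        flag[me] = False

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 4000 -- no increments were lost
```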
PETERSON'S ALGORITHM
The algorithm

flag[0] := 0
flag[1] := 0
turn := 0

P0:
    flag[0] := 1
    turn := 1                             // give priority to P1
    while flag[1] = 1 and turn = 1 { }    // busy wait
    // critical section
    flag[0] := 0

P1: (symmetric: exchange the indices 0 and 1)
    flag[1] := 1
    turn := 0
    while flag[0] = 1 and turn = 0 { }
    // critical section
    flag[1] := 0

The algorithm uses two variables, flag and turn. A flag value of 1 indicates that the
process wants to enter the critical section. The variable turn holds the ID of the process
whose turn it is. Entrance to the critical section is granted for process P0 if P1 does not
want to enter its critical section or if P1 has given priority to P0 by setting turn to 0.
The algorithm satisfies the three essential criteria of mutual exclusion:
Mutual exclusion
P0 and P1 can never be in the critical section at the same time: If P0 is in its critical
section, then flag[0] is 1 and either flag[1] is 0 or turn is 0. In both cases, P1 cannot be in
its critical section.
Progress requirement
This criterion says that no process which is not in its critical section is allowed to block a
process which wants to enter the critical section. There is no strict alternation between P0
and P1.
Bounded waiting
A process will not wait longer than one turn for entrance to the critical section: After
giving priority to the other process, this process will run to completion and set its flag to
0, thereby allowing the other process to enter the critical section.
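Peterson's algorithm can be exercised the same way as Dekker's, with two Python threads incrementing a shared counter. Again this is only a sketch: CPython's GIL supplies the memory ordering that real SMP hardware would need barriers for, and the iteration count is arbitrary:

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # frequent switches keep the busy-waits short

flag = [False, False]
turn = 0
counter = 0
ITERATIONS = 2000

def worker(me):
    global turn, counter
    other = 1 - me
    for _ in range(ITERATIONS):
        flag[me] = True
        turn = other                          # give priority to the other process
        while flag[other] and turn == other:
            pass                              # busy wait
        counter += 1                          # critical section
        flag[me] = False                      # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 4000 -- mutual exclusion held
```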
When working at the hardware level, Peterson's algorithm is typically not needed to
achieve atomic access. Some processors have special instructions, like test-and-set or
compare-and-swap that, by locking the memory bus, can be used to provide mutual
exclusion in SMP systems.
Most modern CPUs reorder memory accesses to improve execution efficiency. Such
processors invariably give some way to force ordering in a stream of memory accesses,
typically through a memory barrier instruction. Implementation of Peterson's and related
algorithms on processors which reorder memory accesses generally require use of such
operations to work correctly to keep sequential operations from happening in an incorrect
order. Note that reordering of memory accesses can happen even on processors that don't
reorder instructions (such as the PowerPC processor on the Xbox 360).
Most such CPU's also have some sort of guaranteed atomic operation, such as XCHG on
x86 processors and Load-Link/Store-Conditional on Alpha, MIPS, PowerPC, and other
architectures. These instructions are intended to provide a way to build synchronization
primitives more efficiently than can be done with pure shared memory approaches.
THEORY:-
The R-W problem is another classic problem for which design of synchronization and
concurrency mechanisms can be tested. The producer/consumer is another such problem;
the dining philosophers is another.
Definition
• There is a data area that is shared among a number of processes.
• Any number of readers may simultaneously read from the data area.
• Only one writer at a time may write to the data area.
• If a writer is writing to the data area, no reader may read it.
• If there is at least one reader reading the data area, no writer may write to it.
• Readers only read and writers only write
• A process that reads and writes to a data area must be considered a writer
(consider producer or consumer)
int readcount = 0;
semaphore wsem = 1; // held by a writer, or by the group of active readers
semaphore x = 1;    // protects readcount

void main(){
    int p = fork();
    if (p) reader(); // assume multiple instances
    else writer();   // assume multiple instances
}

void reader(){
    while(1){
        wait(x);
        readcount++;
        if (readcount == 1)
            wait(wsem);   // first reader locks out the writers
        signal(x);
        doReading();
        wait(x);
        readcount--;
        if (readcount == 0)
            signal(wsem); // last reader lets the writers back in
        signal(x);
    }
}

void writer(){
    while(1){
        wait(wsem);
        doWriting();
        signal(wsem);
    }
}
Once readers have gained control, a steady flow of reader processes can starve the writer
processes. It would be preferable that, when a writer needs access, subsequent read
requests be held up until after the writing is done.
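The semaphore solution above translates directly to Python threads. The variables writing and violations below are our own instrumentation for checking the invariant (no reader overlaps a writer); they are not part of the classic solution:

```python
import threading

readcount = 0
x = threading.Semaphore(1)      # protects readcount
wsem = threading.Semaphore(1)   # held by a writer, or by the reader group
writing = 0                     # writers currently inside doWriting()
violations = 0                  # times a reader saw a writer active

def reader():
    global readcount, violations
    for _ in range(200):
        x.acquire()
        readcount += 1
        if readcount == 1:
            wsem.acquire()      # first reader locks out the writers
        x.release()
        if writing:             # doReading(): the invariant must hold here
            violations += 1
        x.acquire()
        readcount -= 1
        if readcount == 0:
            wsem.release()      # last reader lets the writers back in
        x.release()

def writer():
    global writing
    for _ in range(200):
        wsem.acquire()
        writing += 1            # doWriting()
        writing -= 1
        wsem.release()

threads = ([threading.Thread(target=reader) for _ in range(3)]
           + [threading.Thread(target=writer) for _ in range(2)])
for t in threads:
    t.start()
for t in threads:
    t.join()
print(violations)   # 0 -- no reader ever overlapped a writer
```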
THEORY:-
The dining philosophers problem is sometimes explained using rice and chopsticks rather
than spaghetti and forks, as it is more intuitively obvious that two chopsticks are required
to begin eating.
The philosophers never speak to each other, which creates a dangerous possibility of
deadlock when every philosopher holds a left fork and waits perpetually for a right fork
(or vice versa).
Originally used as a means of illustrating the problem of deadlock, this system reaches
deadlock when there is a 'cycle of ungranted requests'. In this case philosopher P1 waits
for the fork grabbed by philosopher P2 who is waiting for the fork of philosopher P3 and
so forth, making a circular chain.
Starvation (and the pun was intended in the original problem description) might also
occur independently of deadlock if a philosopher is unable to acquire both forks due to a
timing issue. For example there might be a rule that the philosophers put down a fork
after waiting five minutes for the other fork to become available and wait a further five
minutes before making their next attempt. This scheme eliminates the possibility of
deadlock (the system can always advance to a different state) but still suffers from the
problem of livelock. If all five philosophers appear in the dining room at exactly the same
time and each picks up their left fork at the same time the philosophers will wait five
minutes until they all put their forks down and then wait a further five minutes before
they all pick them up again.
The lack of available forks is an analogy to the lacking of shared resources in real
computer programming, a situation known as concurrency. Locking a resource is a
common technique to ensure the resource is accessed by only one program or chunk of
code at a time. When the resource a program is interested in is already locked by another
one, the program waits until it is unlocked. When several programs are involved in
locking resources, deadlock might happen, depending on the circumstances. For example,
one program needs two files to process. When two such programs lock one file each, both
programs wait for the other one to unlock the other file, which will never happen.
In general the dining philosophers problem is a generic and abstract problem used for
explaining various issues which arise in problems which hold mutual exclusion as a core
idea. For example, as in the above case deadlock/livelock is well explained with the
dining philosophers problem.
Solutions
Resource hierarchy solution
While the resource hierarchy solution avoids deadlocks, it is not always practical,
especially when the list of required resources is not completely known in advance. For
example, if a unit of work holds resources 3 and 5 and then determines it needs resource
2, it must release 5, then 3 before acquiring 2, and then it must re-acquire 3 and 5 in that
order. Computer programs that access large numbers of database records would not run
efficiently if they were required to release all higher-numbered records before accessing a
new record, making the method impractical for that purpose.
This is often the most practical solution for real world Computer Science problems; by
assigning a constant hierarchy of locks, and by enforcing the ordering of obtaining the
locks this problem can be avoided.
Chandy/Misra solution
In 1984, K. Mani Chandy and J. Misra proposed a different solution to the dining
philosophers problem to allow for arbitrary agents (numbered P1, ..., Pn) to contend for an
arbitrary number of resources, unlike Dijkstra's solution. It is also completely distributed
and requires no central authority after initialization.
1. For every pair of philosophers contending for a resource, create a fork and give it
to the philosopher with the lower ID. Each fork can either be dirty or clean.
Initially, all forks are dirty.
2. When a philosopher wants to use a set of resources (i.e. eat), he must obtain the
forks from his contending neighbors. For all such forks he does not have, he sends
a request message.
3. When a philosopher with a fork receives a request message, he keeps the fork if it
is clean, but gives it up when it is dirty. If he sends the fork over, he cleans the
fork before doing so.
4. After a philosopher is done eating, all his forks become dirty. If another
philosopher had previously requested one of the forks, he cleans the fork and
sends it.
This solution also allows for a large degree of concurrency, and will solve an arbitrarily
large problem.
Algorithm:
One can consider the Dining Philosophers to be a deadlock problem, and can apply
deadlock prevention to it by numbering the forks and always acquiring the lowest
numbered fork first.
phil_state state[N];
semaphore mutex = 1;
semaphore f[N]; /* one per fork, all initialized to 1 */

void get_forks(int i) {
    int max, min;
    if (LEFT(i) < RIGHT(i)) { min = LEFT(i); max = RIGHT(i); }
    else { min = RIGHT(i); max = LEFT(i); }
    P(f[min]); /* always acquire the lowest-numbered fork first */
    P(f[max]);
}

void put_forks(int i) {
    V(f[LEFT(i)]);
    V(f[RIGHT(i)]);
}

void philosopher(int process) {
    while(1) {
        think();
        get_forks(process);
        eat();
        put_forks(process);
    }
}
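The numbered-fork (deadlock prevention) scheme is easy to demonstrate with threads and locks. The philosopher count and meal totals below are our choices for the sketch; because every philosopher grabs the lowest-numbered fork first, the run always completes without deadlock:

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]   # one lock per fork
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    first, second = min(left, right), max(left, right)
    for _ in range(100):
        with forks[first]:        # always grab the lowest-numbered fork first
            with forks[second]:
                meals[i] += 1     # eat()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)   # [100, 100, 100, 100, 100] -- everyone ate, no deadlock
```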
THEORY:-
Deadlock Definition
A set of processes is deadlocked if each process in the set is waiting for an event
that only another process in the set can cause (including itself).
Banker's algorithm
The Banker's algorithm is a resource allocation & deadlock avoidance algorithm
developed by Edsger Dijkstra that tests for safety by simulating the allocation of pre-
determined maximum possible amounts of all resources, and then makes a "safe-state"
check to test for possible deadlock conditions for all other pending activities, before
deciding whether allocation should be allowed to continue.
The algorithm was developed in the design process for the THE operating system and
originally described (in Dutch) in EWD108[1]. The name is by analogy with the way that
bankers account for liquidity constraints.
The Banker's algorithm is run by the operating system whenever a process requests
resources.[2] The algorithm prevents deadlock by denying or postponing the request if it
determines that accepting the request could put the system in an unsafe state (one where
deadlock could occur).
Resources
Some of the resources that are tracked in real systems are memory, semaphores and
interface access.
Example
Assuming that the system distinguishes between four types of resources, (A, B, C and D),
the following is an example of how those resources could be distributed. Note that this
example shows the system at an instant before a new request for resources arrives. Also,
the types and number of resources are abstracted. Real systems, for example, would deal
with much larger quantities of each resource.
A state (as in the above example) is considered safe if it is possible for all processes to
finish executing (terminate). Since the system cannot know when a process will
terminate, or how many resources it will have requested by then, the system assumes that
all processes will eventually attempt to acquire their stated maximum resources and
terminate soon afterward. This is a reasonable assumption in most cases since the system
is not particularly concerned with how long each process runs (at least not from a
deadlock avoidance perspective). Also, if a process terminates without acquiring its
maximum resources, it only makes it easier on the system.
Given that assumption, the algorithm determines if a state is safe by trying to find a
hypothetical set of requests by the processes that would allow each to acquire its
maximum resources and then terminate (returning its resources to the system). Any state
where no such set exists is an unsafe state.
Pseudo-Code
P  - set of processes
Mp - maximum resource requirement of process p
Cp - resources currently allocated to process p
A  - resources currently available

while P is not empty:
    found := false
    for each p in P:
        if Mp − Cp ≤ A:        // p could acquire its maximum and finish
            A := A + Cp        // p terminates and returns its resources
            P := P − {p}
            found := true
    if not found:
        return UNSAFE
return SAFE
Example
We can show that the state given in the previous example is a safe state by showing that it
is possible for each process to acquire its maximum resources and then terminate.
Note that these requests and acquisitions are hypothetical. The algorithm generates them
to check the safety of the state, but no resources are actually given and no processes
actually terminate. Also note that the order in which these requests are generated – if
several can be fulfilled – doesn't matter, because all hypothetical requests let a process
terminate, thereby increasing the system's free resources.
For an example of an unsafe state, consider what would happen if process 2 were holding
1 more unit of resource B at the beginning.
Requests
When the system receives a request for resources, it runs the Banker's algorithm to
determine if it is safe to grant the request. The algorithm is fairly straight forward once
the distinction between safe and unsafe states is understood.
Example
Continuing the previous examples, assume process 3 requests 2 units of resource C.
1. Is this state safe? Assume P1, P2, and P3 request more of resources B and C.
o P1 is unable to acquire enough B resources
o P2 is unable to acquire enough B resources
o P3 is unable to acquire enough C resources
o No process can acquire enough resources to terminate, so this state is not
safe
2. Since the state is unsafe, deny the request
Note that in this example, no process was able to terminate. It is possible that some
processes will be able to terminate, but not all of them. That would still be an unsafe
state.
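The safety check described above can be sketched directly. The is_safe helper and the resource figures below are our own illustrative values (the example tables referred to in the text are not reproduced here), chosen so that a safe sequence exists:

```python
def is_safe(available, allocation, maximum):
    """Banker's safety check: True if some order lets every process
    acquire its stated maximum and then terminate."""
    need = [[m - a for m, a in zip(mrow, arow)]
            for mrow, arow in zip(maximum, allocation)]
    work = list(available)
    finished = [False] * len(allocation)
    while True:
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # process i can finish; it returns its allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                break
        else:
            return all(finished)   # no candidate left: safe iff all finished

# Illustrative figures (3 resource types, 5 processes):
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe(available, allocation, maximum))   # True
```

To grant a request, the system would tentatively subtract it from available (and add it to the process's allocation), run is_safe, and roll the change back if the resulting state is unsafe.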
REFERENCE BOOK:- Silberschatz A., Galvin P., "Operating System
Principles", Wiley.
THEORY:-
Linux commands
This guide will make you familiar with basic GNU/Linux shell commands. It is just an
introduction to complement Ubuntu's graphical tools.
Note that Linux is case sensitive. User, user, and USER are all different to Linux.
Starting a Terminal
To open a Terminal, use your desktop's application menu (in GNOME: Applications →
Accessories → Terminal).
cd
The cd command changes your current directory. To navigate to the root directory, type:
cd /
To navigate to your home directory, type:
cd
or
cd ~
To navigate up one directory level, type:
cd ..
To navigate to the previous directory (or back), type:
cd -
To navigate through multiple levels of directory at once, specify the full directory path
that you want to go to. For example, type:
cd /var/www
to go directly to the www subdirectory of /var. As another example, type:
cd ~/Desktop
to move you to the Desktop subdirectory inside your home directory.
pwd
pwd: The pwd command will show you which directory you're located in (pwd stands for
“print working directory”). For example, typing
pwd
in the Desktop directory, will show ~/Desktop.
GNOME Terminal also displays this information in the title bar of its window.
ls
The ls command shows you the files in your current directory. Used with certain options,
you can see the sizes of files, when files were made, and the permissions of files. For example,
typing
ls ~
will show you the files that are in your home directory.
cat
The cat command can create small files and display file contents. To create a file, type:
cat > foo.txt
then enter the file's contents, for example:
This is a test.
Hello world!
and press CTRL+D to save the file. To display the file's contents, type:
cat foo.txt
cp
The cp command makes a copy of a file for you. For example, type:
cp file foo
to make an exact copy of file and name it foo; the file file will still be there.
mv
The mv command moves a file to a different location or will rename a file. Examples are
as follows:
mv file foo
will rename the file file to foo.
mv foo ~/Desktop
will move the file foo to your Desktop directory but will not rename it. You must specify
a new file name to rename a file.
If you are using mv with sudo you will not be able to use the ~ shortcut, but will have to
use the full pathnames to your files. This is because when you are working as root, ~ will
refer to the root account's home directory, not your own.
rm
Use the rm command to remove or delete a file in your directory. By itself it does not
remove directories; use rm -r to delete a directory and its contents.
mkdir
The mkdir command will allow you to create directories. For example, typing:
mkdir music
will create a music directory in the current directory.
df
The df command displays filesystem disk space usage. Typing
df -h
will give the information in megabytes (M) and gigabytes (G) instead of blocks (-h means
"human-readable").
free
The free command displays the amount of free and used memory in the system.
free -m
will give the information using megabytes, which is probably most useful for current
computers.
top
The top command displays information on your GNU/Linux system, running processes
and system resources, including CPU, RAM & swap usage and total number of tasks
being run. To exit top, press q.
uname
The uname command with the -a option, prints all system information, including machine
name, kernel name & version, and a few other details. Most useful for checking which
kernel you're using.
lsb_release
The lsb_release command with the -a option prints version information for the Linux
release you're running. For example, typing:
lsb_release -a
will print the distributor ID, description, release number, and codename of your release.
ifconfig
The ifconfig command reports on your system's network interfaces.
addgroup newgroup
The above command will create a new group called newgroup.
adduser newuser
The above command will create a new user called newuser.
To assign a password for the new user use the passwd command:
passwd newuser
Options
The default behavior for a command may usually be modified by adding options to the
command. The ls command, for example, has a -s option so that ls -s will include file
sizes in the listing. There is also a -h option to show those sizes in a "human readable"
format.
ls -sh
is exactly the same command as
ls -s -h
Most options have a long version, prefixed with two dashes instead of one, so even
ls --size --human-readable
is the same command.
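To see that the short and long forms really are the same command, compare their output directly. This sketch assumes GNU ls, where --size and --human-readable exist:

```shell
cd "$(mktemp -d)"
printf 'hello\n' > a.txt

short_form=$(ls -s -h a.txt)
long_form=$(ls --size --human-readable a.txt)

# Identical output confirms the two spellings are equivalent.
[ "$short_form" = "$long_form" ] && echo "same output"
```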
Virtually all commands understand the -h (or --help) option, which will produce a short
usage description of the command and its options, then exit back to the command
prompt. Type
man -h
or
man --help
to see this in action.
Every command and nearly every application in Linux will have a man (manual) file, so
finding them is as simple as typing man command to bring up a longer manual entry for
the specified command. For example,
man mv
will bring up the mv (move) manual.
Move up and down the man file with the arrow keys, and quit back to the command
prompt with q.
man man
will bring up the manual entry for the man command, which is a good place to start.
man intro
is especially useful - it displays the "Introduction to user commands" which is a well-
written, fairly brief introduction to the Linux command line.
There are also info pages, which are generally more in-depth than man pages. Try
info info
for the introduction to info pages.
man -k foo will search the man files for foo. Try
man -k nautilus
to see how this works.
man -f foo searches only the titles of your system's man files. For example, try
man -f gnome
Save on typing
Up Arrow or Ctrl+p
Scrolls through the commands you've entered previously.
Enter
When you have the command you want.
Tab
A very useful feature. It autocompletes any commands or filenames, if there's only one
option, or else gives you a list of options.
When the cursor is where you want it in the line, typing inserts text; it doesn't overtype
what's already there.
Ctrl+a Home
Moves the cursor to the start of a line.
Ctrl+e End
Moves the cursor to the end of a line.
Ctrl+b
Moves the cursor back one character.
Ctrl+k
Deletes from the current cursor position to the end of the line.
Ctrl+u
Deletes the whole of the current line.
Ctrl+w
Deletes the word before the cursor.
For more detailed tutorials on the Linux command line, please see:
CommandlineHowto - longer and more complete than this basic guide, but still
unfinished.
grep
What Is grep?
Grep is a command line tool that allows you to find a string in a file or stream. It can be
used with regular expressions to be more flexible at finding strings.
% grep 'STRING' filename
This is OK, but it does not show the true power of grep. First, this only looks at one file.
A cool example of using grep with multiple files would be to find all files in a directory
that contain the name of a person. This can be easily accomplished using grep in the
following way:
% grep 'Nicolas Kassis' *
Notice the use of single quotes; these are not essential, but in this example they were
required since the name contains a space. Double quotes could also have been used here.
^
Denotes the beginning of a line
$
Denotes the end of a line
.
Matches any one character
*
Matches 0 or more of the previous character
.*
Matches any number or type of characters
[]
Matches one character from those listed inside the square brackets
[^]
Matches one character not listed inside the brackets
\<, \>
Denote the beginning and end (respectively) of a word
% grep "\<[A-Za-z].*" file
This will search for any word which begins with a letter, upper or lower case.
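A small sketch of those patterns on a throwaway file (the word list is invented for the example):

```shell
cd "$(mktemp -d)"
printf 'alpha\nBravo\n42nd street\n' > words.txt

grep '^[A-Za-z]' words.txt    # lines beginning with a letter: alpha, Bravo
grep '[0-9]' words.txt        # lines containing a digit: 42nd street
grep -c '^B' words.txt        # -c prints the number of matching lines
```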
BasicCommands
http://www.gnu.org/software/grep/doc/
http://en.wikipedia.org/wiki/Grep
THEORY:-
Shell Programming
The shell, or command interpreter, is a program started after the opening of the user
session by the login process. The shell stays active until the occurrence of the
<EOT> character, which signals a request for termination of execution and informs
the operating system kernel of that fact.
Each user obtains their own separate instance of sh. The sh program prints a prompt
on the screen showing its readiness to read the next command.
The shell interpreter works based on the following scenario:
1. displays a prompt,
2. waits for text entered from the keyboard,
3. analyses the command line and finds a command,
4. submits the command to the kernel for execution,
5. accepts an answer from the kernel and again waits for user input.
Commands
Submitting a command
$ [ VAR=value ... ] command_name [ arguments ... ]
$ echo $PATH
Built-in commands
$ PATH=$PATH:/usr/local/bin
$ export PATH
the set built-in without any parameters prints values of all variables,
the export built-in without any parameters prints values of all exported
environmental variables.
Special Parameters
Special parameters may only be referenced; direct assignment to them is not allowed.
$0 name of the command
$1 first argument of the script/function
$2 second argument of the script/function
$9 ninth argument of the script/function
$* all positional arguments: "$*" = "$1 $2 .."
$@ list of all positional arguments, separately quoted: "$@" = "$1" "$2" ..
$# the number of positional arguments,
$? exit status of the most recently executed foreground command,
$! PID of the most recently started background command,
$$ PID of the current shell,
$1-$9 may also be set by the set command.
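A sketch showing several special parameters in action; the script name params.sh and its arguments are invented for the demonstration:

```shell
cd "$(mktemp -d)"

# Write a small script that reports its own special parameters.
cat > params.sh <<'EOF'
echo "script: $0"
echo "first: $1"
echo "count: $#"
echo "all: $*"
EOF

sh params.sh one two three
echo "status: $?"    # exit status of the script just run
```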
Metacharacters
During resolution of file names and grouping of commands into bigger sets, special
characters called metacharacters are used.
* any string without the "/" character,
? any single character,
[ ] one character from the given set,
[...-...] like [ ], with a given range from the first to the last,
[!...-...] any character except those within the given range,
# start of a comment,
\ escape character, preserves the literal value of the following
character,
$ the value of the variable named by the following string,
; command separator,
` ` string in backquotes executed as a command, with the stdout
of the execution as the result of that quotation,
' ' preserves the literal value of each character within the quotes,
" " preserves the literal value of all characters within the quotes,
with the exception of $, `, and \
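The three quoting forms can be sketched side by side (variable name x invented for the example):

```shell
x=value

echo "double: $x"          # double quotes: $ still expands
echo 'single: $x'          # single quotes: everything is literal
echo "year: `date +%Y`"    # backquotes: command's stdout is substituted
echo escaped: \$x          # backslash preserves the next character
```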
Command interpretation
Steps in command interpretation under the sh shell:
1. entering a line of characters,
2. division of the line into a sequence of words, based on the IFS value,
3. substitution 1: substitution of $name strings with variables' values,
$ b=/usr/user
$ ls -l prog.* > ${b}3
4. substitution 2: substitution of the metacharacters * ? [ ] into appropriate file names
in the current directory,
5. substitution 3: interpretation of backquoted strings, ` `, as commands and
their execution.
Grouping
the special argument -- marks the end of options,
commands may be grouped with brackets:
round brackets, ( commands-sequence; ) group commands into a separate
sub-process; the group may be run in the background (&),
curly brackets, { commands-sequence; } just group commands,
a command's end is recognized by: <NL> ; &
Shell Scripts
Commands grouped together in a common text file may be executed by:
$ sh [options] file_with_commands [arg ...]
After giving the file execute permission with the chmod command, e.g.:
$ chmod +x file_with_commands
one can submit it as a command without putting sh before the text file name:
$ file_with_commands arg ...
Compound Commands
To control shell script execution there are the following instructions: if,
for, while, until, case.
It is possible to write if in a shorter way:
And-if && (run when the result equals 0)
Or-if || (run when the result differs from 0)
$ cp x y && vi y
$ cp x y || cp z y
Each command execution places the result of execution in the $? variable. The value
"0" means that the execution was successful. A nonzero result means the occurrence of
some error during command execution.
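A runnable sketch of the short-circuit forms, using files in a scratch directory (names invented):

```shell
cd "$(mktemp -d)"
touch x

# Left side succeeds (status 0), so the right side runs.
cp x y && echo "copy ok, so this ran"

# Left side fails (nonzero status), so the fallback runs.
cp missing y 2>/dev/null || echo "copy failed, fallback ran"
```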
'if' Instruction
the standard structure of the compound
if if_list
then then_list
[ elif elif_list; then then_list ] ...
[ else else_list ]
fi
the if_list is executed. If its exit status is zero, the then_list is
executed. Otherwise, each elif_list is executed in turn, and if its exit
status is zero, the corresponding then_list is executed and the
command completes. Otherwise, the else_list is executed, if present.
if cc -c p.c
then
ld p.o
else
echo "compilation error" 1>&2
fi
'case' Instruction
the standard structure of the compound
case word in
pattern1) list1;;
pattern2) list2;;
*) list_default;;
esac
a case command first expands word and tries to match it against each
pattern in turn, using the same matching rules as for path-name
expansion.
an example
case $# in
0) echo 'usage: man name' 1>&2; exit 2;;
*) echo "looking up $1";;
esac
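A complete, runnable sketch of case inside a function (the greet function and its replies are invented for the example):

```shell
greet() {
    case $1 in
        hi|hello) echo "greeting";;    # alternatives separated by |
        bye)      echo "farewell";;
        *)        echo "unknown";;     # default pattern matches anything
    esac
}

greet hello    # matches the hi|hello pattern
greet later    # falls through to the default *)
```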
Loop Instructions
In the sh command interpreter there are three types of loop instructions:
for name [ in word ] ; do list ; done
while list; do list; done
until list; do list; done
the for instruction is executed once for each element of the for_list,
the while instruction's loop is executed while the condition returns a 0 exit
code (while the condition is fulfilled),
the until instruction's loop is executed until the condition finally returns a 0
exit code (the loop is executed while the condition is not fulfilled),
instructions continue and break may be used inside loops
#!/bin/sh
for i in /tmp /usr/tmp
do
rm -rf $i/*
done
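The until form, which the text describes but does not illustrate, can be sketched as a countdown:

```shell
#!/bin/sh
i=3
until [ $i -le 0 ]; do      # loop runs while the test is NOT yet true
    echo $i
    i=`expr $i - 1`
done
```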
How to create a shell program file:
1. Create a file with a .sh extension in any editor, e.g. prog1.sh.
2. Enter commands just as they are typed at the $ prompt.
3. Save the file and exit the editor.
4. To execute the shell program, pass the file name (with the .sh extension)
to sh: $ sh prog1.sh
5. The read command is used to get a value into a variable from the keyboard.
6. The echo command is used to display characters on the screen.
syntax: echo "HELLO"
displays: HELLO (on screen).
Different examples
$ cat file.dat | while read x y z
do
echo $x $y $z
done
#!/bin/sh
i=1
while [ $i -le 5 ]; do
echo $i
i=`expr $i + 1`
done
$ who -r
. run-level 2 Aug 21 16:58 2 0 S
$ set `who -r`
$ echo $6
16:58
THEORY:-
Banker's algorithm
The Banker's algorithm is a resource allocation and deadlock avoidance algorithm
developed by Edsger Dijkstra that tests for safety by simulating the allocation of
predetermined maximum possible amounts of all resources, and then makes a "safe-state"
check to test for possible deadlock conditions for all other pending activities, before
deciding whether allocation should be allowed to continue.
The algorithm was developed in the design process for the THE operating system and
originally described (in Dutch) in EWD108[1]. The name is by analogy with the way that
bankers account for liquidity constraints.
Algorithm
The Banker's algorithm is run by the operating system whenever a process requests
resources. The algorithm prevents deadlock by denying or postponing the request if it
determines that accepting the request could put the system in an unsafe state (one where
deadlock could occur).
Resources
Some of the resources that are tracked in real systems are memory, semaphores and
interface access.
Example
Assuming that the system distinguishes between four types of resources, (A, B, C and D),
the following is an example of how those resources could be distributed. Note that this
example shows the system at an instant before a new request for resources arrives. Also,
the types and number of resources are abstracted. Real systems, for example, would deal
with much larger quantities of each resource.
Banker's algorithm (safe/unsafe states): A state is said to be safe if there exists a
sequence of other states that leads to all the customers getting loans up to their credit limits
(all the processes getting all their resources and terminating). An unsafe state does not have to
lead to deadlock, since a customer might not need the entire credit line available, but the
banker cannot count on this behaviour.
A state (as in the above example) is considered safe if it is possible for all processes to
finish executing (terminate). Since the system cannot know when a process will
terminate, or how many resources it will have requested by then, the system assumes that
all processes will eventually attempt to acquire their stated maximum resources and
terminate soon afterward. This is a reasonable assumption in most cases since the system
is not particularly concerned with how long each process runs (at least not from a
deadlock avoidance perspective). Also, if a process terminates without acquiring its
maximum resources, it only makes it easier on the system.
Given that assumption, the algorithm determines if a state is safe by trying to find a
hypothetical set of requests by the processes that would allow each to acquire its
maximum resources and then terminate (returning its resources to the system). Any state
where no such set exists is an unsafe state.
Pseudo-Code
P - set of processes
Mp - maximum demand of process p
Cp - current allocation of process p
A - currently available resources
while (P != ∅) {
    found = FALSE;
    foreach (p ∈ P) {
        if (Mp − Cp ≤ A) {
            /* p can obtain all it needs. */
            /* assume it does so, terminates, and */
            /* releases what it already has. */
            A = A + Cp;
            P = P − {p};
            found = TRUE;
        }
    }
    if (! found) return FAIL;
}
return OK;
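The pseudo-code above can be sketched in plain sh for the simplified case of a single resource type; the three processes and their figures below are invented for the demonstration:

```shell
#!/bin/sh
# Safety check, single resource type, three processes (invented figures).
avail=3
alloc0=1 max0=4      # P0 holds 1 unit, may ask for up to 4
alloc1=2 max1=3
alloc2=3 max2=9
done0=0 done1=0 done2=0
finished=0

progress=1
while [ $progress -eq 1 ]; do
    progress=0
    for p in 0 1 2; do
        eval "d=\$done$p; a=\$alloc$p; m=\$max$p"
        # p can finish if its remaining need (max - alloc) fits in avail.
        if [ $d -eq 0 ] && [ `expr $m - $a` -le $avail ]; then
            # Assume p obtains its maximum, terminates, and
            # releases everything it holds back to the pool.
            avail=`expr $avail + $a`
            eval "done$p=1"
            finished=`expr $finished + 1`
            progress=1
        fi
    done
done

if [ $finished -eq 3 ]; then echo SAFE; else echo UNSAFE; fi
```

Here P0 can finish first (needs 3, 3 available), freeing its unit for P1 and then P2, so every process terminates and the state is safe.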
Example
We can show that the state given in the previous example is a safe state by showing that it
is possible for each process to acquire its maximum resources and then terminate.
Note that these requests and acquisitions are hypothetical. The algorithm generates them
to check the safety of the state, but no resources are actually given and no processes
actually terminate. Also note that the order in which these requests are generated – if
several can be fulfilled – doesn't matter, because all hypothetical requests let a process
terminate, thereby increasing the system's free resources.
For an example of an unsafe state, consider what would happen if process 2 were holding
1 more unit of resource B at the beginning.
Requests
When the system receives a request for resources, it runs the Banker's algorithm to
determine if it is safe to grant the request. The algorithm is fairly straightforward once
the distinction between safe and unsafe states is understood.
Example
Continuing the previous examples, assume process 3 requests 2 units of resource C.
1. Is this state safe? Assume P1, P2, and P3 each request more of resources B and C.
o P1 is unable to acquire enough B resources
o P2 is unable to acquire enough B resources
o P3 is unable to acquire enough C resources
o No process can acquire enough resources to terminate, so this state is not
safe
2. Since the state is unsafe, deny the request